Ultimate resolution limits of speckle-based compressive imaging

Abstract

Compressive imaging using sparsity constraints is a very promising field of microscopy that provides a dramatic enhancement of the spatial resolution beyond the Abbe diffraction limit. Moreover, it simultaneously overcomes the Nyquist limit by reconstructing an N-pixel image from fewer than N single-point measurements. Here we present the fundamental resolution limits of noiseless compressive imaging via sparsity constraints, speckle illumination and single-pixel detection. We address the experimental setup that uses randomly generated speckle patterns (in a scattering medium or a multimode fiber). The optimal number of measurements, the ultimate spatial resolution limit and the surprisingly important role of discretization are demonstrated by theoretical analysis and numerical simulations. We show that, in contrast to conventional microscopy, oversampling may decrease the resolution and reconstruction quality of compressive imaging.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Irrespective of the microscope system, a conventional far-field optical image is the convolution of the object function with the point-spread function of the microscope, distorted by additive noise. In the Fourier domain, this corresponds to multiplying the object’s spectrum by the transfer function of the optical system. For far-field imaging, diffraction acts as a low-pass filter. The Abbe limit [1] for an optical system with numerical aperture NA and wavelength λ is generally defined by a cut-off frequency νcutoff = 2NA/λ. The loss of information carried by high spatial frequency components results in a loss of spatial resolution and ‘blurred’ images. The Abbe diffraction limit has long been considered an inevitable limit of the spatial resolution achievable by a conventional optical system.

Optics and chemistry joined efforts to revolutionize fluorescence microscopy [2] by proposing many methods for super-resolution microscopy [3–6]. Conceptually, imaging with a more than 2-fold increase in spatial resolution can be achieved via two main concepts. The general idea of the first concept is to use fluorophores that make themselves distinguishable for a short period of time, preventing molecules located within the same diffraction region from being detected together [5]. A second concept that does not require any specific fluorescent labelling is based on bandwidth extrapolation or numerical reconstruction of the full spatial frequency range [7]. Computational super-resolution can be achieved in fluorescent Fourier light field microscopy [8]. The recently emerging approach of sparsity-based super-resolution imaging is related to well-known compressive sensing techniques [9–12]. Sparsity-based methods of super-resolution image reconstruction work for both coherent [13–15] and incoherent [16,17] light as well as scanning electron microscopy [18]. Cell imaging beyond the diffraction limit using sparse deconvolution has been demonstrated [19]. Sparsity-based nanoscale imaging paves the way towards the ultimate goal of optical microscopy – studying subcellular structures and molecular details in vivo in bulk tissue and whole-animal contexts – by implementing super-resolution imaging into an ultra-thin endoscopic probe [20,21].

The Nyquist-Shannon sampling theorem states that, in order to properly reproduce a signal, it should be periodically sampled at a rate at least twice the highest frequency present in the signal. With images, the sampling frequency is related to the minimum structure size: the imaging pixel size (the analog of the sampling rate) should be about half the size of the smallest feature one wishes to resolve. However, in contrast to conventional signal sampling, the resolution of far-field optical imaging is not determined by the Nyquist limit alone. In order to sample the image adequately, we also need to consider the optical transfer function and the resolving power of the optical system, i.e. the Abbe diffraction limit.

To summarize, the Abbe and Nyquist limits affect any imaging procedure, requiring a pixel size approximately half the diffraction limit. Oversampling does not add any useful information and does not provide better resolution, while increasing acquisition time. This conclusion is valid for conventional far-field imaging as well as for some super-resolution techniques, such as STED. The concept of super-resolution optical imaging using the sparsity constraint must be considered differently, because it allows both the Nyquist and the Abbe limits to be overcome simultaneously.

Here we address the question of how the Nyquist and the Abbe limits affect each other within the super-resolution sparsity-based compressive imaging approach. We simulate compressive imaging with speckle illumination and single-point detection and use two reconstruction algorithms for performance comparison. We investigate the optimal number of measurements (illumination patterns) and the ultimate super-resolution limit as a function of sparsity. We address the role of discretization and demonstrate that, in contrast to conventional microscopy, oversampling decreases the resolution and image quality of compressive imaging.

2. Theoretical model

We focus on a single-pixel detection approach to super-resolution microscopy using the sparsity constraint. In this modality of incoherent imaging, the sample is illuminated with a structured (speckled) beam and the signal is collected with a point detector, as sketched in Fig. 1(a). For each speckle pattern, the total response from the sample is detected. We note that the term ‘compressive imaging’ is often used to describe a different technique, related to the idea that the object is not sparse in the canonical basis, but rather after some transform (e.g. in a DCT basis). Here we present an approach that is also often referred to as compressive ghost imaging or single-pixel imaging and has many applications in optical microscopy [22–24], bioimaging [25–27] and endo-microscopy [20,21].

Fig. 1. (a) Principles of the optical setup for super-resolution imaging via sparsity constraint, speckle illumination and single-point detection. (b) Schematic representation of the imaging procedure. Measurement matrix A, sample vector x and measurement vector b are illustrations. (c) The one-dimensional spatial frequency spectra (kx components) of the speckle illumination pattern (the blue line) and the sample (the red line). The cutoff frequency is νcutoff = 2NA/λ.

As a sample, we use a pseudo-random object with no implicit symmetries to make the explanation more succinct. Such a sample is presented in Fig. 1(a) and its spatial Fourier spectrum is shown in Fig. 1(c) as the solid red line. The sample consists of randomly distributed groups of 1 to 5×5 pixels and is set to be K-sparse, meaning that only K pixels are nonzero. The intensity of each group is set randomly. We vary the pixel size from 60 to 500 nm. The field of view (FOV) consists of N by N pixels, varied from 16×16 to 80×80 pixels.

As illumination, we use speckle patterns that can be generated through a multimode fiber or any other scattering medium, for the following reasons. Firstly, random matrices are known to provide reconstruction for any type of sample and sample basis chosen, if enough measurements are taken [9]. Secondly, speckle illumination does not require special wavefront shaping equipment, which makes it easy to use in many different experimental conditions. We simulate the imaging procedure as follows. The sample is illuminated with M artificially generated, fully developed two-dimensional speckle patterns I(x, y), as proposed in [28]. This approach allows us to simulate realistic speckle patterns with a negative-exponential intensity probability density function and a predefined cutoff frequency (diffraction limit). The intensity distribution of each speckle pattern was calculated as

$$I(x,y) = \left| F^{-1}\left[ F\left( r_0 e^{-i\Omega_1} + \sqrt{1 - r_0^2}\, e^{-i\Omega_2} \right) H(k_x,k_y) \right] \right|^2 \tag{1}$$
where F is the two-dimensional Fourier transform; (kx, ky) are the spatial frequencies; r0 = 0.5 is the correlation coefficient factor; Ω1(x, y) and Ω2(x, y) are two-dimensional matrices composed of independent variables uniformly distributed on the interval (−π, π); H(kx, ky) is a circular window function in Fourier space with radius equal to the cutoff frequency defined by νcutoff = 2NA/λ. In our simulations, the number of speckle patterns M varies from 5 to 500. An example of a speckle pattern is presented in Fig. 1(a). We simulate a system with NA = 0.22 and λ = 0.532 μm, corresponding to a diffraction limit ldl = 1.2 μm. The results can be generalized to different NAs. We cut the spatial Fourier components of the speckle patterns as presented in Fig. 1(c) by the solid blue line. Such a sharp cut corresponds to the window function used to compute the speckle patterns and provides a strict diffraction limit.
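
To make the simulation procedure concrete, a minimal MATLAB sketch of one realization of Eq. (1) is given below. It is a sketch under the parameters stated above (grid size, pixel size, NA, λ, r0); the variable names are illustrative.

% Minimal sketch of Eq. (1): one fully developed speckle pattern on an
% N-by-N grid with pixel size l. Parameter values follow the text.
N   = 32;             % grid size (pixels)
l   = 200e-9;         % pixel size (m)
NA  = 0.22;           % numerical aperture
lam = 0.532e-6;       % wavelength (m)
r0  = 0.5;            % correlation coefficient factor
% Spatial-frequency grid and circular window H with radius 2*NA/lambda
k        = (-N/2:N/2-1) / (N*l);               % spatial frequencies (1/m)
[kx, ky] = meshgrid(k, k);
H        = double(hypot(kx, ky) <= 2*NA/lam);  % hard Abbe cutoff
% Two independent random phase screens, uniform on (-pi, pi)
Om1   = (2*rand(N) - 1) * pi;
Om2   = (2*rand(N) - 1) * pi;
field = r0*exp(-1i*Om1) + sqrt(1 - r0^2)*exp(-1i*Om2);
% Low-pass filter in Fourier space and take the intensity, Eq. (1)
I = abs(ifft2(ifftshift(H .* fftshift(fft2(field))))).^2;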

We represent our 2D image as a 1D vector x and the set of M two-dimensional N × N illumination patterns as an illumination matrix A (M × N²), as presented in Fig. 1(b). Typically, M ≪ N², meaning the object x is undersampled and the problem is ill-posed. As a result of the imaging procedure, M intensity values are collected (one per speckle pattern) and form a measurement vector b. Under the general experimental conditions described above and presented in Fig. 1, our problem amounts to the reconstruction of an image from single-point intensity measurements, assuming that the image is sparse. Precise knowledge of the illumination matrix is required, in contrast to recently developed methods of structured illumination microscopy (SIM) with unknown patterns [29]. We used two very common reconstruction algorithms, described in the next subsections. The main goal of a sparse recovery algorithm is to estimate the x that satisfies the underdetermined linear system b = Ax when only b and A are given. No prior information about the sparsity K was assumed during the reconstruction, as may be needed for some algorithms [30].
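
As an illustration of this forward model, the sketch below stacks the M speckle patterns as rows of A and simulates the bucket measurements b = Ax; make_speckle is a hypothetical helper wrapping the Eq. (1) sketch above.

% Sketch of the forward model of Fig. 1(b): M patterns as rows of A (M x N^2).
M = round(0.13 * N^2);                     % e.g. near the optimum found in Section 3.1
A = zeros(M, N^2);
for m = 1:M
    I = make_speckle(N, l, NA, lam, r0);   % hypothetical helper implementing Eq. (1)
    A(m, :) = I(:).';                      % flatten one pattern into one row
end
K = 15;                                    % sparsity of the sample
x = zeros(N^2, 1);                         % K-sparse sample as a 1D vector
x(randperm(N^2, K)) = rand(K, 1);
b = A * x;                                 % M single-point (bucket) measurements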

Generally, the results of compressive sensing reconstruction are specific to many parameters and may vary significantly. We used two methods that can handle large-scale optimization problems [31]: basis pursuit (l1_ls) and gradient projection for sparse reconstruction (GPSR). The two algorithms differ in how the next point in the optimization process is chosen. l1_ls uses an interior-point method with truncated Newton optimization: the next step is taken as an approximate solution of Newton's equation. In GPSR, the optimization steps are proportional to the negative gradient of the function to be minimized [32].

2.1 Basis pursuit (L1_LS)

Basis Pursuit (BP) algorithms try to find a sparse solution for x with the minimum l1-norm that satisfies the underdetermined linear system Ax = b, using convex optimization techniques. One of the available BP solvers is l1_ls [31], whose MATLAB code can be accessed and used for free at https://stanford.edu/~boyd/l1_ls/. The general approach of BP solves an optimization problem of the form:

$$\begin{array}{c} \text{minimize}\;\; \|\boldsymbol{x}\|_1 \\ \text{subject to}\;\; \boldsymbol{Ax} = \boldsymbol{b} \end{array} \tag{2}$$
while l1_ls solves the optimization problem of the form:
$$\text{minimize}\;\; \|\boldsymbol{Ax} - \boldsymbol{b}\|_2^2 + \lambda \|\boldsymbol{x}\|_1 \tag{3}$$
with $\|\boldsymbol{x}\|_1 = \sum_{i=1}^{n} |x_i|$ the l1 norm of x and λ > 0 the regularization parameter. This problem is known as the l1-regularized least squares problem (LSP). Its optimal solution can be calculated numerically by minimizing the quadratic loss $\|\boldsymbol{Ax} - \boldsymbol{b}\|_2^2$, where $\|\boldsymbol{x}\|_2 = \sqrt{\sum_i x_i^2}$ denotes the l2 norm of x. Since generic solvers for the LSP are slow, the l1-regularized LSP is transformed into a convex quadratic problem with linear inequality constraints, and a solution can be found using standard convex optimization methods [31], such as a truncated Newton interior-point method (with log-barrier) using preconditioned conjugate gradients (PCG) to accelerate convergence [33]. It can handle large problems and has been shown to be faster than other common interior-point methods for convex optimization.

Since we are interested in measuring real objects, the reconstructed values should be nonnegative. The l1_ls package offers a solver with nonnegativity constraints (l1_ls_nonneg.m), which solves the optimization problem of the form:

$$\begin{array}{c} \text{minimize}\;\; \|\boldsymbol{Ax} - \boldsymbol{b}\|_2^2 + \lambda \sum_{i=1}^{n} x_i \\ \text{subject to}\;\; x_i \ge 0,\; i = 1, \ldots, n \end{array} \tag{4}$$
resulting in a nonnegative solution for x.

In MATLAB, the algorithm is simply called with the following variables: x = l1_ls_nonneg(A, b). For the presented simulations, the regularization parameter is set to λ = 10⁻³ and kept constant. The maximum number of iterations (MAX_NT_ITER) for the interior-point method is set to 5000 and the maximum number of backtracking line search iterations (MAX_LS_ITER) is set to 500, while all other parameters are kept at their default values.
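
Put together, a call consistent with these settings might look as follows. This is a sketch: the third and fourth arguments (regularization parameter and relative target duality gap) follow the l1_ls package interface and should be checked against its documentation, while MAX_NT_ITER and MAX_LS_ITER are constants edited inside l1_ls_nonneg.m itself rather than call arguments.

% Sketch of the reconstruction step with lambda = 1e-3.
lambda  = 1e-3;                     % l1 regularization parameter
tar_gap = 1e-3;                     % relative target duality gap (assumed)
x_hat   = l1_ls_nonneg(A, b, lambda, tar_gap);
x_img   = reshape(x_hat, N, N);     % back to a 2D image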

2.2 Gradient projection for sparse reconstruction (GPSR)

For comparison, we also use the Gradient Projection for Sparse Reconstruction (GPSR) solver [32], version 5.0. The MATLAB code is available at http://www.lx.it.pt/~mtf/GPSR/. In general, GPSR solves the following convex optimization problem:

$$\begin{array}{c} \text{minimize}\;\; \frac{1}{2}\|\boldsymbol{Ax} - \boldsymbol{b}\|_2^2 + \tau \|\boldsymbol{x}\|_1, \\ \text{subject to}\;\; \boldsymbol{x} \ge 0, \end{array} \tag{5}$$
with $\|\boldsymbol{x}\|_1 = \sum_i |x_i|$ the l1 norm of x and τ a non-negative parameter. To simplify the problem, the vector x is split as x = u − v, with u, v ≥ 0; in this way, the l1 norm is replaced by the linear function $\boldsymbol{1}^{T}(\boldsymbol{u} + \boldsymbol{v})$. The l2-norm term stays unaffected. The problem can now be written as a bound-constrained quadratic program:
$$\begin{array}{c} \text{minimize}\;\; \frac{1}{2}\|\boldsymbol{A}(\boldsymbol{u} - \boldsymbol{v}) - \boldsymbol{b}\|_2^2 + \tau\,\boldsymbol{1}^{T}(\boldsymbol{u} + \boldsymbol{v}) \\ \text{subject to}\;\; \boldsymbol{u} \ge 0,\; \boldsymbol{v} \ge 0 \end{array} \tag{6}$$

GPSR solves the problem by using backtracking line search and making updates in the negative gradient direction [34]. The updates are fed back to the iterative algorithm until the convergence criterion (the objective change falling below the stop tolerance tolA) is met and the function is minimized. The package offers two solvers that solve the same problem with slightly different algorithms. In what follows we use GPSR_Basic.m with the maxiter variable set to 10000 and τ fixed to 0.1. Further tweaking of the solver was applied based on the suggestion in [33], setting the ToleranceA (tolA) variable to 10⁻³. The continuation steps (cont_steps) were set to 5, while all other parameters were kept at their default values.
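
A call consistent with these settings might look as follows; the option names follow the GPSR package conventions and should be verified against its documentation.

% Sketch of the GPSR reconstruction step with tau = 0.1.
tau   = 0.1;
x_hat = GPSR_Basic(b, A, tau, ...
                   'ToleranceA',        1e-3,  ...
                   'MaxiterA',          10000, ...
                   'Continuation',      1,     ...
                   'ContinuationSteps', 5);
x_img = reshape(x_hat, N, N);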

3. Results

3.1 Optimal number of measurements

In this section, we investigate the optimal number of measurements required for sparsity-based super-resolution computational imaging. The concept of compressive sensing allows far fewer measurements than the number of pixels to be resolved (M ≪ N²). Naively, one might expect that the more measurements are performed, the better the image can be reconstructed, which, as we show here, is not necessarily correct. To address the question of the optimal number of measurements for a given set of imaging-system parameters (diffraction limit and pixel size), we perform the following simulations.

As a sample, we used an image that consisted of K (varied from 1 to 80) non-zero pixels randomly distributed over a FOV of N² = 32×32 pixels with a pixel size of 200 nm. We simulated the compressive sensing imaging procedure using different numbers of random illumination patterns M, from 5 to 200. The quality of reconstruction was estimated using the cross-correlation between the original ground-truth sample and the reconstructed image. The cross-correlation coefficients r as a function of normalized sparsity (K/N²) and normalized number of measurements (M/N²) are presented in Fig. 2(a) for the L1_LS algorithm and in Fig. 2(b) for the GPSR algorithm. The dashed red lines in Fig. 2 represent the number of measurements predicted by compressive sensing theory for random matrices for a good image reconstruction.
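
The quality metric itself is a one-liner; a minimal sketch using base MATLAB's corrcoef:

% Reconstruction quality: cross-correlation coefficient r between the
% ground-truth sample x and the reconstruction x_hat (vectors of length N^2).
c = corrcoef(x, x_hat);
r = c(1, 2);            % r >= 0.8 is taken below as a "good" reconstruction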

Fig. 2. The quality of acquired images characterized via the cross-correlation coefficients r between the reconstructed and ground-truth images as a function of normalized sparsity (K/N²) and normalized number of measurements (M/N²) for the (a) L1_LS and (b) GPSR algorithms. The dashed red line for eye guidance represents the theoretical limit for good image reconstruction, according to compressive sensing theory for random matrices: M ∝ K·log(N²). The optimal number of measurements and the maximal sparsity level for the L1_LS algorithm are shown by the dotted black and white lines, respectively.

The total number of measurements needed by a BP reconstruction algorithm is known to be linearly proportional to the sparsity level [35,36]: M ∝ K·log(N²). In our case, the length of the signal is N², as each illumination pattern is a two-dimensional N×N matrix. Our simulations nicely coincide with these theoretical predictions in the low-K regime. However, for sparsity above a certain level, we see a saturation effect: additional measurements do not improve the quality of the reconstruction. In the case of the GPSR algorithm, an increase in the number of measurements leads to an even worse quality of reconstruction (lower correlation with the original sample), as can be seen in Fig. 2(b). We estimated the maximum level of sparsity as presented by the dotted white line in Fig. 2(a). A good reconstruction was defined as one that has a cross-correlation coefficient with the original sample of r ≥ 80%. Our results seem to contradict the predictions of compressive sensing theory for random matrices. However, we are considering the super-resolution limit of compressive sensing. As we will show later, the saturation is the result of not all M speckle patterns being independent.

Our main goal is to estimate the ultimate limits of speckle-based compressive imaging, and the noiseless case sets the limit on super-resolution. However, here we also briefly address the role of noise. We simulated the compressive imaging procedure as described before and normalized the measurement vector b to a certain number of “detected” photons, ranging from 100 to 10⁸. To simulate noise, the measurement vector was replaced by random numbers drawn from the Poisson distribution with rate parameter b. As a result, Poisson noise with different signal-to-noise ratios (SNRs) was generated in the detected signal. We consider our sensing matrix to be stable (i.e. noise-free) compared to the signal, because the illumination speckle patterns can potentially be measured much more accurately (with a high number of photons and a long averaging time) than the actual signal.
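
A minimal sketch of this noise model, assuming the stated normalization is to a total photon budget (poissrnd requires MATLAB's Statistics and Machine Learning Toolbox):

% Poisson noise on the measurement vector b at a given photon budget.
n_photons = 1e4;                      % assumed total number of "detected" photons
b_scaled  = b / sum(b) * n_photons;   % normalize b to the photon budget
b_noisy   = poissrnd(b_scaled);       % Poisson draws with rate parameter b_scaled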

The results are presented in Fig. 3. The quality of the images reconstructed by the L1_LS algorithm, characterized via the cross-correlation coefficients r between the reconstructed and ground-truth images as a function of normalized sparsity (K/N²) and normalized number of measurements (M/N²), is plotted for different noise levels: (a) SNR = 32; (b) SNR = 100 and (c) SNR = 1000. In Fig. 3(d), the maximum level of sparsity for which a reconstruction with r > 80% is achievable is presented as a function of the SNR of the signal. We show that noise does not play a significant role for SNR > 1000. The case of infinite SNR is expected to be indistinguishable from the low-noise case [37].

Fig. 3. (a-c) The quality of the reconstruction of the L1_LS algorithm characterized via the cross-correlation coefficients r between the reconstructed and ground-truth images as a function of normalized sparsity (K/N²) and normalized number of measurements (M/N²) for different noise levels: (a) SNR = 32; (b) SNR = 100 and (c) SNR = 1000. (d) The maximum level of sparsity, for which a reconstruction with the fidelity r > 80% is achievable, as a function of SNR in the detected signal.

Please note that with L1_LS we aimed to focus on the noiseless case and on one of the most common algorithms, which is not designed to handle low SNR. The tolerance towards noise can be further improved by using denoising algorithms, e.g. the Basis Pursuit DeNoising (BPDN) algorithm.

We estimated the optimal number of measurements as the number beyond which a further increase does not change the reconstruction quality, as illustrated in Fig. 2(a) by the dotted black line for Moptimal/N² ∼ 0.13. We repeated the simulations presented in Fig. 2 for different pixel sizes from 150 to 500 nm and determined the optimal number of measurements. Our results demonstrate that the saturation level depends on the pixel size. In Fig. 4(a), we plot the optimal number of measurements normalized to the total number of pixels as a function of the resolution improvement factor (RIF), which is defined as the ratio between the diffraction limit of the imaging system (ldl) and the pixel size (l): RIF = ldl/l. The open circles in Fig. 4(a) show the results of our numerical simulations.

Fig. 4. (a) The optimal number of measurements (M) normalized to the total number of pixels in the image (N²) as a function of the resolution improvement factor, RIF (the open magenta circles), obtained from our numerical simulations using the L1_LS algorithm. The black dots represent the number of non-zero singular values, or the rank, of the illumination matrix A normalized to the total number of pixels. The solid red line represents the space-bandwidth product (SBP) normalized to the total number of pixels. The solid blue line represents the mutual coherence of the sensing matrix A. (b) The ultimate spatial resolution limit of the compressive imaging approach as a function of the normalized sparsity of the sample (the filled red circles) obtained from our numerical simulations. The solid red line represents the analytically derived limit of the RIF given by Eq. (8).

The results presented in Fig. 4(a) lead to an interesting conclusion: for better resolution, fewer measurements suffice. We can increase the number of illumination patterns only as long as we are able to get new information with each measurement. However, the total number of spatial modes supported by our system is limited by the numerical aperture. Consequently, the total number of independent speckle patterns is also limited.

In the low-noise case, the information capacity of the imaging system is the main limiting factor. The limited information capacity shows up as the (limited) rank of the measurement matrix. To analyze the optimal number of measurements and confirm that it is determined by the Abbe diffraction limit, we investigated the illumination matrix A. We calculated the maximum number of linearly independent vectors in A by using singular value decomposition. The number of non-zero singular values is equal to the rank of A, which is presented by the black dots in Fig. 4(a) and perfectly coincides with our simulation results for the optimal number of measurements.
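
In MATLAB, this analysis reduces to a few lines on the singular values of A; a minimal sketch:

% Rank of the illumination matrix A from its singular values.
s     = svd(A);
rankA = sum(s > max(size(A)) * eps(max(s)));   % MATLAB's default rank() tolerance
% equivalently: rankA = rank(A);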

To characterize the information capacity of our imaging system, we calculated the space-bandwidth product (SBP) of the sampling function I(x, y), as proposed by Goodman [7]. The total number of significant samples required to represent I(x, y), i.e. the SBP, can be written as:

$$SBP = 4\frac{N^2 l^2}{l_{dl}^2}, \tag{7}$$
where l is the pixel size and ldl is the diffraction limit. The results are presented in Fig. 4(a) by the solid red line and nicely coincide with our numerical simulations and the theoretical analysis of the illumination matrix. As can be seen from Fig. 4(a), the rank of the illumination matrix is correlated with the space-bandwidth product of the sampling function. This can be explained as follows: the rank is equal to the maximum number of linearly independent rows (equivalently, columns) of a matrix, which in our case is the maximum number of independent illumination patterns. On the other hand, the SBP characterizes the total number of significant samples required to represent the illumination [7].
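
As a consistency check with the parameters of Section 3.1 (N² = 32×32 pixels, l = 200 nm, ldl = 1.2 μm), Eq. (7) gives SBP = 4 × 1024 × (0.2/1.2)² ≈ 114, i.e. SBP/N² ≈ 0.11, close to the saturation level Moptimal/N² ∼ 0.13 observed in Fig. 2(a).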

We can conclude that compressive optical imaging has an optimal number of illumination patterns defined by the Abbe diffraction limit or, equivalently, by the total number of independent illumination patterns. This optimal number of measurements also limits the level of sparsity of the sample that may be reconstructed. Going further beyond the diffraction limit (a larger resolution improvement factor) requires a sparser sample and a lower optimal number of measurements.

Normally, the efficiency of the compressive sensing approach can be improved by optimizing the mutual coherence of the sensing matrix. However, in this work we addressed a particular experimental setup that uses randomly generated speckle patterns (in a scattering medium or a multimode fiber) as illumination. The solid blue line in Fig. 4(a) represents the mutual coherence of our sensing matrix A (with N² = 144×144) as a function of RIF. As expected, a lower rank corresponds to a higher value of the mutual coherence.
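
The mutual coherence follows directly from the column-normalized Gram matrix of A; a minimal sketch (memory-hungry for large N², since the Gram matrix is N² × N²):

% Mutual coherence: largest absolute inner product between distinct
% normalized columns of the sensing matrix A.
An = A ./ vecnorm(A);          % normalize each column (R2017b+ for vecnorm)
G  = abs(An' * An);            % column-correlation (Gram) matrix
G(1:size(G,1)+1:end) = 0;      % zero the diagonal entries
mu = max(G(:));                % mutual coherence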

3.2 Ultimate resolution limit

Here we derive the ultimate resolution limit of speckle-based super-resolution compressive imaging. As we showed in the previous subsection, the number of measurements that provide useful information for the imaging procedure is bounded from above by the maximum rank of the measurement matrix. By increasing the ratio between the size of the diffraction-limited spot (ldl) and the size of a resolved pixel (l), we decrease the rank of the illumination matrix as well as the space-bandwidth product, and therefore the information capacity. This means that for higher resolution the (optimal) number of measurements decreases.

It is known that for the reconstruction of a K-sparse signal x, only M = 2K random projections are required when the signal and the measurements are noise-free [38]. To estimate the best possible resolution, we take the SBP as the total number of significant (independent) measurements M and set it to its lower limit M = 2K, as suggested by [38]. Substituting ldl/l = RIF in Eq. (7) and solving 2K = 4N²/RIF² for RIF gives the limit for resolution improvement in the noiseless case:

$$RIF \le \sqrt{\frac{2N^2}{K}}. \tag{8}$$
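For example, at the normalized sparsity K/N² = 6% used in Section 3.3 this gives RIF ≤ √(2/0.06) ≈ 5.8, and at K/N² = 2% it gives RIF ≤ 10, matching the values RIF ≈ 6 and RIF ≈ 10 quoted there.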

We simulated the super-resolution compressive imaging procedure for N² = 32 × 32 pixels, as described above, for different pixel sizes and estimated the maximum sparsity level for a given RIF, as presented by the dotted white line in Fig. 2(a). In Fig. 4(b), the RIF extracted from our data is presented as a function of sparsity by the red dots. It clearly shows the expected behavior: the sparser the original sample, the better the resolution we can get. The red line in Fig. 4(b) is plotted using Eq. (8) without any free parameters and represents the result of our theoretical analysis of the resolution limit. Our numerical simulations are in very good agreement with this theoretical prediction.

We can conclude that Eq. (8) provides the ultimate spatial resolution limit for the noiseless compressive imaging approach: the resolution improvement is inversely proportional to the square root of the relative fraction of nonzero basis functions. A similar result was described in the work of Gazit et al. [13], where they state that the resolution limit is inversely proportional to the relative fraction of nonzero basis functions, without the square root. It seems that they were considering 1D signal reconstruction, which would explain the difference. We focus on the 2D case because it is more relevant for microscopy applications.

3.3 Optimal discretization

Any imaging procedure makes a discrete representation of an original sample. The resulting image can only be observed at a finite number of points due to the inevitable spatial discretization during analog-to-digital conversion. Following the Nyquist-Shannon sampling theorem, we normally choose the discretization step to be half the diffraction limit. Finer discretization is not considered harmful and typically results in a smooth image, although it does not really provide better resolution and may require additional imaging time. This conclusion is valid for almost any imaging procedure, including super-resolution approaches, and one could wrongly extrapolate this concept to compressive imaging methods. In this section, we investigate the optimal spatial discretization parameters for sparsity-based super-resolution imaging.

We performed simulations using the same sample represented by different numbers of pixels. The original sample is shown in Fig. 5(a) and consists of N² = 16 × 16 pixels of 300 nm each with K = 15 randomly distributed non-zero values. The spatial cutoff frequency of the speckle patterns was set to 2NA/λ, where NA = 0.22 and λ = 0.532 μm. We divided each pixel of the original image into d × d subpixels, as presented in the first column of Fig. 5(b). We repeated the simulations for each d from 1 (original sample, the top row) to 5 (high discretization, the bottom row). The first column in Fig. 5(b) illustrates the discretization. Each row in Fig. 5(b) shows acquired images consisting of 16d × 16d pixels with a pixel size l = 300/d nm and K = 15d² non-zero values grouped into d × d pixels. The normalized sparsity level was kept constant at K/N² = 6%.
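
The subdivision itself amounts to Kronecker upsampling of the sample grid; a minimal sketch, where sample stands for the 16 × 16 original of Fig. 5(a):

% Discretization study: split each pixel of the original sample into
% d x d subpixels, keeping the normalized sparsity K/N^2 constant.
d        = 3;                        % subdivision factor (1..5)
sample_d = kron(sample, ones(d));    % 16d x 16d grid, K = 15*d^2 nonzeros
l_d      = 300e-9 / d;               % new pixel size (m)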

Fig. 5. (a) The random sample with sparsity K = 15. (b) The imaging simulation results. The first column illustrates the discretization: each pixel of the original sample is divided into d × d subpixels (d varies from 1 to 5). Images in row d consist of 16d × 16d pixels with a pixel size of 300/d nm and K = 15d² non-zero values grouped into d × d pixels. The images acquired by standard diffraction-limited microscopy are presented in the second column and become smoother with increasing discretization. The third and fourth columns show the results of compressive imaging by the L1_LS and GPSR algorithms, respectively. (c) The random sample with sparsity K = 5. (d) The imaging simulation results. The images acquired by standard diffraction-limited microscopy are presented in the first column. The second and third columns show the results of compressive imaging by the L1_LS and GPSR algorithms, respectively.

The results obtained with the conventional diffraction-limited approach are presented in the second column of Fig. 5(b). The images become smoother with increasing discretization. However, the limited optical resolution does not allow all the features to be resolved, independently of the pixel size. The third and fourth columns in Fig. 5(b) show the results of compressive imaging with the optimized number of measurements by the L1_LS and GPSR algorithms, respectively. Both algorithms perfectly reconstruct the original sample, including all the subdiffractional details, if the pixel size is equal to the feature size [the first row in Fig. 5(b)]. However, in contrast to conventional microscopy, oversampling decreases the image quality.

This interesting, counterintuitive fact can be explained as follows. According to Eq. (8), the resolution improvement factor of our system is RIF ≈ 6 and does not change for different discretizations and grid sizes. This means that our resolution limit is approximately 200 nm. Consequently, when the pixel size l = 300 nm and is equal to the feature size [the first row of Fig. 5(b)], we see a perfect super-resolution image reconstruction. With a finer grid, the pixel size becomes smaller. The second row represents the same image but with a pixel size of 150 nm, which cannot be reconstructed perfectly according to Eq. (8). We can see image degradation; however, both algorithms still provide reasonable results (correlation coefficient r = 0.93) because the pixel size is still approximately equal to the resolution limit. A further decrease of the pixel size to l ≤ 100 nm leads to noticeably worse reconstruction, because the pixel size is then significantly smaller than the resolution limit (200 nm).

It is important to highlight that different sparsity-based reconstruction algorithms produce different results on a finer discretization grid. The GPSR algorithm is more robust to the finer grid, providing a smooth image, as can be seen in the last column of Fig. 5(b). This is due to the substitution of the vector under the l1 norm by a linear function, which results in a smoother solution [33]. In contrast, the Basis Pursuit L1_LS algorithm results in a very pixelated image. The algorithm tries to resolve each pixel separately and fails when the pixel size becomes significantly smaller than the diffraction limit.

In line with our observations presented above, the reconstruction efficiency highly depends on the sparsity level. For example, for a very low sparsity, K/N² = 2%, we can get a good image even for a high discretization level, as presented in Fig. 5(d). We used the sample presented in Fig. 5(c) and repeated the simulations for each d from 1 (original sample, the top row) to 5 (high discretization, the bottom row). According to Eq. (8), for this system RIF ≈ 10, leading to a resolution limit of about 120 nm. Consequently, when the pixel size is larger than the resolution limit, l = 300 nm [first row of Fig. 5(d)] and l = 150 nm [second row of Fig. 5(d)], we see a perfect reconstruction. The third row represents the image with l = 100 nm, which is slightly smaller than the resolution limit, and we see some degradation. With a finer grid, the pixel size becomes smaller than the resolution limit and the L1_LS algorithm fails to reconstruct a perfect image. However, the smoothing properties of the GPSR algorithm provide very good reconstruction even for a high discretization in the case of low sparsity.

To summarize, we have shown that speckle-based compressive imaging imposes restrictions on the reconstruction procedure as one approaches the super-resolution regime. The maximum achievable resolution improvement below the diffraction limit is determined by the sparsity and discretization level. We showed that compressive imaging is very sensitive to the discretization level and that, in contrast to conventional microscopy, image quality may significantly decrease at a high discretization level. Optimal reconstruction requires the pixel size to be approximately equal to the feature size.

4. Conclusions

The Abbe and Nyquist limits influence almost any imaging procedure, requiring a pixel size approximately half the diffraction (or super-resolution) limit. Here we addressed the question of how the Nyquist and the diffraction limits are connected within the speckle-based super-resolution compressive imaging approach. We simulated super-resolution imaging via sparsity constraints, speckle illumination and single-pixel detection and used two reconstruction algorithms for performance comparison. The fundamental limits of compressive super-resolution imaging have been demonstrated by both numerical simulations and theoretical analysis. We derived the ultimate spatial resolution limit as a function of sparsity and the optimal number of measurements as a function of resolution improvement.

Usually, oversampling makes images look nicer, while it does not provide better resolution and may reduce imaging speed. Here we demonstrated that this concept cannot be directly extended to super-resolution optical imaging via the sparsity constraint. We demonstrated the connection between sparsity and discretization level, which depends on the resolution improvement below the diffraction limit. Our simulations show that the optimal pixel size should be equal to the feature size we aim to resolve, and that oversampling may cause a noticeable degradation of image quality.

Funding

Nederlandse Organisatie voor Wetenschappelijk Onderzoek (15872, WISE).

Acknowledgments

We thank Arie den Boef and Zhouping Lyu for fruitful discussions.

Disclosures

The authors declare the following competing interests. A patent application, i.e., No. 2021837 (Imaging through multimode fibers), based on these results has been filed by VU University, with L.V.A. and J.F.d.B. as inventors.

References

1. E. Abbe, “Beiträge zur Theorie des Mikroskops und der mikroskopischen Wahrnehmung,” Arch. Für Mikrosk. Anat. 9(1), 413–468 (1873).

2. S. J. Sahl, S. W. Hell, and S. Jakobs, “Fluorescence nanoscopy in cell biology,” Nat. Rev. Mol. Cell Biol. 18(11), 685–701 (2017).

3. E. Betzig, J. K. Trautman, T. D. Harris, J. S. Weiner, and R. L. Kostelak, “Breaking the Diffraction Barrier: Optical Microscopy on a Nanometric Scale,” Science 251(5000), 1468–1470 (1991).

4. E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging Intracellular Fluorescent Proteins at Nanometer Resolution,” Science 313(5793), 1642–1645 (2006).

5. S. W. Hell, “Far-Field Optical Nanoscopy,” Science 316(5828), 1153–1158 (2007).

6. M. Maglione and S. J. Sigrist, “Seeing the forest tree by tree: super-resolution light microscopy meets the neurosciences,” Nat. Neurosci. 16(7), 790–797 (2013).

7. J. W. Goodman, Introduction to Fourier Optics (Roberts and Company Publishers, 2005).

8. A. Stefanoiu, G. Scrofani, G. Saavedra, M. Martínez-Corral, and T. Lasser, “What about computational super-resolution in fluorescence Fourier light field microscopy?” Opt. Express 28(11), 16554–16568 (2020).

9. E. J. Candes and M. B. Wakin, “An Introduction To Compressive Sampling,” IEEE Signal Process. Mag. 25(2), 21–30 (2008).

10. M. A. Davenport, M. F. Duarte, Y. C. Eldar, and G. Kutyniok, “Introduction to compressed sensing,” in Compressed Sensing: Theory and Applications (Cambridge University Press, 2012).

11. D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006).

12. D. L. Donoho, “Superresolution via Sparsity Constraints,” SIAM J. Math. Anal. 23(5), 1309–1331 (1992).

13. S. Gazit, A. Szameit, Y. C. Eldar, and M. Segev, “Super-resolution and reconstruction of sparse sub-wavelength images,” Opt. Express 17(26), 23920–23946 (2009).

14. A. Szameit, Y. Shechtman, E. Osherovich, E. Bullkich, P. Sidorenko, H. Dana, S. Steiner, E. B. Kley, S. Gazit, T. Cohen-Hyams, S. Shoham, M. Zibulevsky, I. Yavneh, Y. C. Eldar, O. Cohen, and M. Segev, “Sparsity-based single-shot subwavelength coherent diffractive imaging,” Nat. Mater. 11(5), 455–459 (2012).

15. M. Bancerek, K. M. Czajkowski, and R. Kotyński, “Far-field signature of sub-wavelength microscopic objects,” Opt. Express 28(24), 36206–36218 (2020).

16. Y. Shechtman, S. Gazit, A. Szameit, Y. C. Eldar, and M. Segev, “Super-resolution and reconstruction of sparse images carried by incoherent light,” Opt. Lett. 35(8), 1148–1150 (2010).

17. Y. Shechtman, Y. C. Eldar, A. Szameit, and M. Segev, “Sparsity based sub-wavelength imaging with partially incoherent light via quadratic compressed sensing,” Opt. Express 19(16), 14807–14822 (2011).

18. S. Tsiper, O. Dicker, I. Kaizerman, Z. Zohar, M. Segev, and Y. C. Eldar, “Sparsity-Based Super Resolution for SEM Images,” Nano Lett. 17(9), 5437–5445 (2017).

19. S. D. Babacan, Z. Wang, M. Do, and G. Popescu, “Cell imaging beyond the diffraction limit using sparse deconvolution spatial light interference microscopy,” Biomed. Opt. Express 2(7), 1815–1827 (2011).

20. L. V. Amitonova and J. F. de Boer, “Compressive imaging through a multimode fiber,” Opt. Lett. 43(21), 5427 (2018).

21. L. V. Amitonova and J. F. de Boer, “Endo-microscopy beyond the Abbe and Nyquist limits,” Light: Sci. Appl. 9(1), 81 (2020).

22. M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25(2), 83–91 (2008).

23. O. Katz, Y. Bromberg, and Y. Silberberg, “Compressive ghost imaging,” Appl. Phys. Lett. 95(13), 131110 (2009).

24. W. Gong and S. Han, “High-resolution far-field ghost imaging via sparsity constraint,” Sci. Rep. 5(1), 9280 (2015).

25. V. Studer, J. Bobin, M. Chahid, H. S. Mousavi, E. Candes, and M. Dahan, “Compressive fluorescence microscopy for biological and hyperspectral imaging,” Proc. Natl. Acad. Sci. 109(26), E1679–E1687 (2012).

26. M. Pascucci, S. Ganesan, A. Tripathi, O. Katz, V. Emiliani, and M. Guillon, “Compressive three-dimensional super-resolution microscopy with speckle-saturated fluorescence excitation,” Nat. Commun. 10(1), 1327 (2019).

27. B. Lochocki, A. Gambín, S. Manzanera, E. Irles, E. Tajahuerce, J. Lancis, and P. Artal, “Single pixel camera ophthalmoscope,” Optica 3(10), 1056–1059 (2016).

28. L. Song, Z. Zhou, X. Wang, X. Zhao, and D. S. Elson, “Simulation of speckle patterns with pre-defined correlation distributions,” Biomed. Opt. Express 7(3), 798 (2016).

29. L.-H. Yeh, L. Tian, and L. Waller, “Structured illumination microscopy with unknown patterns and a statistical prior,” Biomed. Opt. Express 8(2), 695–711 (2017).

30. Y. Shechtman, A. Beck, and Y. C. Eldar, “GESPAR: Efficient Phase Retrieval of Sparse Signals,” IEEE Trans. Signal Process. 62(4), 928–938 (2014).

31. S.-J. Kim, K. Koh, M. Lustig, S. Boyd, and D. Gorinevsky, “An Interior-Point Method for Large-Scale l1-Regularized Least Squares,” IEEE J. Sel. Top. Signal Process. 1(4), 606–617 (2007).

32. M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright, “Gradient Projection for Sparse Reconstruction: Application to Compressed Sensing and Other Inverse Problems,” IEEE J. Sel. Top. Signal Process. 1(4), 586–597 (2007).

33. S. Becker, J. Bobin, and E. J. Candès, “NESTA: A Fast and Accurate First-Order Method for Sparse Recovery,” SIAM J. Imaging Sci. 4(1), 1–39 (2011).

34. M. Rani, S. B. Dhok, and R. B. Deshmukh, “A Systematic Review of Compressive Sensing: Concepts, Implementations and Applications,” IEEE Access 6, 4875–4894 (2018).

35. S. Chen, D. Donoho, and M. Saunders, “Atomic Decomposition by Basis Pursuit,” SIAM Rev. 43(1), 129–159 (2001).

36. E. C. Marques, N. Maciel, L. Naviner, H. Cai, and J. Yang, “A Review of Sparse Recovery Algorithms,” IEEE Access 7, 1300–1322 (2019).

37. I. J. Cox and C. J. R. Sheppard, “Information capacity and resolution in an optical system,” J. Opt. Soc. Am. A 3(8), 1152–1158 (1986).

38. W. Dai and O. Milenkovic, “Subspace Pursuit for Compressive Sensing Signal Reconstruction,” IEEE Trans. Inf. Theory 55(5), 2230–2249 (2009).
