Single-shot lensless imaging via simultaneous multi-angle LED illumination

Open Access

Abstract

Lensless imaging is a technique that records diffraction patterns without using lenses and recovers the complex field of the object via phase retrieval. A robust lensless phase retrieval process usually requires multiple measurements with defocus variation, transverse translation or angle-varied illumination. However, making such diverse measurements is time-consuming and limits the application of lensless setups to dynamic samples. In this paper, we propose a single-shot lensless imaging scheme via simultaneous multi-angle LED illumination. Diffraction patterns under multi-angle lights are recorded by different areas of the sensor within a single shot. An optimization algorithm is applied to utilize the single-shot measurement and retrieve the aliasing information for reconstruction. We first use numerical simulations to evaluate the proposed scheme quantitatively by comparison with the multi-acquisition case. Then a proof-of-concept lensless setup is built to validate the method by imaging a resolution chart and biological samples, achieving ∼4.92 μm half-pitch resolution and a ∼1.20 × 1.20 mm² field of view (FOV). We also discuss different design tradeoffs and present a 4-frame acquisition scheme (with ∼3.48 μm half-pitch resolution and ∼2.35 × 2.55 mm² FOV) to show the flexibility of performance enhancement by capturing more measurements.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Benefiting from the rapid development of high-throughput imaging sensors and large-scale parallel computing, lensless imaging has become an alternative choice for microscopy [1–4] and has been used in many applications [5, 6]. Compared with conventional lens-based imaging systems, lensless imaging directly samples the object’s diffraction patterns to reconstruct its complex information without the use of any lenses. Based on a simple imaging geometry, lensless imaging has several advantages such as simplicity, light weight, compactness, and cost-effectiveness. In addition, it decouples the field of view (FOV) and spatial resolution of the microscope, which enables the space-bandwidth product (SBP) to be scaled up easily. Pixel super-resolution methods [7–9] make lensless imaging even more applicable, with resolutions of several hundred nanometers. Lensless imaging techniques typically utilize multiple diverse measurements to make the phase retrieval process more robust, such as moving the sensor to multiple heights [10], shifting the pinhole aperture to different positions [11, 12], and sequentially illuminating the object with angle-varied illumination [3, 8, 13]. However, the acquisition of multiple measurements is time-consuming and restricts lensless imaging to static samples such as biological tissue slices or physiological slices. Dynamic observation remains a problem.

To realize fast lensless imaging, one way is to improve the acquisition speed. For example, by digitally selecting a smaller FOV within the active area (also known as the region of interest, ROI) of a sensor, one can increase the frames per second (fps). An ultra-high-speed ptychographic system [14] has also been presented, using a digital micromirror device (DMD) for fast angular illumination modulation. But a scanning process is still needed in this way. Another way is the single-shot scheme. Sanz et al. [15] and Allier et al. [16] realize single-shot lensless imaging by using wavelength multiplexing. However, a resolution loss is introduced due to the Bayer filter of the RGB camera in use. Besides, the observed sample is assumed to maintain similar transmission properties across the large bandwidth of multi-wavelength illumination. As discussed in [9,17,18], this assumption holds for a transparent or semitransparent phase sample. But for a chromatic or stained sample, the spectral responses to multi-wavelength lights are different. Compressive sensing-based methods have been introduced recently, including single-shot phase imaging with randomized light (SPIRaL) [19] and a lensless single-exposure 3D imaging technique using a diffuser [20]. Both assume a sparse sample and differ from conventional lensless imaging in performance, setup and mathematical model. Deep learning schemes have become popular in snapshot lensless imaging due to their fast reconstruction speed and good performance [21–23]. But these methods need large amounts of training data and can generalize poorly. Single-shot ptychography methods [24,25] multiplex different areas of a sensor to respectively record the object’s Fourier spectrum under multi-angle lights. Due to the high dynamic range of the Fourier spectrum and the limited dynamic range of current imaging sensors, the recorded single-shot measurement has a low signal-to-noise ratio (SNR). The reconstruction quality needs to be improved for practical applications [24]. In the paper by Sidorenko et al. [25], several optical schemes of single-shot ptychography are designed, including one using an LED array as the illumination. But in all their mentioned setups, lenses are necessary, which is quite different from traditional lensless imaging. Therefore, there is still much room for the development of a single-shot lensless imaging method.

In this paper, we present an alternative way to realize a cost-efficient and simple system for single-shot lensless imaging. We use a setup with partially coherent illumination and unit-magnification imaging geometry [5]. Rather than utilizing a cross grating [24] or a Dammann grating [26] for diffractive beam splitting, we use a low-cost and programmable LED matrix for simultaneous multi-angle illumination. A pinhole is added behind the LED matrix to limit the size of each exiting light beam. By properly placing a weakly scattering sample and a camera sensor, we obtain multiple diffraction patterns on different areas of the sensor with small overlap. Different from single-shot ptychography methods [24, 25], our method records a diffraction plane between the object plane and the Fourier plane with higher SNR, rather than directly measuring the Fourier plane. An optimization algorithm is applied to demultiplex the aliasing information among diffraction patterns in the single-shot measurement for higher data throughput, rather than directly separating several distinct areas [24]. Simulations are provided to evaluate our single-shot scheme and algorithm quantitatively. A proof-of-concept setup is built, with experiments on a resolution chart and biological samples achieving ∼4.92 μm half-pitch resolution and a ∼1.20 × 1.20 mm² FOV. To show the flexibility of our system, we further present a 4-frame acquisition scheme for higher resolution (∼3.48 μm half-pitch) and larger FOV (∼2.35 × 2.55 mm²) of thin samples by capturing a few more measurements.

2. Methods

The schematic diagram of our method is shown in Fig. 1(a); the system consists of a quasi-monochromatic and programmable LED matrix, a pinhole, a thin object, and a sensor. The simultaneous multi-angle illumination is provided by the LED matrix. All the light beams pass through the pinhole and free-propagate to the object plane. After passing through a weakly scattering sample, the exiting light beams preserve their original directions. Finally, after propagating over a relatively long, properly chosen distance from the object plane, the diffraction patterns arrive at different positions of the sensor with small overlap. The crosstalk (some high-frequency components) among the diffraction patterns can be retrieved by our optimization algorithm with prior knowledge of the illumination angles.

Fig. 1 The schematic of the proposed single-shot lensless imaging system: (a) the optical setup and (b) the corresponding forward model expression. The mathematical representation of the single-shot measurement Idetected is presented. The coordinate r in equations is omitted for simplicity.

2.1. Forward model

We first present the relationship between the object O(r) and the single-shot measurement Idetected(r) within a forward model, as shown in Fig. 1(b). The incident light of the i-th LED with narrow-band wavelength is modeled as an oblique plane wave [27], written as $P_i(\mathbf{r}) = \exp(j \, \mathbf{k}_i \odot \mathbf{r})$, where j is the imaginary unit and ⊙ represents the inner product of vectors. $\mathbf{k}_i = (k_{xi}, k_{yi}) = \frac{2\pi}{\lambda} \left( \frac{x_c - x_i}{s}, \frac{y_c - y_i}{s} \right)$ is the wave vector, where $s = \sqrt{(x_c - x_i)^2 + (y_c - y_i)^2 + d_0^2}$ is the distance between the i-th LED and the center of the pinhole, λ is the central wavelength, (xc, yc) is the center of the pinhole, (xi, yi) is the position of the i-th LED, and d0 is the distance between the pinhole plane and the LED matrix plane.

The distribution function of the pinhole is denoted PM(r). The resulting complex field after the pinhole is U1(r) = Pi(r) · PM(r), where · denotes element-wise multiplication throughout this paper. The complex field U1(r) then propagates to the object plane and becomes U2(r) = PSF1(r) * U1(r), where PSF1(r) is the point spread function (PSF) of Fresnel propagation over a distance d1 and * is the convolution operator. The convolution of Fresnel propagation is implemented in Fourier space using the angular spectrum theory as follows [28–30]:

$$\bar{U}_2(\mathbf{k}) = \exp\left(j \sqrt{k_0^2 - k_x^2 - k_y^2} \, d_1\right) \bar{U}_1(\mathbf{k}), \tag{1}$$
where k = (kx, ky) represents the coordinates in Fourier space, Ū1(k) and Ū2(k) are the Fourier transforms of U1(r) and U2(r), and k0 = 2π/λ is the wave number. PSF1(r) equals the inverse Fourier transform of $\exp(j \sqrt{k_0^2 - k_x^2 - k_y^2} \, d_1)$, and U2(r) is obtained by the inverse Fourier transform of Ū2(k).
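For concreteness, Eq. (1) can be implemented in a few lines. The authors used MATLAB; the NumPy sketch below is our own illustrative version, including the function name and the suppression of evanescent components:

```python
import numpy as np

def angular_spectrum_propagate(u, wavelength, dx, d):
    """Propagate complex field u over distance d via the transfer
    function of Eq. (1). u: 2D array with pixel pitch dx; lengths in m."""
    n, m = u.shape
    k0 = 2 * np.pi / wavelength
    kx = 2 * np.pi * np.fft.fftfreq(m, d=dx)   # angular spatial frequencies
    ky = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    kz2 = k0**2 - KX**2 - KY**2
    # exp(j*sqrt(k0^2 - kx^2 - ky^2) * d); evanescent components dropped
    H = np.where(kz2 > 0, np.exp(1j * np.sqrt(np.abs(kz2)) * d), 0)
    return np.fft.ifft2(np.fft.fft2(u) * H)
```

A negative d in this sketch back-propagates the field, which is convenient for the inverse steps of the reconstruction below.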

After U2(r) passes through the object O(r), the exiting light field is denoted U3(r) = U2(r) · O(r). The light field U3(r) then propagates to the detection plane over a distance d2, and the final complex field on this plane under the i-th LED light is U4(r) = PSF2(r) * U3(r). This convolution can also be implemented in Fourier space as in Eq. (1). The intensity (or diffraction pattern) collected by the sensor is thus Ii(r) = |U4(r)|², where the operator | · | indicates the modulus. Specifically,

$$I_i(\mathbf{r}) = |U_4(\mathbf{r})|^2 = \left| \mathrm{PSF}_2(\mathbf{r}) * \left\{ \mathrm{PSF}_1(\mathbf{r}) * \left[ P_i(\mathbf{r}) \cdot P_M(\mathbf{r}) \right] \cdot O(\mathbf{r}) \right\} \right|^2. \tag{2}$$

In our method, the multi-angle lights illuminate the sample simultaneously and are mutually incoherent during propagation. Therefore, the resulting single-shot measurement on the detection plane, denoted Idetected(r), is the sum of all diffraction patterns under the multi-angle lights and can be written as

$$I_{\mathrm{detected}}(\mathbf{r}) = \sum_{i \in L_s} I_i(\mathbf{r}) = \sum_{i \in L_s} \left| \mathrm{PSF}_2(\mathbf{r}) * \left\{ \mathrm{PSF}_1(\mathbf{r}) * \left[ P_i(\mathbf{r}) \cdot P_M(\mathbf{r}) \right] \cdot O(\mathbf{r}) \right\} \right|^2, \tag{3}$$
where Ls refers to the set of N multi-angle incident lights.
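Putting the pieces together, the forward model of Eq. (3) is one propagation–multiplication chain per LED followed by an incoherent sum of intensities. A minimal sketch reusing angular_spectrum_propagate from above (variable names and sampling choices are our own assumptions):

```python
def forward_model(obj, pinhole, led_xy, pinhole_xy, wavelength, dx, d0, d1, d2):
    """Simulate the single-shot measurement of Eq. (3).
    obj, pinhole: 2D complex arrays on a common grid with pitch dx;
    led_xy: list of LED positions (x_i, y_i); pinhole_xy: (x_c, y_c)."""
    n, m = obj.shape
    x = (np.arange(m) - m // 2) * dx
    y = (np.arange(n) - n // 2) * dx
    X, Y = np.meshgrid(x, y)
    xc, yc = pinhole_xy
    I = np.zeros((n, m))
    for xi, yi in led_xy:
        s = np.sqrt((xc - xi)**2 + (yc - yi)**2 + d0**2)
        kxi = 2 * np.pi / wavelength * (xc - xi) / s   # wave vector k_i
        kyi = 2 * np.pi / wavelength * (yc - yi) / s
        Pi = np.exp(1j * (kxi * X + kyi * Y))          # oblique plane wave P_i
        u2 = angular_spectrum_propagate(Pi * pinhole, wavelength, dx, d1)
        u4 = angular_spectrum_propagate(u2 * obj, wavelength, dx, d2)
        I += np.abs(u4)**2                             # incoherent sum, Eq. (3)
    return I
```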

2.2. Solution to inverse problem

The inverse problem of our scheme is to solve for the object’s distribution function O(r) from the single-shot measurement Idetected(r) in Eq. (3) above. The objective function is

$$\min_{O(\mathbf{r}),\, P_M(\mathbf{r})} \left| I_{\mathrm{detected}}(\mathbf{r}) - \left( \sum_{i \in L_s} \left| \mathrm{PSF}_2(\mathbf{r}) * \left\{ \mathrm{PSF}_1(\mathbf{r}) * \left[ P_i(\mathbf{r}) \cdot P_M(\mathbf{r}) \right] \cdot O(\mathbf{r}) \right\} \right|^2 + b \right) \right|^2, \quad \mathrm{s.t.} \; I_{\mathrm{detected}}(\mathbf{r}) - b \geq 0, \tag{4}$$
where b describes the background noise of the measurement. The resolution of our method is limited by the pixel size (that is, the sampling ability) of the camera sensor in use.

To solve the problem, an intuitive idea is the direct split method [24], which directly splits the diffraction patterns within the single-shot measurement using distinct circles. A single-shot acquisition without sample can be used to label the centers of these circles. If large intervals are kept among all the diffraction patterns, we can assume that each split image only contains information corresponding to a single incident light, and a good recovery can then be obtained. However, real intervals are limited for several reasons (detailed in Section 5.1), and the diffraction patterns exhibit crosstalk in practice. Moreover, inaccurately split borders of the diffraction patterns severely lower the recovery quality. An exhaustive search for a proper radius ratio (the split-circle radius divided by the pinhole radius) is needed. As demonstrated by the following simulations using parameters of the real setup, even the result recovered with the optimum radius ratio is of low quality, with obvious periodic artifacts.

2.2.1. Optimization algorithm

In order to partly eliminate the effect of crosstalk among diffraction patterns and improve the reconstruction quality, we use an optimization algorithm to make better use of the captured information. Our method is inspired by the idea of multiplexed-illumination Fourier ptychographic microscopy (FPM) proposed by L. Tian et al. [31]. In that paper, they introduce a weight updating step in each iteration to separate the multiplexed Fourier spectral information within a single measurement. In this way, they realize fast FPM acquisition and obtain high-quality reconstructions. The effectiveness of this idea is also demonstrated by the multi-state FPM [32] and the high-speed in vitro FPM [33]. We modify and combine the algorithms of [12,13,31] for our single-shot lensless imaging scheme by introducing a similar weight updating step to separate the multiplexed angular information.

One iteration of our reconstruction algorithm includes N inner iterations (N equals the number of multi-angle lights). The flow chart is shown in Fig. 2. We use the minimum value of the captured image without sample to estimate the background value b and remove it in advance to produce the corrected single-shot measurement $\hat{I}_{\mathrm{detected}}(\mathbf{r})$. An all-ones matrix is used as the initial input of the object, denoted $O^{(0)}(\mathbf{r})$. The initial input of the pinhole, marked as $P_M^{(0)}(\mathbf{r})$, is a disc function. The estimate of the single-shot measurement in the k-th inner iteration, marked as $\hat{I}_{\mathrm{recovered}}^{(k)}(\mathbf{r})$, is calculated from the current object function $O^{(k)}(\mathbf{r})$ and pinhole function $P_M^{(k)}(\mathbf{r})$. Next, a forward propagation from the k-th angular light to the detection plane is performed. $\hat{I}_{\mathrm{recovered}}^{(k)}(\mathbf{r})$ is then used for the weight updating of $\hat{U}_4^{(k)}(\mathbf{r})$, the current field distribution on the detection plane, to obtain $\hat{U}_{4,\mathrm{update}}^{(k)}(\mathbf{r})$. Finally, alternate projection steps are applied to both the object $O^{(k+1)}(\mathbf{r})$ and the pinhole $P_M^{(k+1)}(\mathbf{r})$. When all the illumination angles (k = 1 ∼ N) have been traversed once, one iteration is finished. 5 to 10 iterations are commonly needed for recovery in our scheme. More details can be found in Fig. 2.
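Since the exact update formulas live in Fig. 2, the following is only a hedged sketch of one iteration in the ePIE style of [12,13,31]: the weight-update rule (rescaling the k-th detected-plane field by the measured-to-estimated intensity ratio) and the step sizes alpha and beta are standard choices of ours, not necessarily the authors' exact ones:

```python
def one_iteration(obj, pinhole, I_meas, led_xy, pinhole_xy,
                  wavelength, dx, d0, d1, d2, alpha=1.0, beta=1.0):
    """One outer iteration = N inner iterations (one per LED).
    I_meas is the background-corrected single-shot measurement."""
    n, m = obj.shape
    x = (np.arange(m) - m // 2) * dx
    y = (np.arange(n) - n // 2) * dx
    X, Y = np.meshgrid(x, y)
    xc, yc = pinhole_xy
    eps = 1e-9
    for xi, yi in led_xy:
        # Current estimate of the multiplexed measurement (the authors
        # refresh this sum only every 5 angles to save time, see Sec. 4.1)
        I_rec = forward_model(obj, pinhole, led_xy, pinhole_xy,
                              wavelength, dx, d0, d1, d2)
        # Forward pass for the k-th LED alone
        s = np.sqrt((xc - xi)**2 + (yc - yi)**2 + d0**2)
        Pi = np.exp(1j * 2 * np.pi / wavelength
                    * ((xc - xi) * X + (yc - yi) * Y) / s)
        u1 = Pi * pinhole
        u2 = angular_spectrum_propagate(u1, wavelength, dx, d1)
        u3 = u2 * obj
        u4 = angular_spectrum_propagate(u3, wavelength, dx, d2)
        # Weight update: rescale the detected-plane field so that the
        # summed intensity matches the measurement (demultiplexing step)
        u4_new = u4 * np.sqrt(I_meas / (I_rec + eps))
        # Back-propagate and apply alternate-projection (ePIE-style) updates
        u3_new = angular_spectrum_propagate(u4_new, wavelength, dx, -d2)
        obj = obj + alpha * np.conj(u2) * (u3_new - u3) / (np.abs(u2)**2).max()
        u2_new = u2 + np.conj(obj) * (u3_new - u3) / (np.abs(obj)**2).max()
        u1_new = angular_spectrum_propagate(u2_new, wavelength, dx, -d1)
        pinhole = pinhole + beta * np.conj(Pi) * (u1_new - u1)  # |P_i| = 1
    return obj, pinhole
```

Initialized with an all-ones object and a disc-shaped pinhole, 5 to 10 such iterations typically suffice, per the text above.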

Fig. 2 The flow chart shows one iteration of the reconstruction algorithm, which includes N inner iterations. In particular, $\mathrm{PSF} = \mathrm{Fresnel}(d) = \mathcal{F}^{-1}\left\{ \exp\left(j \sqrt{k_0^2 - k_x^2 - k_y^2} \, d\right) \right\}$, where $\mathcal{F}^{-1}\{\cdot\}$ represents the inverse Fourier transform and d is the propagation distance. conj(·) is the conjugate operator. 5 to 10 iterations are generally needed for the recovery. The coordinate r is omitted in the above equations.

3. Simulations

In this section, we use simulations to verify the performance of our scheme. We choose the 'baboon' image as the amplitude and the 'lighthouse' image as the phase of the object. 7 × 7 LEDs are lit simultaneously as the multiplexed illumination. The other parameters are the same as those of the single-shot experiments, detailed in Table 1. We calculate the structural similarity index (SSIM) [34] and the root-mean-square error (RMSE) of the central 500 × 500 pixels of the recovered 600 × 600 pixel images as the evaluation criteria.
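The evaluation step can be written compactly; the sketch below assumes scikit-image for SSIM and crops the central 500 × 500 pixels as stated above:

```python
import numpy as np
from skimage.metrics import structural_similarity

def evaluate(recovered, truth, crop=500):
    """SSIM [34] and RMSE over the central crop x crop pixels."""
    def center(img):
        n, m = img.shape
        return img[(n - crop) // 2:(n + crop) // 2,
                   (m - crop) // 2:(m + crop) // 2]
    a, t = center(recovered), center(truth)
    ssim = structural_similarity(a, t, data_range=t.max() - t.min())
    rmse = np.sqrt(np.mean((a - t)**2))
    return ssim, rmse
```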

Table 1. Parameters in experiments of single-shot scheme.

We compare the results of the multi-acquisition case (49 measurements) and our single-shot scheme in Fig. 3(a). Our method achieves comparable results in terms of SSIM and RMSE with a single-shot measurement instead of 49 measurements. In particular, as qualitative phase retrieval methods, the recovered phase images of both the multi-acquisition case and our single-shot scheme show some inaccuracy compared to the ground truth. Recovered results of directly splitting the single-shot measurement with different radius ratios are also presented in Fig. 3(b). Even the best-quality reconstruction (with SSIM or RMSE highlighted in yellow) exhibits some periodic artifacts, especially in the recovered phase images. In Fig. 3(c), we plot the SSIM and RMSE curves of the recovered amplitude and phase using the above-mentioned methods, where the x axis indicates the radius ratio. Our method clearly improves on the direct split method and achieves performance comparable to the multi-acquisition case.

Fig. 3 Simulation results of a complex object using 10 iterations. (a) Results of multi-acquisition lensless imaging and our single-shot scheme. (b) Results using the direct split method [24] with different split ratios. A loss of detail and periodic artifacts are introduced, especially in the recovered phase. Among them, the results with SSIM or RMSE in yellow perform best. (c) SSIM and RMSE curves of the recovered results. Our method achieves similar performance to the multi-acquisition case.

4. Experiment results

4.1. Single-shot scheme

To further validate our method, we build a proof-of-concept setup of the single-shot scheme, as shown in Fig. 4(a). The LED matrix has 32 × 32 LEDs in total with 514 nm central wavelength and ∼20 nm bandwidth. A narrow-band spectral filter (514 nm central wavelength and 3 nm full width at half maximum (FWHM)) is added to improve the temporal coherence of the illumination. The distance between adjacent LEDs is 4 mm. A printed pinhole with 150 μm radius is used. The camera in use is a monochromatic CMOS sensor (Point Grey CM3-U3-50S5M) with 2048 × 2448 pixels, 3.45 μm pixel size and a 30 fps maximum frame rate. In the data capturing process, a single illumination pattern is used, as shown in Fig. 4(b), where 9 × 9 LEDs at intervals of 3 LED units (12 mm between adjacent lit LEDs) are lit simultaneously. The central 1100 × 1100 pixels of the sensor’s active area are used for recording the single-shot measurement, resulting in a ∼1.20 × 1.20 mm² FOV after reconstruction. Using a single-shot acquisition without sample, the intensity coefficients of the LED lights are calculated for intensity calibration in each iteration. Considering the refractive indices and thicknesses of the pinhole mask and sample slides, we calculate equivalent distances (optical path lengths) between planes to model the propagation process. More details of the parameters are listed in Table 1. To speed up the calculation, we refresh the sum of diffraction patterns $\hat{I}_{\mathrm{recovered}}^{(k)}$ every 5 angles in practice. We also use the parallel computation of MATLAB (R2016b) to accelerate the computation on an Intel Core i7 CPU with 8 cores.

Fig. 4 (a) is the real setup of our single-shot scheme. (b) is the single-shot illumination pattern and (c) shows the corresponding single-shot measurement. (d) and (e) are the recovered results of a USAF-1951 resolution chart. Although with some artifacts, group 6-5 of the resolution chart can be distinguished, achieving 4.92 μm half-pitch resolution.

We show the recovered result of a USAF-1951 resolution chart in Fig. 4(d). From the close-up in Fig. 4(e), group 6-5 is distinguished, giving 4.92 μm half-pitch resolution. Compared to the pixel size (3.45 μm) of the camera sensor, the resolution loss here is mainly due to the crosstalk within the single-shot measurement. Reconstruction artifacts are caused by both the crosstalk and the partial coherence of the illumination. More details are discussed in Section 5.2.

4.2. 4-frame scheme

Our method is flexible and extensible in its setup. Considering the resolution loss and artifacts in the single-shot scheme, more measurements can be captured to obtain higher resolution and a larger FOV (detailed in Sections 5.2 and 5.3). In this scheme, we successively light up 4 illumination patterns (10 × 11 LEDs with 12 mm intervals in total) and capture 4 measurements, as shown in Fig. 5(a). A larger pinhole with 300 μm radius is used. The central 1500 × 1500 pixels of each measurement are cropped for reconstruction. The achievable FOV is about 2.35 × 2.55 mm². More details of the parameters are in Table 2. The modification of the algorithm from the single-shot scheme to the 4-frame scheme is straightforward. The iteration process is the same as that in Fig. 2 except for the calculation of $\hat{U}_{4,\mathrm{update}}^{(k)}(\mathbf{r})$, where the 4 measurements are now used successively as $\hat{I}_{\mathrm{detected}}(\mathbf{r})$. We present the recovered result of a USAF-1951 chart to demonstrate the resolution enhancement of the 4-frame scheme, as shown in Figs. 5(c) and 5(d). Since group 7-2 of the chart can be distinguished, we achieve ∼3.48 μm half-pitch resolution. This is mainly limited by the pixel size (3.45 μm) of the sensor.
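The 4-frame extension can thus reuse the single-shot update loop, cycling through the captured frames, each paired with its own LED set (a sketch built on the one_iteration function above; measurements and led_sets are hypothetical variable names):

```python
# Outer loop of the 4-frame reconstruction (5-10 iterations, as before):
for _ in range(10):
    for I_meas, led_xy in zip(measurements, led_sets):   # the 4 frames
        obj, pinhole = one_iteration(obj, pinhole, I_meas, led_xy,
                                     pinhole_xy, wavelength, dx, d0, d1, d2)
```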

Fig. 5 (a) includes the illumination patterns and the corresponding 4 measurements of the 4-frame scheme. (b) is a close-up of one raw image. (c) and (d) are the recovered results of the USAF-1951 resolution chart. As shown in (d), group 7-2 can be distinguished, indicating about 3.48 μm half-pitch resolution.

Table 2. Parameters in experiments of 4-frame scheme.

4.3. Performance comparison of two schemes

In this section, we use results of biological samples to further compare the performance of the single-shot scheme and the 4-frame scheme. Recovered amplitude images of a stained lilium ovary cross section are shown in Fig. 6. Specifically, Fig. 6(a) presents the result of the single-shot scheme. In comparison, Fig. 6(b) is the result of the 4-frame scheme for the same region, cropped from the full-FOV image in Fig. 6(c) and highlighted with the red dashed box. Clearly, the FOV of the 4-frame scheme (∼2.35 × 2.55 mm²) is over 4 times that of the single-shot scheme (∼1.20 × 1.20 mm²). Some close-ups of Figs. 6(a) and 6(b) are shown at the bottom left, from which we can see that the recovered result of the 4-frame scheme has more details and better quality than that of the single-shot scheme.

Fig. 6 Recovered amplitude images of a lilium ovary cross section. (a) is the result using the single-shot scheme, while (b) is the result of the 4-frame scheme for the same region, cropped from the full-FOV image (c) and marked by the red dashed box. Some close-ups of both schemes are presented at the bottom left. Lineouts are shown to demonstrate the differences. The result of the 4-frame scheme has over 4 times the FOV and better details.

We also show the recovered complex images of a grasshopper meiosis section in Fig. 7. The recovered amplitude images of the single-shot scheme and the 4-frame scheme for the same region are shown in Figs. 7(a1) and 7(c1), while the phase images are shown in Figs. 7(b1) and 7(d1) correspondingly. From the close-ups of the amplitude images in Figs. 7(a2) and 7(c2), and the close-ups of the phase images in Figs. 7(b2) and 7(d2), we can see that enhanced details are recovered by the 4-frame scheme, as emphasized by the blue and red arrows. The recovered amplitude and phase images over the full FOV using the 4-frame scheme are respectively shown in Figs. 7(e) and 7(f). The cropped areas are marked by blue and red dashed boxes correspondingly, which occupy about 1/4 of the full FOV. Some more close-ups of the results using the 4-frame scheme are also presented on the right of Fig. 7 with good details. Overall, all the recovered images have some periodic artifacts in empty areas and at borders, which are mainly caused by the crosstalk among diffraction patterns and the partial spatial coherence of the illumination (detailed in Section 5.2). The 4-frame scheme is validated to have higher resolution and a larger FOV than the single-shot scheme. Due to the demultiplexing steps of our algorithm, especially the calculation of $\hat{I}_{\mathrm{detected}}(\mathbf{r})$ in the iteration, and the 2.5-fold upsampling, the reconstruction time of the single-shot scheme is ∼30 min and that of the 4-frame scheme is ∼50 min.

Fig. 7 Recovered complex images of a grasshopper meiosis section. (a1) and (b1) are the amplitude and phase images of the single-shot scheme, while (a2) and (b2) are some close-up images. (c1) and (d1) are the results of the 4-frame scheme for the same region, while the close-ups are shown in (c2) and (d2). Different details are emphasized by blue and red arrows. The cropped areas are marked by blue and red dashed boxes in images (e) and (f) correspondingly, which occupy about 1/4 of the full FOV. More close-up images of the results using the 4-frame scheme are shown on the right.

5. Discussion

In this section, we discuss the parameter selection of our system, explain the resolution loss in the single-shot scheme and the performance improvement in the 4-frame scheme, and present the calculation of the achievable FOV. The performance of our method can be further improved by using a camera sensor with smaller pixel size, larger effective area, faster frame rate and higher photon efficiency.

5.1. Parameter selection

In both the single-shot and 4-frame schemes, the parameters (such as the configuration of the LED matrix, the distances between optical elements, and the pinhole size) are chosen based on the following analysis. The number and interval of the lit LEDs and the distance between the LED matrix and the pinhole (d0) together determine the incident lights (their number, angles and intensities) of the multiplexed illumination. Considering the spatial size and limited illumination intensity of each LED, the distance d0 is set between 320 and 380 mm.

Once the incident angles of the illumination are determined, the remaining parameters are subject to two main constraints. Constraint 1: the multiple light beams on the object plane should overlap sufficiently to guarantee the convergence of the phase retrieval algorithm. This requires a close pinhole-to-object distance (d1). But a close distance d1 means a small FOV of the observed object, so a tradeoff must be made. Constraint 2: the resulting diffraction patterns on the detection plane should have sufficient intervals to prevent heavy crosstalk; otherwise even the optimization algorithm will fail to produce a good reconstruction. A relatively large object-to-sensor distance (d2) is therefore required. However, the distance d2 should not be made arbitrarily large, because it also determines the extent of the detected diffraction patterns (the number of pixels in use) within the active area of the camera sensor. Fewer pixels in use means higher utilization efficiency of the sensor. As shown in the simulation in Fig. 8, in order to balance the effect of crosstalk and the number of pixels in use, we place the sensor ∼5 mm behind the object plane in both schemes. Besides, since the LED illumination is only partially temporally coherent (although a spectral filter is added), the relatively close distance d2 ≈ 5 mm mitigates the resolution loss in the recovery.

Fig. 8 Simulation of the single-shot scheme with a varying object-to-sensor distance d2. A smaller d2 causes heavier crosstalk and results in poor reconstruction quality (larger RMSE). We use d2 ≈ 5 mm in practice (since the RMSE curve of the amplitude almost converges there) to balance the crosstalk and the number of pixels in use.

Constraint 1 favors a large pinhole, while constraint 2 favors a small one. To provide similar overlaps on the object plane and similar intervals on the detection plane in both schemes, we choose a 150 μm radius in the single-shot scheme and 300 μm in the 4-frame scheme. The pinhole-to-object distance (d1) is accordingly set to ∼3 mm in the single-shot scheme and ∼6 mm in the 4-frame scheme. In conclusion, due to the complicated and coupled interaction of these parameters, they are chosen empirically based on the above analysis for both simulations and experiments. Optimal parameter selection is left for future work.
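As a back-of-the-envelope illustration of the two constraints, a pure ray-geometry sanity check of our own (it ignores diffraction spreading, which further enlarges the patterns) reproduces the qualitative picture:

```python
def geometry_check(lit_pitch, d0, d1, d2, Rp):
    """Rough overlap/interval estimates; all lengths in mm.
    lit_pitch: spacing between adjacent lit LEDs (12 mm here)."""
    beam_sep_obj = lit_pitch / d0 * d1        # beam-center spacing on object
    overlap_obj = 2 * Rp - beam_sep_obj       # > 0 wanted (constraint 1)
    pat_sep_det = lit_pitch / d0 * (d1 + d2)  # pattern-center spacing on sensor
    interval_det = pat_sep_det - 2 * Rp       # >~ 0 wanted (constraint 2)
    return overlap_obj, interval_det

# Single-shot scheme: beams overlap strongly on the object (~0.19 mm),
# while the ~0.3 mm diffraction patterns on the sensor nearly touch
# (interval ~ 0), consistent with the small residual crosstalk.
print(geometry_check(12, 323, 3, 5, 0.15))
```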

5.2. Achievable resolution and reconstruction artifacts

The reconstruction performance of our method is strongly affected by the crosstalk among diffraction patterns and the partial coherence of the illumination. As mentioned above, the existing crosstalk is determined by the parameter selection, especially the pinhole size and the object-to-sensor distance. The spatial coherence of the illumination is determined by the spatial size of each LED and the distance between the LED matrix and the object. The temporal coherence is determined by the bandwidth of the illumination and the object-to-sensor distance, which normally limits the resolution in partially coherent lensless approaches [5].

We use simulations of a Siemens star target to specifically discuss the achievable resolution and reconstruction artifacts in both schemes. As shown in Fig. 9(c), the 1st column is the coherent case. The 2nd and 3rd columns respectively model the partial temporal coherence of the illumination with and without the spectral filter. The 4th column only considers the effect of the partial spatial coherence of the illumination, for which we use the real spatial size of the LED. The 5th column reflects the concurrence of partial spatial and temporal coherence in the real condition. In the single-shot scheme, the crosstalk is heavier because of the stronger diffraction introduced by the smaller pinhole, while the other parameters are kept approximately equivalent. As shown in the 1st column of Fig. 9(c), this crosstalk causes resolution loss and artifacts in the recovery, while the 4-frame scheme keeps nearly the same resolution as the ground truth in Fig. 9(a). Results in the same rows of the first two columns, labeled by the blue dashed box in Fig. 9(c), have similar performance. This indicates that the effect of the partial temporal coherence on the resolution has been eliminated by the spectral filter and the ∼5 mm object-to-sensor distance in use. The same phenomenon can be found when comparing the results of the last two columns in the green dashed box. On the contrary, the results in the 3rd column show obvious resolution loss because of the wide bandwidth of the illumination. As shown in the 5th column, the 4-frame scheme in the real condition has higher resolution and fewer artifacts than the single-shot scheme. This is consistent with the results in Fig. 4 and Fig. 5, where the half-pitch resolution improves from ∼4.92 μm in the single-shot scheme to ∼3.48 μm in the 4-frame scheme with fewer artifacts. From these simulation results, we conclude that the resolution loss in our method is mainly caused by the crosstalk among diffraction patterns and is hardly affected by the partial temporal coherence of the illumination. The artifacts are mainly caused by both the crosstalk and the partial spatial coherence. To mitigate the effect of partial coherence and improve the performance, the mixed-state model [35,36] can be introduced.
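For reference, the partially coherent cases in Fig. 9 can be modeled by incoherent mode summation, as sketched below: sampled wavelengths weighted by the spectrum of Fig. 9(b) account for temporal coherence, and sampled source points across each LED die account for spatial coherence. The variable names and sampling choices are ours; the authors' exact simulation procedure is not spelled out in the text:

```python
# Incoherent mode summation over spectrum samples (temporal coherence)
# and source points within each LED die (spatial coherence).
I = np.zeros(obj.shape)
for lam, w in zip(wavelengths, spectral_weights):
    for sx, sy in die_points:
        shifted = [(xi + sx, yi + sy) for xi, yi in led_xy]
        I += w * forward_model(obj, pinhole, shifted, pinhole_xy,
                               lam, dx, d0, d1, d2)
I /= np.sum(spectral_weights) * len(die_points)   # normalize the weights
```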

Fig. 9 Simulation results showing the effect of the partial coherence of the illumination on both schemes based on real parameters. We use a Siemens star target to evaluate the resolution, as shown in (a). The spectrum curves of the illumination before and after the spectral filter are shown in (b). Simulation results corresponding to different partially coherent conditions are presented in (c). 'Setup 1' in the upper half of (c) refers to the single-shot scheme, while 'Setup 2' in the bottom half refers to the 4-frame scheme. In comparison, the latter has better resolution and fewer artifacts due to the lower crosstalk among diffraction patterns. Recovered results in the same rows of the blue dashed box (columns 1 and 2) and of the green dashed box (columns 4 and 5) respectively have similar performance. This indicates that the effect of the partial temporal coherence has been mostly eliminated, given the spectral filter and ∼5 mm object-to-sensor distance in use. Comparing the results of columns 1 and 4, the spatial size of the LED (the partial spatial coherence) leads to artifacts.

In real experiments, reconstruction artifacts are also caused by defects of the LED matrix: (1) the diffraction pattern under a large-angle light has a low SNR; (2) the LEDs are not accurately arranged in the matrix, and we only use approximate guesses of the incident angles during reconstruction. Besides manufacturing a better-built LED matrix, we can also use a domed LED array to improve the intensity of large-angle lights [37], LED positional misalignment correction methods for angular illumination calibration [38,39], and the idea of momentum to improve the convergence [40]. Moreover, in order to achieve high resolution within a single shot (or single exposure), a more coherent light source, instead of the easily built LED matrix in our proof-of-concept system, can be applied to realize the multi-angle illumination. For example, we can use laser light with a bundle of single-mode fibers [8], a fast scanning mirror [41] or a DMD device [14,42] for angular modulation.

5.3. FOV

Due to the multiplexed use of the sensor’s active area, the width of the FOV in our method can be estimated as

$$W_{\mathrm{FOV}} \approx 2 \times (z_1 + R_p), \tag{5}$$
where z1 = d1/d0 × z0 is the distance between the marginal light beam and the central light beam on the object plane, and z0 is the distance between the marginal LED and the central LED. The radius of each light beam on the object plane approximates the pinhole radius Rp. In the single-shot scheme, z0 = 48 mm, d0 ≈ 323 mm, d1 ≈ 3 mm, and Rp = 0.15 mm, so the FOV is about 1.20 × 1.20 mm². In the 4-frame scheme, z0 equals 54 mm in one dimension and 60 mm in the other, d0 ≈ 370 mm, d1 ≈ 6 mm, and Rp = 0.3 mm, so the FOV is about 2.35 × 2.55 mm². In comparison, the utilized area of the sensor is ∼3.80 × 3.80 mm² (1100 × 1100 pixels) in the single-shot scheme. For the 4-frame scheme, the total utilized area is equivalent to ∼10.36 × 10.36 mm² (4 × 1500 × 1500 pixels). Generally speaking, the larger FOV obtained by the 4-frame scheme is due to the additional effective pixels provided by the 4 measurements. From Eq. (5), once the pinhole-to-object distance (d1) and the pinhole size (Rp) are determined, the FOV can be enlarged by an illumination with more incident lights and larger incident angles (a larger z0). Meanwhile, a larger active area of the sensor must be used for information recording.
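Plugging the experimental numbers into Eq. (5) reproduces the quoted FOVs (a quick check; all lengths in mm):

```python
def fov_width(z0, d0, d1, Rp):
    """Eq. (5): W_FOV ~ 2 * (z1 + Rp), with z1 = d1 / d0 * z0."""
    return 2 * (d1 / d0 * z0 + Rp)

print(fov_width(48, 323, 3, 0.15))       # ~1.19 -> ~1.20 x 1.20 mm^2
print(fov_width(54, 370, 6, 0.3),        # ~2.35
      fov_width(60, 370, 6, 0.3))        # ~2.55 -> ~2.35 x 2.55 mm^2
```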

5.4. Imaging speed

Although the proof-of-concept single-shot lensless imaging system does not require hardware synchronization, the real acquisition frame rate is still limited. The narrow-band filter added in the light path weakens the LED illumination intensity and results in a long exposure time (> 180 ms). For the 4-frame scheme, as the 4 measurements are captured sequentially and software synchronization is applied in our implementation, the final frame rate is < 0.3 fps. By using a sensor with better photon efficiency, an LED or laser source with higher intensity, a filter with higher transmittance and more accurate hardware synchronization, we can improve the imaging speed of our schemes in the future.

6. Conclusion

In this paper, we propose a single-shot lensless imaging scheme with the potential of observing dynamic samples. We first present theoretical analysis and simulations to demonstrate the effectiveness of our method. A proof-of-concept system is then built, with experimental results. To show the flexibility of our system in trading off data throughput and temporal resolution, we present a 4-frame acquisition scheme for a performance enhancement with higher resolution and larger FOV. We believe the resolution, FOV and imaging speed of our method can be further improved by using a high-performance camera sensor and by improving the intensity and coherence of the illumination. Optimized parameter selection, a reconstruction algorithm with a momentum term for better convergence [40], and the use of the mixed-state model to compensate for the decoherence of the illumination [35,36] are left for future work. We can also extend the application from thin slides to thick samples by introducing the multi-slice model [43–45].

Funding

National Natural Science Foundation of China (NSFC) (No. 61327902, 61722110 and 61627804).

Acknowledgments

The authors thank Dr. Zibang Zhang, Mingjie Zhang, Xuemei Hu and Dr. Xu Zhang for academic discussion, and thank Dr. Ziji Liu and Xingye Chen for technical support. The authors declare that there are no conflicts of interest related to this article.

References and links

1. O. Mudanyali, D. Tseng, C. Oh, S. O. Isikman, I. Sencan, W. Bishara, C. Oztoprak, S. Seo, B. Khademhosseini, and A. Ozcan, “Compact, light-weight and cost-effective microscope based on lensless incoherent holography for telemedicine applications,” Lab Chip 10, 1417–1428 (2010).

2. A. Greenbaum, W. Luo, T.-W. Su, Z. Göröcs, L. Xue, S. O. Isikman, A. F. Coskun, O. Mudanyali, and A. Ozcan, “Imaging without lenses: achievements and remaining challenges of wide-field on-chip microscopy,” Nat. Methods 9, 889–895 (2012).

3. C. Zuo, J. Sun, J. Zhang, Y. Hu, and Q. Chen, “Lensless phase microscopy and diffraction tomography with multi-angle and multi-wavelength illuminations using a LED matrix,” Opt. Express 23, 14314–14328 (2015).

4. A. Ozcan and E. McLeod, “Lensless imaging and sensing,” Annu. Rev. Biomed. Eng. 18, 77–102 (2016).

5. S. B. Kim, H. Bae, K.-i. Koo, M. R. Dokmeci, A. Ozcan, and A. Khademhosseini, “Lens-free imaging for biological applications,” J. Lab. Autom. 17, 43–49 (2012).

6. A. Greenbaum, Y. Zhang, A. Feizi, P.-L. Chung, W. Luo, S. R. Kandukuri, and A. Ozcan, “Wide-field computational imaging of pathology slides using lens-free on-chip microscopy,” Sci. Transl. Med. 6, 267 (2014).

7. W. Bishara, T.-W. Su, A. F. Coskun, and A. Ozcan, “Lensfree on-chip microscopy over a wide field-of-view using pixel super-resolution,” Opt. Express 18, 11181–11191 (2010).

8. W. Luo, A. Greenbaum, Y. Zhang, and A. Ozcan, “Synthetic aperture-based on-chip microscopy,” Light Sci. Appl. 4, e261 (2015).

9. W. Luo, Y. Zhang, A. Feizi, Z. Göröcs, and A. Ozcan, “Pixel super-resolution using wavelength scanning,” Light Sci. Appl. 5, e16060 (2016).

10. L. Allen and M. Oxley, “Phase retrieval from series of images obtained by defocus variation,” Opt. Commun. 199, 65–75 (2001).

11. M. Guizar-Sicairos and J. R. Fienup, “Phase retrieval with transverse translation diversity: a nonlinear optimization approach,” Opt. Express 16, 7264–7278 (2008).

12. A. M. Maiden and J. M. Rodenburg, “An improved ptychographical phase retrieval algorithm for diffractive imaging,” Ultramicroscopy 109, 1256–1262 (2009).

13. J. M. Rodenburg and H. M. Faulkner, “A phase retrieval algorithm for shifting illumination,” Appl. Phys. Lett. 85, 4795–4797 (2004).

14. A. Sun, X. He, Y. Kong, H. Cui, X. Song, L. Xue, S. Wang, and C. Liu, “Ultra-high speed digital micro-mirror device based ptychographic iterative engine method,” Biomed. Opt. Express 8, 3155–3162 (2017).

15. M. Sanz, J. A. Picazo-Bueno, J. García, and V. Micó, “Improved quantitative phase imaging in lensless microscopy by single-shot multi-wavelength illumination using a fast convergence algorithm,” Opt. Express 23, 21352–21365 (2015).

16. C. Allier, S. Morel, R. Vincent, L. Ghenim, F. Navarro, M. Menneteau, T. Bordy, L. Hervé, O. Cioni, X. Gidrol, Y. Usson, and J.-M. Dinten, “Imaging of dense cell cultures by multiwavelength lens-free video microscopy,” Cytom. Part A 91, 433–442 (2017).

17. L. Waller, S. S. Kou, C. J. Sheppard, and G. Barbastathis, “Phase from chromatic aberrations,” Opt. Express 18, 22817–22825 (2010).

18. Y. Zhou, J. Wu, Z. Bian, J. Suo, G. Zheng, and Q. Dai, “Fourier ptychographic microscopy using wavelength multiplexing,” J. Biomed. Opt. 22, 066006 (2017).

19. R. Horisaki, R. Egami, and J. Tanida, “Single-shot phase imaging with randomized light (SPIRaL),” Opt. Express 24, 3765–3773 (2016).

20. N. Antipa, G. Kuo, R. Heckel, B. Mildenhall, E. Bostan, R. Ng, and L. Waller, “DiffuserCam: lensless single-exposure 3D imaging,” Optica 5, 1–9 (2018).

21. A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” Optica 4, 1117–1125 (2017).

22. Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light Sci. Appl. 7, 17141 (2018).

23. Y. Wu, Y. Rivenson, Y. Zhang, H. Gunaydin, X. Lin, and A. Ozcan, “Extended depth-of-field in holographic image reconstruction using deep learning based auto-focusing and phase-recovery,” arXiv:1803.08138 (2018).

24. X. Pan, C. Liu, and J. Zhu, “Single shot ptychographical iterative engine based on multi-beam illumination,” Appl. Phys. Lett. 103, 171105 (2013).

25. P. Sidorenko, O. Cohen, and Z. Wei, “Single-shot ptychography,” Optica 3, 9–14 (2016).

26. X. He, C. Liu, and J. Zhu, “Single-shot Fourier ptychography based on diffractive beam splitting,” Opt. Lett. 43, 214–217 (2018).

27. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7, 739 (2013).

28. J. W. Goodman, Introduction to Fourier Optics (Roberts and Company Publishers, 2005).

29. K. Matsushima and T. Shimobaba, “Band-limited angular spectrum method for numerical simulation of free-space propagation in far and near fields,” Opt. Express 17, 19662–19673 (2009).

30. Z. Zhang, Y. Zhou, S. Jiang, K. Guo, K. Hoshino, J. Zhong, J. Suo, Q. Dai, and G. Zheng, “Invited Article: Mask-modulated lensless imaging with multi-angle illuminations,” APL Photonics 3, 060803 (2018).

31. L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for Fourier ptychography with an LED array microscope,” Biomed. Opt. Express 5, 2376–2389 (2014).

32. S. Dong, R. Shiradkar, P. Nanda, and G. Zheng, “Spectral multiplexing and coherent-state decomposition in Fourier ptychographic imaging,” Biomed. Opt. Express 5, 1757–1767 (2014).

33. L. Tian, Z. Liu, L.-H. Yeh, M. Chen, J. Zhong, and L. Waller, “Computational illumination for high-speed in vitro Fourier ptychographic microscopy,” Optica 2, 904–911 (2015).

34. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600–612 (2004).

35. P. Thibault and A. Menzel, “Reconstructing state mixtures from diffraction measurements,” Nature 494, 68 (2013).

36. P. Li and A. Maiden, “Lensless LED matrix ptychographic microscope: problems and solutions,” Appl. Opt. 57, 1800–1806 (2018).

37. Z. F. Phillips, M. V. D’Ambrosio, L. Tian, J. J. Rulison, H. S. Patel, N. Sadras, A. V. Gande, N. A. Switz, D. A. Fletcher, and L. Waller, “Multi-contrast imaging and digital refocusing on a mobile microscope with a domed LED array,” PLoS One 10, e0124938 (2015).

38. L.-H. Yeh, J. Dong, J. Zhong, L. Tian, M. Chen, G. Tang, M. Soltanolkotabi, and L. Waller, “Experimental robustness of Fourier ptychography phase retrieval algorithms,” Opt. Express 23, 33214–33240 (2015).

39. J. Sun, Q. Chen, Y. Zhang, and C. Zuo, “Efficient positional misalignment correction method for Fourier ptychographic microscopy,” Biomed. Opt. Express 7, 1336–1350 (2016).

40. A. Maiden, D. Johnson, and P. Li, “Further improvements to the ptychographical iterative engine,” Optica 4, 736–745 (2017).

41. J. Chung, H. Lu, X. Ou, H. Zhou, and C. Yang, “Wide-field Fourier ptychographic microscopy using laser illumination source,” Biomed. Opt. Express 7, 4787–4802 (2016).

42. C. Kuang, Y. Ma, R. Zhou, J. Lee, G. Barbastathis, R. R. Dasari, Z. Yaqoob, and P. T. So, “Digital micromirror device-based laser-illumination Fourier ptychographic microscopy,” Opt. Express 23, 26999–27010 (2015).

43. A. M. Maiden, M. J. Humphry, and J. Rodenburg, “Ptychographic transmission microscopy in three dimensions using a multi-slice approach,” J. Opt. Soc. Am. A 29, 1606–1614 (2012).

44. T. Godden, R. Suman, M. Humphry, J. Rodenburg, and A. Maiden, “Ptychographic microscope for three-dimensional imaging,” Opt. Express 22, 12513–12523 (2014).

45. L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica 2, 104–111 (2015).
