
Efficient positional misalignment correction method for Fourier ptychographic microscopy

Open Access

Abstract

Fourier ptychographic microscopy (FPM) is a newly developed super-resolution technique that employs angularly varying illumination and a phase retrieval algorithm to surpass the diffraction limit of a low numerical aperture (NA) objective lens. In current FPM imaging platforms, accurate knowledge of the LED matrix's position is critical to achieving good recovery quality. Furthermore, given the wide field-of-view (FOV) of FPM, different regions in the FOV have different sensitivities to LED positional misalignment. In this work, we introduce an iterative method to correct position errors based on the simulated annealing (SA) algorithm. To improve the efficiency of this correction process, a large number of iterations are first performed on several images with low illumination NAs to estimate the initial values of a global positional misalignment model through non-linear regression. Simulation and experimental results are presented to evaluate the performance of the proposed method, demonstrating that it both improves the quality of the recovered object image and relaxes the positioning accuracy required of the LED elements when aligning FPM imaging platforms.

© 2016 Optical Society of America

1. Introduction

Fourier ptychography (FP) [1, 2] is a recently developed phase retrieval technique which overcomes the physical space-bandwidth-product (SBP) limit of a low-NA imaging system. Similar to conventional ptychography approaches [3–5], FP shares its roots with synthetic aperture concepts [6–12] and other phase retrieval techniques [13–19].

In a typical Fourier ptychographic imaging platform, a fixed LED array provides angle-varied illumination. At each illumination angle, a low-resolution (LR) intensity image of the specimen, with the resolution determined by the NA of the objective lens, is recorded. By iteratively combining these LR intensity images in the Fourier domain, FP recovers a high-resolution (HR) complex image of the sample. The final reconstruction resolution is determined by the sum of the objective and illumination NAs [20]. To exceed the limitations of the original FPM technique, several studies have recently been reported to further improve FPM and FP [21–28]. Some of them correct the system aberrations of FPM and significantly improve its reconstruction accuracy and recovered resolution [21, 22, 24, 28]. Others reduce the measurement time of the FPM imaging process and remarkably improve its data acquisition efficiency [23–27]. However, similar to conventional ptychography, FP suffers from a positional misalignment problem, which is rarely addressed in these developed FP techniques.

In conventional ptychography, accurate knowledge of the probe scanning positions is essential for a high-quality reconstruction. Thus, different ptychographic correction techniques have been developed for correcting positioning errors, including the conjugate gradient algorithm [19], the genetic method [29], the annealing technique [30], the global drift model [31], and the cross-correlation approach [32]. On the other hand, during data acquisition with the FPM platform, the sample is fixed while an LED array provides angle-varied illumination. Since the sample and the illuminating LED matrix are both positionally fixed and the FPM platform contains no mechanical scanning device, it may seem that positional misalignment of the LED matrix would not be a major systematic error in FP settings. However, the conventional FPM recovery quality is degraded by the positioning error of the LED array in a different way. For some particular segments in the FOV, if the incident angle of one LED element is accidentally shifted out of the objective lens's NA, the corresponding recorded image becomes a dark-field (DF) image instead of a bright-field (BF) image. Without positioning correction, this DF image greatly degrades the recovery accuracy of conventional FPM, since a large amount of low-frequency information is distorted unexpectedly.

Therefore, to numerically correct the positional misalignment in current FPM platforms, we propose a positioning correction approach, named pcFPM, based on the SA algorithm and a non-linear regression technique. Similar to the positioning correction techniques proposed for conventional ptychography, pcFPM evaluates an image-quality metric at each iteration step and then updates the LED position estimate. To improve the convergence efficiency of this iterative correction process, a large number of initial iterations are performed on several images with low illumination NAs to accurately correct the positions of the low-frequency apertures. In addition, we introduce a global positional misalignment model of the LED matrix to enhance both the iteration efficiency and the adjustment accuracy. After obtaining the initial values of the global positioning model with non-linear regression, all the captured images are iterated for precise positional correction. Both simulation and experimental results demonstrate that robust positioning correction is achievable using our pcFPM method. Such a numerical correction scheme not only improves the quality of the recovered object's complex image, but also relaxes the positioning accuracy required of the LED array when aligning FPM platforms.

The remainder of this paper is organized as follows: we analyze the distortion resulting from positional misalignment in conventional FPM in Section 2.1 and then introduce the global positional misalignment model of the LED matrix for FPM in Section 2.2. In Section 2.3, we propose the iterative method pcFPM, which corrects positioning errors based on the SA algorithm and non-linear regression technique within the FP iterative reconstruction procedure. Numerical simulation and experimental results are presented in Section 3 and Section 4 respectively. Finally, conclusions are summarized in Section 5.

2. Principle

2.1. Positional misalignment in FPM

To explain the necessity of positioning correction for the FP algorithm, let us review the basic concepts of the conventional FP reconstruction process. As described in detail in [1], a typical FP microscope consists of an LED array, a light microscope with a low-NA objective lens, and a monochromatic CCD camera. The LED elements on the array are turned on sequentially, one at a time, to illuminate the sample from different incident angles. For each LED$_{m,n}$ (row m, column n) and its illumination angle $(u_{m,n},v_{m,n})$, the camera captures an LR intensity image $I^c_{m,n}$ of the specimen. Then, these LR images are stitched together in the Fourier domain using the conventional FP reconstruction process. In the first step of the FP process, the HR complex amplitude distribution of the sample profile is initialized by a random guess $o_0(x,y)$, with frequency spectrum $O_0(u,v)=\mathcal{F}\{o_0(x,y)\}$. Secondly, this HR estimate is low-pass filtered in the Fourier domain by a circular mask $P_0(u,v)$, simulating the action of the objective lens, to generate an LR image estimate $o^e_{0,m,n}(x,y)=\mathcal{F}^{-1}\{O_0(u-u_{m,n},v-v_{m,n})\,P_0(u,v)\}$. Thirdly, the intensity component of the LR image $|o^e_{0,m,n}|^2$ is replaced by the actual measurement $I^c_{m,n}$ while the phase component is kept unchanged. The updated LR image $o^u_{0,m,n}(x,y)$ is then used to update the corresponding spectrum region of the sample estimate in the fourth step. This replace-and-update sequence is repeated for all incident angles in the fifth step, and the fifth step is iterated J times until the solution converges. Finally, the HR complex image of the sample profile $o_J(x,y)$, consisting of the HR intensity distribution $I_h(x,y)$ and the HR phase distribution $\Phi_h(x,y)$, is recovered.
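For concreteness, the replace-and-update loop described above can be sketched as follows. This is a minimal NumPy illustration under simplifying assumptions (a single fixed pupil, pixel-based aperture centers, no pupil recovery, and a flat initial HR spectrum); the function name `fp_reconstruct` and the array layouts are illustrative, not the authors' implementation.

```python
import numpy as np

F, iF = np.fft.fft2, np.fft.ifft2
fftshift, ifftshift = np.fft.fftshift, np.fft.ifftshift

def fp_reconstruct(images, centers, pupil, hr_shape, n_iter=10):
    """images: dict {(m, n): LR intensity}; centers: dict {(m, n): (row, col) of the
    aperture center in the HR spectrum, in pixels}; pupil: LR-sized low-pass mask."""
    O = np.zeros(hr_shape, dtype=complex)
    O[hr_shape[0] // 2, hr_shape[1] // 2] = 1.0      # flat initial guess (delta at DC)
    lr = pupil.shape
    for _ in range(n_iter):                          # outer FP iterations (J)
        for (m, n), I_c in images.items():
            cu, cv = centers[(m, n)]
            r0, c0 = cu - lr[0] // 2, cv - lr[1] // 2
            sub = O[r0:r0 + lr[0], c0:c0 + lr[1]] * pupil     # low-pass filtered sub-spectrum
            o_e = iF(ifftshift(sub))                          # estimated LR field
            o_u = np.sqrt(I_c) * np.exp(1j * np.angle(o_e))   # replace amplitude, keep phase
            sub_u = fftshift(F(o_u)) * pupil
            # write the updated frequencies back, keep values outside the pupil
            O[r0:r0 + lr[0], c0:c0 + lr[1]] = sub_u + (1 - pupil) * O[r0:r0 + lr[0], c0:c0 + lr[1]]
    return iF(ifftshift(O))                          # HR complex image o_J(x, y)
```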

Note that different positions of the LED elements correspond to different incident angles $(u_{m,n},v_{m,n})$, while accurate knowledge of these frequency apertures is essential to achieve high recovery quality in FPM. Therefore, positional misalignment can introduce frequency distortion in FPM. Moreover, an important advantage of FPM is that it provides high resolution over a wide FOV. Within such a wide FOV, different regions see different incident angles from the same LED. Some segments of the specimen have incident angles that are close to the NA of the objective lens, and these segments are considerably sensitive to the LED matrix's positioning error. Figure 1 presents an example of these regions.


Fig. 1 An example of the segments in the FOV which are very sensitive to the positional misalignment in the FPM platform. (a) the frequency apertures’ positions in the Fourier domain; (b1)–(b3) are the captured LR image, the recovered HR intensity image, and the recovered HR phase distribution without positional misalignment; (c1)–(c3) are the captured LR image, the recovered HR intensity image, and the recovered HR phase distribution with positional misalignment.


The frequency spectrum is presented in Fig. 1(a). The red circle represents the illuminating aperture of one LED element when the LED matrix is correctly aligned. Since the zero frequency lies inside the red circle, a BF image [Fig. 1(b1)] would be captured from this incident angle. However, in a real experiment the LED array is accidentally shifted, and the green dashed circle is the actual illumination aperture of the misplaced LED element. The captured image is now actually a DF image [Fig. 1(c1)] because the zero frequency is shifted out of the green circle. Therefore, when this DF image is used to update the frequency information in the red circle during FP iterative reconstruction, the frequency components in the red circle become severely distorted. As presented in Figs. 1(c2) and 1(c3), the recovered intensity and phase profiles are obviously distorted by the conventional FP algorithm, compared with the ideal intensity and phase profiles displayed in Figs. 1(b2) and 1(b3). Thus, it is very important to correct the misalignment of the LED matrix in the FPM setup.

2.2. Global positional misalignment model in FPM

Before introducing the correction method, we model the misalignment of the LED array in the FP microscope. Figure 2(a) presents the diagram of a misaligned FPM setup and Fig. 2(b) presents the enlargement of the central windowed part in Fig. 2(a).


Fig. 2 The diagram of a misaligned FPM setup.


A programmable 15×15 LED array is placed beneath the specimen. The red LED in Fig. 2 is the central LED of this array. In this paper, we assume that the LED elements are located on a horizontal plane perpendicular to the optical axis and that the distances between adjacent LED elements are identical. Therefore, as shown in Figs. 2(a) and 2(b), we define four global factors that determine every LED element's position: the rotation factor θ, the shift factors along the x- and y-axes, Δx and Δy, and the height factor h. The position of each LED element can then be expressed as

\[ x^i_{m,n} = d_{\mathrm{LED}}\left[\cos(\theta)\,m + \sin(\theta)\,n\right] + \Delta x, \qquad y^i_{m,n} = d_{\mathrm{LED}}\left[-\sin(\theta)\,m + \cos(\theta)\,n\right] + \Delta y, \tag{1} \]
where $x^i_{m,n}$ and $y^i_{m,n}$ denote the position of the LED element at row m, column n, and $d_{\mathrm{LED}}$ denotes the distance between adjacent LED elements. In this paper, we set $d_{\mathrm{LED}}$ = 4 mm in both simulations and real experiments. In addition, we use a 15 × 15 LED matrix to provide angle-varied illumination, which means m ∈ {−7,…,0,…,7} and n ∈ {−7,…,0,…,7}. The light from each LED can be accurately treated as a plane wave over each small image region of the specimen. The incident wave-vector $(u_{m,n},v_{m,n})$ for each segment can be expressed as
\[ u_{m,n} = \frac{2\pi}{\lambda}\,\frac{x_o - x^i_{m,n}}{\sqrt{(x_o - x^i_{m,n})^2 + (y_o - y^i_{m,n})^2 + h^2}}, \qquad v_{m,n} = \frac{2\pi}{\lambda}\,\frac{y_o - y^i_{m,n}}{\sqrt{(x_o - x^i_{m,n})^2 + (y_o - y^i_{m,n})^2 + h^2}}, \tag{2} \]
where $(x_o,y_o)$ is the central position of each small segment, λ is the illumination wavelength, and h is the distance (along the z direction) between the LED array and the specimen.
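Eqs. (1) and (2) translate directly into code. The following is a short sketch; the function names, SI units (meters), and default parameter values are assumptions for illustration.

```python
import numpy as np

def led_position(m, n, theta, dx, dy, d_led=4e-3):
    """Eq. (1): position of the LED at row m, column n under rotation theta and shifts (dx, dy)."""
    c, s = np.cos(theta), np.sin(theta)
    x = d_led * (c * m + s * n) + dx
    y = d_led * (-s * m + c * n) + dy
    return x, y

def incident_wavevector(x_o, y_o, x_led, y_led, h, wavelength=632e-9):
    """Eq. (2): incident wave-vector (u, v) for a segment centered at (x_o, y_o)."""
    rho = np.sqrt((x_o - x_led) ** 2 + (y_o - y_led) ** 2 + h ** 2)
    u = 2 * np.pi / wavelength * (x_o - x_led) / rho
    v = 2 * np.pi / wavelength * (y_o - y_led) / rho
    return u, v
```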

2.3. Iterative correction algorithm

Similar to the correction routines developed for conventional ptychography [30, 31], we propose an iterative method, named pcFPM, to correct positioning errors based on the SA algorithm and a non-linear regression technique. Figure 3 shows a flow chart of its operation.


Fig. 3 Block diagram of the pcFPM method.


At the beginning, initial guesses of the sample spectrum and pupil function, labelled $O_0(u,v)$ and $P_0(u,v)$, are provided to start the algorithm. Generally, the Fourier transform of an up-sampled LR BF image is taken as the initial sample spectrum. The initial pupil function guess is set as a circular low-pass filter, with ones inside the pass band, zeros outside the pass band, and uniform zero phase. The radius of the pass band is $2\pi\mathrm{NA}/\lambda$, where NA is the numerical aperture of the microscope objective lens and λ is the illumination wavelength. All the captured images are addressed in a sequence $I^c_{m,n}(x,y)$ and considered in turn, with both the sample spectrum and pupil function updated in each loop.
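As an illustration, the initial circular pupil described above might be generated as follows; the grid construction and parameter names are assumptions, not the authors' code.

```python
import numpy as np

def initial_pupil(n_pix, pixel_size, na=0.1, wavelength=632e-9):
    """Binary circular pupil P_0(u, v): ones inside the pass band of radius 2*pi*NA/lambda."""
    du = 2 * np.pi / (n_pix * pixel_size)              # angular-frequency sampling step
    u = (np.arange(n_pix) - n_pix // 2) * du
    U, V = np.meshgrid(u, u)
    cutoff = 2 * np.pi * na / wavelength               # pass-band radius
    return (U ** 2 + V ** 2 <= cutoff ** 2).astype(complex)   # uniform zero phase
```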

Secondly, we define the LED updating range $S_j$ for each iteration. Normally, the updating process would repeat for all 225 images until every incident angle has been processed, completing one iteration of the algorithm. However, as shown in Section 2.1, the low-frequency information is more important for the reconstructed image in the FPM method. Based on this fact, a large number of iterations are first performed on several images with low illumination NAs to correct the low-frequency apertures' positions in the Fourier domain. For pcFPM, in the first nine iterations, where j = 1,…,9, the process repeats over 5 × 5 images with the LED updating range $S_j = \{(m,n)\,|\,m = -2,\ldots,2,\; n = -2,\ldots,2\}$. Since only 25 images are computed in each iteration, the initial values of the four global positional factors can be efficiently obtained within nine iterations. It should be noted that different choices of parameters in pcFPM affect its correction accuracy and efficiency. In this paper, considering the FPM setup we used for the experiments, we select 5 × 5 images for the initial iterations because these 25 images approximately cover all the images that could accidentally change from a BF image to a DF image (or vice versa) because of positional misalignment. In addition, since the convergence efficiency of the SA algorithm depends on the initial guess, an accurate initial solution before iterating over the entire LED array is very important for improving the convergence efficiency of pcFPM. Generally, the aperture positions of those 25 images can be corrected accurately after nine initial iterations under different noise conditions, so we implement nine initial iterations in this manuscript empirically. At the end of each initial iteration, $O_j(u,v)$ and $P_j(u,v)$ need to be re-initialized, because the object's profile can be extremely distorted while those 25 apertures' positions have not yet been corrected properly. After the nine initial iterations, all 225 captured images are iterated for precise positional correction with more iterations. Therefore, in pcFPM, the LED updating range $S_j$ for each iteration is defined as

\[ S_j = \begin{cases} \{(m,n)\,|\,m = -2,\ldots,2,\; n = -2,\ldots,2\} & j \le 9 \\ \{(m,n)\,|\,m = -7,\ldots,7,\; n = -7,\ldots,7\} & \text{else}. \end{cases} \tag{3} \]
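Written out explicitly, Eq. (3) is simply a switch on the iteration counter; a minimal sketch (assuming j is 1-indexed as in the text) is:

```python
def updating_range(j):
    """LED updating range S_j of Eq. (3): 5x5 central LEDs for the first nine iterations, full 15x15 array afterwards."""
    half = 2 if j <= 9 else 7
    return [(m, n) for m in range(-half, half + 1) for n in range(-half, half + 1)]
```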

Next, the SA algorithm is started to correct the incident angle of LED$_{m,n}$ in the Fourier domain. Firstly, a set of further estimates of the frequency aperture, resulting in $o^e_{r,j,m,n}(x,y)$ with r ∈ {1,2,…,R}, is computed, each with a different frequency shift of the form $(\Delta u_{r,m,n}, \Delta v_{r,m,n})$. Here each $(\Delta u_{r,m,n}, \Delta v_{r,m,n})$ is a vector of two random frequency-shifting distances between $\pm\Delta_{uv}$. The variable $\Delta_{uv}$ begins at a predefined value and is decreased to a small (or zero) value over a set number of iterations. In pcFPM, we set R = 8 empirically to ensure the accuracy and efficiency of the SA algorithm. Afterwards, with the knowledge of the reconstructed $O_j(u,v)$ and $P_j(u,v)$ from the previous loop, the rth wavefront estimate in the Fourier domain is computed as

\[ O^e_{r,j,m,n}(u,v) = O_j\bigl(u - (u_{m,n} + \Delta u_{r,m,n}),\; v - (v_{m,n} + \Delta v_{r,m,n})\bigr)\, P_j(u,v), \tag{4} \]
and the simulated image on the detector is its inverse Fourier transform: $o^e_{r,j,m,n}(x,y)=\mathcal{F}^{-1}\{O^e_{r,j,m,n}(u,v)\}$. Next, the intensity distribution of each $o^e_{r,j,m,n}(x,y)$ is compared to $I^c_{m,n}(x,y)$ to give a set of errors
\[ E(r) = \sum_{x,y} \left( |o^e_{r,j,m,n}(x,y)|^2 - I^c_{m,n}(x,y) \right)^2. \tag{5} \]
The estimated intensity distribution $|o^e_{r,j,m,n}(x,y)|^2$ should approximate the captured image $I^c_{m,n}(x,y)$ when the rth shifted aperture position $(u_{m,n} + \Delta u_{r,m,n},\, v_{m,n} + \Delta v_{r,m,n})$ is close to the actual misaligned position. Therefore, the index of the minimum value of E(r) is labelled s, and the frequency aperture's position is updated as
\[ s = \operatorname*{arg\,min}_r\, E(r), \qquad u^u_{m,n} = u_{m,n} + \Delta u_{s,m,n}, \qquad v^u_{m,n} = v_{m,n} + \Delta v_{s,m,n}. \tag{6} \]
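One SA step for a single LED, following Eqs. (4)–(6), might look like the sketch below. The forward model `simulate_lr_intensity` (returning $|o^e_{r,j,m,n}|^2$ from the current $O_j$ and $P_j$ for a given aperture center) is an assumed helper supplied by the caller, and R = 8 follows the text.

```python
import numpy as np

def sa_correct_aperture(I_c, u_mn, v_mn, delta_uv, simulate_lr_intensity, R=8,
                        rng=np.random.default_rng()):
    """Try R random aperture shifts within +/- delta_uv and keep the one with the lowest E(r)."""
    best_err, best_uv = np.inf, (u_mn, v_mn)
    for _ in range(R):
        du, dv = rng.uniform(-delta_uv, delta_uv, size=2)    # random frequency shift
        I_e = simulate_lr_intensity(u_mn + du, v_mn + dv)    # |o^e|^2 from O_j and P_j
        err = np.sum((I_e - I_c) ** 2)                       # Eq. (5)
        if err < best_err:                                   # Eq. (6): keep the best shift
            best_err, best_uv = err, (u_mn + du, v_mn + dv)
    return best_uv, best_err                                 # corrected aperture (u^u, v^u) and E(s)
```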

After updating the frequency aperture's position $(u_{m,n},v_{m,n})$, the modulus of the wavefront $o^e_{s,j,m,n}(x,y)$ that resulted in the lowest error value E(s) is replaced by the measured amplitude $\sqrt{I^c_{m,n}(x,y)}$ to give an auxiliary function for the following updating process [24]

\[ \Delta O_{m,n}(u,v) = \mathcal{F}\!\left\{ \sqrt{I^c_{m,n}(x,y)}\, \frac{o^e_{s,j,m,n}(x,y)}{|o^e_{s,j,m,n}(x,y)|} \right\} - O^e_{s,j,m,n}(u,v). \tag{7} \]
Next, using the corrected aperture position $(u^u_{m,n}, v^u_{m,n})$ and the revised auxiliary function $\Delta O_{m,n}(u,v)$ from Eqs. (6) and (7), two update functions provide the updated object and pupil function [24]
\[ O_j(u - u_{m,n},\, v - v_{m,n}) = O_j(u - u^u_{m,n},\, v - v^u_{m,n}) + \frac{|P_j(u,v)|\, P^*_j(u,v)}{|P_j(u,v)|_{\max}\left( |P_j(u,v)|^2 + \delta_1 \right)}\, \Delta O_{m,n}(u,v), \]
\[ P_j(u,v) = P_j(u,v) + \frac{|O_j(u - u^u_{m,n},\, v - v^u_{m,n})|\, O^*_j(u - u^u_{m,n},\, v - v^u_{m,n})}{|O_j(u - u^u_{m,n},\, v - v^u_{m,n})|_{\max}\left( |O_j(u - u^u_{m,n},\, v - v^u_{m,n})|^2 + \delta_2 \right)}\, \Delta O_{m,n}(u,v), \tag{8} \]
where δ1 and δ2 are regularization constants that ensure numerical stability; they are set as δ1 = 1 and δ2 = 1000 in pcFPM. After LED$_{m,n}$ is updated, another LED element is selected and the iterative steps above are repeated until all the LED elements in $S_j$ have been processed.
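A compact sketch of the update of Eq. (8) is given below, applied to sub-arrays already windowed around the corrected aperture position $(u^u_{m,n}, v^u_{m,n})$. The argument names, and the choice to operate on the extracted sub-spectrum rather than on the full $O_j$, are assumptions made for brevity.

```python
import numpy as np

def update_object_and_pupil(O_sub, P, dO, delta1=1.0, delta2=1000.0):
    """Eq. (8): joint object/pupil update for one aperture.
    O_sub: windowed HR spectrum O_j(u - u^u, v - v^u); P: pupil P_j(u, v); dO: auxiliary Delta O_{m,n}."""
    O_new = O_sub + (np.abs(P) * np.conj(P)) / (
        np.abs(P).max() * (np.abs(P) ** 2 + delta1)) * dO
    P_new = P + (np.abs(O_sub) * np.conj(O_sub)) / (
        np.abs(O_sub).max() * (np.abs(O_sub) ** 2 + delta2)) * dO
    return O_new, P_new
```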

At the end of the jth iteration, an updated frequency aperture position $(u^u_{m,n}, v^u_{m,n})$ has been obtained for every LED element. Since these positions should obey the global positional misalignment model proposed in Section 2.2, we utilize a non-linear regression algorithm [33] to update the four factors (θ, Δx, Δy, h) of the LED matrix's positional misalignment. Mathematically, the non-linear regression process can be expressed as

\[ Q(\theta,\Delta x,\Delta y,h) = \sum_{m,n} \Bigl[ \bigl(u_{m,n}(\theta,\Delta x,\Delta y,h) - u^u_{m,n}\bigr)^2 + \bigl(v_{m,n}(\theta,\Delta x,\Delta y,h) - v^u_{m,n}\bigr)^2 \Bigr], \qquad (\theta,\Delta x,\Delta y,h)^u = \operatorname*{arg\,min}\, Q(\theta,\Delta x,\Delta y,h), \tag{9} \]
where Q(θ,Δx,Δy,h) is the non-linear regression function to be minimized, [u_{m,n}(θ,Δx,Δy,h), v_{m,n}(θ,Δx,Δy,h)] denotes the incident angle as a function of the global positional factors, and (θ,Δx,Δy,h)^u are the updated global positional factors. Afterwards, if j < J, the variable $\Delta_{uv}$ is halved to compress the frequency searching range of the SA algorithm, and the algorithm returns for another iteration. Finally, after the Jth iteration, the positional misalignment has been corrected while reconstructing the object's high-resolution information.
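The regression step of Eq. (9) can be carried out with a standard least-squares solver; the sketch below uses scipy.optimize.least_squares, with the forward model of Eqs. (1)–(2) restated inline. The dictionary layout, SI units (meters), and the initial guess h ≈ 60 mm are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_global_factors(uv_corrected, x_o, y_o, d_led=4e-3, wavelength=632e-9,
                       init=(0.0, 0.0, 0.0, 60e-3)):
    """uv_corrected: dict {(m, n): (u^u_{m,n}, v^u_{m,n})} from the SA step.
    Returns the updated global factors (theta, dx, dy, h)."""
    keys = list(uv_corrected.keys())

    def model(m, n, theta, dx, dy, h):
        # Eq. (1): LED position, then Eq. (2): incident wave-vector.
        x = d_led * (np.cos(theta) * m + np.sin(theta) * n) + dx
        y = d_led * (-np.sin(theta) * m + np.cos(theta) * n) + dy
        rho = np.sqrt((x_o - x) ** 2 + (y_o - y) ** 2 + h ** 2)
        return 2 * np.pi / wavelength * (x_o - x) / rho, 2 * np.pi / wavelength * (y_o - y) / rho

    def residuals(p):
        res = []
        for (m, n) in keys:
            u, v = model(m, n, *p)
            u_meas, v_meas = uv_corrected[(m, n)]
            res += [u - u_meas, v - v_meas]          # terms of Q in Eq. (9)
        return np.asarray(res)

    sol = least_squares(residuals, x0=np.asarray(init, dtype=float))
    return sol.x                                     # (theta, dx, dy, h)^u
```

The default trust-region reflective method of least_squares is adequate for this four-parameter fit, since the residuals are smooth in (θ, Δx, Δy, h).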

3. Simulations

Before applying pcFPM to actual experimental data, we first evaluate its effectiveness in simulation. The parameters are chosen to realistically model a light microscope, with an incident wavelength of 632 nm, an imaging pixel size of 1.6 µm, a small segment of 100×100 pixels, and an objective NA of 0.1. The HR input intensity and phase profiles are shown in Figs. 4(a1) and 4(a2); they serve as the ground truth of the simulated complex sample. We utilize a 15 × 15 LED matrix as the light source to provide angle-varied illumination. The distance between adjacent LED elements is 4 mm, and the distance between the sample and LED matrix is about 60 mm. A set of 225 LR intensity images is simulated under this setting. Positional misalignment is artificially introduced by setting the four positional factors to random values. We then employ the conventional FP reconstruction routine to recover the HR complex sample. Figures 4(b) and 4(c) show two typical situations of FP reconstruction for different small regions in the FOV under the same positional misalignment condition. Figure 4(b) corresponds to a segment in the center of the FOV, while Fig. 4(c) corresponds to a segment displaced from the center of the FOV by (100 µm, 200 µm) along the x- and y-axes respectively. Figures 4(b1), 4(c1), 4(b2), and 4(c2) show the recovered intensity and phase profiles without positional correction, while Figs. 4(b3) and 4(c3) show the central parts of the recovered spectra in the Fourier domain. The illumination aperture positions corresponding to Figs. 4(b3) and 4(c3) are presented in Figs. 4(b4) and 4(c4) respectively. Comparing Figs. 4(b1), 4(c1), 4(b2), and 4(c2), the recovered results for the two different small regions are quite different, even though the same conventional FP algorithm and the same positional misalignment condition are used. It can also be seen that many low-frequency components in Fig. 4(c3) are obviously distorted compared with Fig. 4(b3). Furthermore, Figs. 4(b4) and 4(c4) illustrate the different sensitivities of these two regions by presenting their illumination aperture positions in the Fourier domain. Red triangle dots denote the uncorrected positions, while green circular dots denote the actual misaligned positions. In Fig. 4(c4), one of the frequency apertures (presented as the green dashed circle) is accidentally shifted out of the objective lens's NA, so the corresponding captured image is actually a DF image instead of a BF image. As analysed in Section 2.1, many low-frequency components in Fig. 4(c3) are therefore extremely distorted. On the other hand, in Fig. 4(b4), the frequency aperture of the green dashed circle still generates a BF image, so without positional correction the recovered intensity and phase profiles in Figs. 4(b1) and 4(b2) are only slightly contaminated.


Fig. 4 The reconstruction results for different segments in the FOV using ordinary FPM. (a1) and (a2) are the ideal HR intensity and phase profiles; (b1)–(b4) show the recovered HR intensity and phase profiles, the central part of the recovered frequency spectrum, and the frequency apertures' positions respectively with positional misalignment when the recovered segment is in the center of the FOV; (c1)–(c4) show the recovered HR intensity and phase profiles, the central part of the recovered frequency spectrum, and the frequency apertures' positions respectively with the same misalignment condition when the recovered segment is away from the center of the FOV with (100 µm, 200 µm) shifting along the x- and y-axes.


Next, we utilize pcFPM to correct the positional misalignment in the FPM setup. Empirically, we define the maximum number of iterations J = 12 and the maximum frequency shift $\Delta_{uv}$ = 0.02 µm⁻¹. Figure 5 presents the correction results under the same misalignment condition as in Fig. 4(c). Figures 5(a) and 5(b) show the recovered intensity and phase profiles after positional correction, while Figs. 5(c) and 5(d) show the recovered spectrum in the Fourier domain and the corrected illumination aperture positions in the spectrum respectively. In Fig. 5(d), red triangle dots, green circular dots and blue diamond dots denote the uncorrected positions, the actual misaligned positions and the corrected positions in the Fourier domain respectively. It can be seen that all the misaligned LED elements have been positionally corrected and that the intensity [Fig. 5(a)] and phase [Fig. 5(b)] profiles are recovered perfectly compared with Figs. 4(c1) and 4(c2). Furthermore, since 225 images in total are processed in the first nine iterations (25 × 9 = 225), the time consumed by those nine initial iterations (about 7.3 s) equals the time consumed by one iteration over all 225 captured images (7.2 s) using a laptop PC (Intel Core i5-3320M CPU, 2.6 GHz). In other words, pcFPM achieves complete positioning correction within only four 'total' iterations (28.9 s): nine initial iterations over a small part of the LED array plus three iterations over the full LED array.


Fig. 5 The recovered results using pcFPM under the same misalignment condition as in Fig. 4(c). (a)–(d) show the recovered HR intensity and phase profiles, the recovered frequency spectrum and the frequency apertures’ positions respectively with the same misalignment condition.


To confirm the robustness of our proposed method, we also evaluate the performance of pcFPM under different noise conditions. Figures 6(a)–6(d) present the root-mean-square error (RMSE) of the four position factors (θ, Δx, Δy, and h) during the correcting iterations of pcFPM, under five noise conditions with additive white Gaussian noise of standard deviation σ = 0, 0.01, 0.02, 0.04, 0.08. Under each noise condition, 100 random positional misalignments are simulated. To simulate the positional misalignment of a real FPM platform, the ranges of the randomly simulated positional misalignment factors are set as θ ∈ [−5°,5°], Δx ∈ [−1000 µm,1000 µm], Δy ∈ [−1000 µm,1000 µm], and h ∈ [−1000 µm,1000 µm]. When the four positioning factors exceed these ranges in a real FPM platform, the positional misalignment is obvious enough to be noticed, and the LED matrix can be physically aligned before FPM is used. The results show that the positional misalignment is corrected gradually during the correcting iterations of pcFPM, and after 12 iterations the positional misalignment is completely corrected when the captured images are free of noise. However, when the captured images are affected by noise, pcFPM cannot achieve a perfect rectification. This is because the DF images are very sensitive to noise and their corresponding aperture positions are difficult to adjust perfectly. Nevertheless, the reconstructed image is not distorted significantly even if the high-frequency aperture positions have not been completely corrected, because misplaced high-frequency components of an image do not affect its quality noticeably. Figures 6(e) and 6(f) show the RMSE of the recovered intensity and phase distributions (I and ϕ) under four reconstruction situations as the noise increases. When the LED array is perfectly aligned, the best recovery quality is achieved using FPM, and the reconstruction quality degrades only slightly as the noise increases. However, if the LED matrix is misaligned, the recovery quality decreases significantly using the conventional FPM algorithm, even when the captured images are free of noise. On the other hand, pcFPM guarantees a better recovery quality, which almost equals the best recovery quality as the noise increases. Thus, it is demonstrated that the residual positional misalignment after correction hardly degrades the recovery quality using pcFPM.
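For reference, the ingredients of this noise test can be set up along the following lines; the normalization, clipping, and function names are assumptions for illustration rather than the authors' evaluation script.

```python
import numpy as np

def add_gaussian_noise(image, sigma, rng=np.random.default_rng()):
    """Add white Gaussian noise of standard deviation sigma to a normalized LR intensity image."""
    return np.clip(image + rng.normal(0.0, sigma, image.shape), 0.0, None)

def rmse(recovered, ground_truth):
    """Root-mean-square error of a recovered quantity (position factor, intensity, or phase)."""
    return np.sqrt(np.mean((np.asarray(recovered) - np.asarray(ground_truth)) ** 2))
```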


Fig. 6 The performance of pcFPM under different noise conditions. (a)–(d) show the RMSE of four position factors (θxy,h) during correcting iterations of pcFPM, under five noise conditions with standard deviation σ = 0,0.01,0.02,0.04,0.08; (e) and (f) show the RMSE of the reconstructed intensity and phase profiles (I and ϕ) under four reconstruction situations with noise increasing.


4. Experiments

In order to evaluate the effectiveness of pcFPM experimentally, we compare the recovered intensity distributions of two segments in a USAF target using conventional FPM and pcFPM respectively.

We employ a light microscope (magnification 4×, NA = 0.1) as the imaging system and an LED matrix (15×15, incident wavelength λ = 632 nm) as the light source to provide angle-varied illumination. The distance between adjacent LED elements is 4 mm, and the distance between the sample and LED matrix is about 66 mm. A scientific CMOS (sCMOS) camera (PCO.edge 5.5) with a pixel size of 6.5 µm is used to record images under different incident angles. A set of 225 LR intensity images was captured using this setup, and the FOV of the USAF resolution board is presented in Fig. 7(a). Figures 7(b1) and 7(c1) show the enlargements of two different regions (50 × 50 pixels each) in the FOV. The same data set and pupil function reconstruction algorithm are employed to recover the HR image for each segment using the conventional FPM and pcFPM respectively. In addition, three iterations are conducted in the ordinary FPM, while similarly three iterations over the entire LED array are performed in pcFPM. The only difference between the conventional FPM and the pcFPM utilized in this paper is that pcFPM involves a positioning correction procedure. Figures 7(b2) and 7(c2) present the HR intensity images recovered with conventional FPM corresponding to Figs. 7(b1) and 7(c1), respectively. It can be seen that the intensity profile in Fig. 7(b2) is extremely distorted because of the positional misalignment, while some parts of Fig. 7(c2) are also distorted. This is because the segment in Fig. 7(b1) is a special region that is sensitive to positional misalignment, as discussed in Section 2.1. With the help of pcFPM, high-quality recovered intensity distributions are obtained, shown in Figs. 7(b3) and 7(c3). The distortion pattern in Fig. 7(b2) is removed completely and every resolution element in Fig. 7(c3) is clearly recognizable.


Fig. 7 Experimental results of two segments in a USAF target recovered with conventional FPM and pcFPM. (a) presents the FOV of the USAF resolution board recorded by the camera; (b1)–(b3) show the enlargement of one small segment and the HR intensity images reconstructed with conventional FPM and pcFPM respectively; (c1)–(c3) show the enlargement of another small segment and the HR intensity images reconstructed with conventional FPM and pcFPM respectively.


In addition, we also test our approach on a sample of stained human kidney vessel cells. Similarly, Fig. 8(a) presents the FOV of the specimen, and Figs. 8(b1) and 8(c1) show the enlargements of two different segments (50×50 pixels each) in the FOV. The results reconstructed using conventional FPM are presented in Figs. 8(b2) and 8(c2), and the high-quality results recovered using pcFPM are presented in Figs. 8(b3) and 8(c3). We use the colorbar presented in Fig. 8 to illustrate the intensity and phase distributions within each image. Compared with the conventional FPM, pcFPM provides recovered HR images with more distinct details and removes the evident distortion patterns.


Fig. 8 Experimental results of two segments in a sample of stained human kidney vessel cells reconstructed with conventional FPM and pcFPM. (a) presents the FOV of the specimen recorded by the camera; (b1)–(b3) show the enlargement of one small segment and the HR complex images reconstructed with conventional FPM and pcFPM respectively; (c1)–(c3) show the enlargement of another small segment and the HR complex images reconstructed with conventional FPM and pcFPM respectively.


5. Conclusion

This paper has demonstrated both theoretically and experimentally that a high-quality, noise-robust intensity and phase reconstruction can be obtained efficiently using pcFPM. Unlike the positioning correction methods developed for conventional ptychography, pcFPM first corrects the frequency aperture positions of several images with low illumination NAs using the SA algorithm, and then obtains a more accurate initial solution of the global positional model through non-linear regression. Compared with ordinary FPM, pcFPM can numerically correct the positional misalignment of the LED matrix within the iterative reconstruction procedure and improve the recovery quality significantly. Furthermore, pcFPM proves its efficiency and robustness by accurately correcting the misalignment within four 'total' iterations under different noise conditions.

Although pcFPM can improve the quality of the recovered object's complex image and relax the positioning accuracy required of the LED array, its performance is limited by the SA algorithm. When the LED matrix is considerably misplaced, the SA algorithm may converge to a local optimum, since the number of random frequency shifts R for each LED element is limited. If we enlarge the search number R for each LED element in the SA procedure, pcFPM becomes prohibitively time-consuming. This appears to be a limitation of our approach and will be the subject of future work.

Acknowledgments

This work was supported by the National Natural Science Fund of China (11574152, 61505081), ‘Six Talent Peaks’ project (2015-DZXX-009, Jiangsu Province, China) and ‘333 Engineering’ research project (BRA2015294, Jiangsu Province, China), Fundamental Research Funds for the Central Universities (30915011318), and Open Research Fund of Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense (3092014012200417). C. Zuo thanks the support of the ‘Zijin Star’ program of Nanjing University of Science and Technology.

References and links

1. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013). [CrossRef]  

2. X. Ou, R. Horstmeyer, C. Yang, and G. Zheng, “Quantitative phase imaging via Fourier ptychographic microscopy,” Opt. Lett. 38(22), 4845–4848 (2013). [CrossRef]   [PubMed]  

3. J. M. Rodenburg and H. M. L. Faulkner, “A phase retrieval algorithm for shifting illumination,” Appl. Phys. Lett. 85(20), 4795–4797 (2004). [CrossRef]  

4. P. Thibault, M. Dierolf, O. Bunk, A. Menzel, and F. Pfeiffer, “Probe retrieval in ptychographic coherent diffractive imaging,” Ultramicroscopy 109(4), 338–343 (2009). [CrossRef]   [PubMed]  

5. A. M. Maiden and J. M. Rodenburg, “An improved ptychographical phase retrieval algorithm for diffractive imaging,” Ultramicroscopy 109(10), 1256–1262 (2009). [CrossRef]   [PubMed]  

6. C. J. Schwarz, Y. Kuznetsova, and S. R. Brueck, “Imaging interferometric microscopy,” Opt. Lett. 28(16), 1424–1426 (2003). [CrossRef]   [PubMed]  

7. S. A. Alexandrov, T. R. Hillman, T. Gutzler, and D. D. Sampson, “Synthetic aperture fourier holographic optical microscopy,” Phys. Rev. Lett. 97(16), 168102 (2006). [CrossRef]   [PubMed]  

8. V. Mico, Z. Zalevsky, P. Garcia-Martinez, and J. Garcia, “Synthetic aperture superresolution with multiple offaxis holograms,” J. Opt. Soc. Am. A 23(12), 3162–3170 (2006). [CrossRef]  

9. J. Di, J. Zhao, H. Jiang, P. Zhang, Q. Fan, and W. Sun, “High resolution digital holographic microscopy with a wide field of view based on a synthetic aperture technique and use of linear CCD scanning,” Appl. Opt. 47(30), 5654–5659 (2008). [CrossRef]   [PubMed]  

10. L. Granero, V. Mico, Z. Zalevsky, and J. Garcia, “Synthetic aperture superresolved microscopy in digital lensless Fourier holography by time and angular multiplexing of the object information,” Appl. Opt. 49(5), 845–857 (2010). [CrossRef]   [PubMed]  

11. T. Gutzler, T. R. Hillman, S. A. Alexandrov, and D. D. Sampson, “Coherent aperture-synthesis, wide-field, high-resolution holographic microscopy of biological tissue,” Opt. Lett. 35(8), 1136–1138 (2010). [CrossRef]   [PubMed]  

12. A. E. Tippie, A. Kumar, and J. R. Fienup, “High-resolution synthetic-aperture digital holography with digital phase and pupil correction,” Opt. Express 19(13), 12027–12038 (2011). [CrossRef]   [PubMed]  

13. R. A. Gonsalves, “Phase retrieval and diversity in adaptive optics,” Opt. Eng. 21, 215829 (1982). [CrossRef]  

14. J. R. Fienup, “Phase-retrieval algorithms for a complicated optical system,” Appl. Opt. 32(10), 1737–1746 (1993). [CrossRef]   [PubMed]  

15. L. Allen and M. Oxley, “Phase retrieval from series of images obtained by defocus variation,” Opt. Commun. 199(1), 65–75 (2001). [CrossRef]  

16. B. H. Dean and C. W. Bowers, “Diversity selection for phase-diverse phase retrieval,” J. Opt. Soc. Am. A 20(8), 1490–1504 (2003). [CrossRef]  

17. H. M. L. Faulkner and J. M. Rodenburg, “Movable aperture lensless transmission microscopy: A novel phase retrieval algorithm,” Phys. Rev. Lett. 93(2), 023903 (2004). [CrossRef]   [PubMed]  

18. P. Bao, F. Zhang, G. Pedrini, and W. Osten, “Phase retrieval using multiple illumination wavelengths,” Opt. Lett. 33(4), 309–311 (2008). [CrossRef]   [PubMed]  

19. M. Guizar-Sicairos and J. R. Fienup, “Phase retrieval with transverse translation diversity: a nonlinear optimization approach,” Opt. Express 16(10), 7264–7278 (2008). [CrossRef]   [PubMed]  

20. S. Pacheco, B. Salahieh, T. Milster, J. J. Rodriguez, and R. Liang, “Transfer function analysis in epi-illumination Fourier ptychography,” Opt. Lett. 40(22), 5343–5346 (2015). [CrossRef]   [PubMed]  

21. Z. Bian, S. Dong, and G. Zheng, “Adaptive system correction for robust Fourier ptychographic imaging,” Opt. Express 21(26), 32400–32410 (2013). [CrossRef]  

22. X. Ou, G. Zheng, and C. Yang, “Embedded pupil function recovery for Fourier ptychographic microscopy,” Opt. Express 22(5), 4960–4972 (2014). [CrossRef]   [PubMed]  

23. L. Bian, J. Suo, G. Situ, G. Zheng, F. Chen, and Q. Dai, “Content adaptive illumination for Fourier ptychography,” Opt. Lett. 39(23), 6648–6651 (2014). [CrossRef]   [PubMed]  

24. L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for Fourier Ptychography with an LED array microscope,” Biomed. Opt. Express 5(7), 2376–2389 (2014). [CrossRef]   [PubMed]  

25. S. Dong, R. Shiradkar, P. Nanda, and G. Zheng, “Spectral multiplexing and coherent-state decomposition in Fourier ptychographic imaging,” Biomed. Opt. Express 5(6), 1757–1767 (2014). [CrossRef]   [PubMed]  

26. J. Sun, Y. Zhang, C. Zuo, Q. Chen, S. Feng, Y. Hu, and J. Zhang, “Coded multi-angular illumination for Fourier ptychography based on Hadamard codes,” Proc. SPIE 9524, 95242C (2015).

27. L. Tian, Z. Liu, L. H. Yeh, M. Chen, J. Zhong, and L. Waller, “Computational illumination for high-speed in vitro Fourier ptychographic microscopy,” Optica 2(10), 904–911 (2015). [CrossRef]  

28. X. Ou, R. Horstmeyer, G. Zheng, and C. Yang, “High numerical aperture Fourier ptychography: principle, implementation and characterization,” Opt. Express 23(3), 3472–3491 (2015). [CrossRef]   [PubMed]  

29. A. Shenfield and J. M. Rodenburg, “Evolutionary determination of experimental parameters for ptychographical imaging,” J. Appl. Phys. 109(12), 124510 (2011). [CrossRef]  

30. A. M. Maiden, M. J. Humphry, M. C. Sarahan, B. Kraus, and J. M. Rodenburg, “An annealing algorithm to correct positioning errors in ptychography,” Ultramicroscopy 120, 64–72 (2012). [CrossRef]   [PubMed]  

31. M. Beckers, T. Senkbeil, T. Gorniak, K. Giewekemeyer, T. Salditt, and A. Rosenhahn, “Drift correction in ptychographic diffractive imaging,” Ultramicroscopy 126, 44–47 (2013). [CrossRef]   [PubMed]  

32. F. Zhang, I. Peterson, J. Vila-Comamala, A. Diaz, F. Berenguer, R. Bean, B. Chen, A. Menzel, I. K. Robinson, and J. M. Rodenburg, “Translation position determination in ptychographic coherent diffraction imaging,” Opt. Express 21(11), 13592–13606 (2013). [CrossRef]   [PubMed]  

33. G. A. F. Seber and C. J. Wild, Nonlinear Regression (Wiley, 1989), pp. 376–385.
