Optica Publishing Group

Fast and robust misalignment correction of Fourier ptychographic microscopy for full field of view reconstruction

Open Access

Abstract

Fourier ptychographic microscopy (FPM) is a recently developed computational imaging technique that can provide gigapixel images with both high resolution (HR) and a wide field of view (FOV). However, there are two possible sources of position misalignment, both of which degrade the reconstructed image. The first is the position misalignment of the LED array, which can largely be eliminated while building the experimental system. The more important one is the segment-dependent position misalignment, which persists even after the central coordinate of every small segment is corrected. In this paper, we carefully analyze this segment-dependent misalignment and find that the global shift matters more than the rotational misalignment. Based on this observation, we propose a fast and robust method, termed misalignment-correction FPM (mcFPM), to correct both factors of the position misalignment. Although different regions in the FOV have different sensitivities to the position misalignment, the experimental results show that the mcFPM robustly eliminates the misalignment in each region. Compared with the state-of-the-art methods, the mcFPM is much faster.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Almost all conventional microscopes have a trade-off between resolution and FOV. To overcome this limitation, a new computational imaging technique called FPM has been proposed [1, 2]. In a typical FPM system, a programmable LED array replaces the conventional microscope's light source to provide angularly variant illumination. After a sequence of low resolution (LR) images is captured under different illumination angles, an iterative phase retrieval process [3–7] stitches those LR images together in Fourier space, and an HR, high space-bandwidth product (SBP) complex field of the sample can be recovered. Compared with conventional microscopy, the FPM can achieve HR, wide-FOV, and quantitative phase imaging [8]. Therefore, it has great potential in a variety of applications, such as biomedicine [9–11] and characterizing unknown optical aberrations of lenses [12, 13].

The FPM shares its roots with ptychography [14–16]. In conventional ptychography, the mechanical scanning in the imaging process makes position correction of the probe function essential. Similarly, the position misalignment in the FPM is of great importance [15]. Because the wave-vector of the illumination is determined by the positional difference between the LED position and the central coordinate of every small segment, this positional misalignment induces significant errors in the pupil function during the reconstruction process. There are two possible causes of position misalignment. The first is the position misalignment of the LED array, for example, the translational and rotational misalignment between the LED array and the optical axis. Poor adjustment of the FPM system distorts all image regions, even the center region. However, we experimentally demonstrate that both factors can be largely eliminated through mechanical adjustment of the FPM system. Besides this, there is another cause of positional misalignment when we reconstruct the whole-FOV images. From the reconstructed results, we find that the central regions are much better than the edge regions, where stripes are clearly observable. Based on this phenomenon, we carefully analyze this segment-dependent misalignment and find that the global shift matters more than the rotational misalignment. This global shift in the edge regions is probably caused by the aberration of the objective lens: because the magnification of the objective lens in the edge regions differs from that in the center, the positions in the camera plane cannot be mapped to the correct positions in the sample plane for all of the small segments. The wrongly mapped positions in the sample plane constitute the second source of position misalignment, because the wave-vector of the illumination is position dependent. Therefore, the most important misalignment source is this segment-dependent global shift.

In conventional ptychography, a simulated annealing (SA) algorithm was adopted to correct the position errors of the probe function [17]. Similarly, to correct the position misalignment in the FPM, the SA algorithm has been introduced into each sub-iteration of the reconstruction algorithm, in which an optimal shift of the pupil function in the Fourier domain is obtained. However, this method may lead to an algorithmic disorder of the LED array, resulting in a degraded reconstruction [18]. To avoid this problem, a position correction approach named pcFPM [19] has been proposed, based on the SA algorithm and a non-linear regression technique. In the pcFPM, a global position misalignment model of the LED array is introduced to ensure that the corrected LEDs remain positionally ordered. Although the pcFPM can effectively eliminate the LED misalignment, the additional non-linear regression increases the algorithmic complexity and the computational load. In practice, the FPM reconstruction process requires dividing the captured images into many small segments, and with the existing methods [18, 19] we find that the positions corrected according to the central segment of the FOV may not work for all of the segments [20]. In our experimental observation, the most important source of misalignment is the global shift of every small segment. Traditional SA-based algorithms might correct this global shift, but they are time-consuming and unstable.

In this paper, we propose a fast and robust method, termed misalignment-correction FPM (mcFPM), to correct the global shift of every small segment with a global position misalignment model. Compared with other SA-based algorithms, the mcFPM uses a different iteration strategy, which makes it very stable. Conventional position correction algorithms, such as SA and pcFPM, are not that stable because every LED's position is updated inside the FPM iterations. In contrast, we isolate the updating of the misalignment parameters from the FPM iteration routines. The experimental results show that the proposed mcFPM is more robust and faster than the other methods; compared with the state-of-the-art algorithm, i.e., the pcFPM [19], the time cost is decreased by a factor of 10.

This paper is organized as follows. In section 2 we present the principle of the proposed method: the FPM theory and the experimental calibration in sections 2.1 and 2.2, and the global shift model and the principle of the mcFPM in sections 2.3 and 2.4. In section 3, we perform experiments to verify the effectiveness of the proposed mcFPM, and finally we draw a conclusion.

2. Principle of the mcFPM

2.1. Forward imaging model of the FPM

Before introducing the impact of the LED array position misalignment on the FPM, we first introduce its imaging model. As Fig. 1 shows, in the imaging process the LEDs are switched on sequentially, and intensity images of the sample under different illumination angles are captured. Let $A_{object}(x,y)$ represent the complex amplitude of the sample, $A_{output}(x,y)$ the output complex amplitude of the sample, and $h(x,y)$ the coherent point spread function. When the LED located at the $m$th row and $n$th column is on, this process can be modeled as [21]:

$$A_{output}(x,y)=\left(A_{object}(x,y)\,e^{ik_x^{m,n}x+ik_y^{m,n}y}\right)\otimes h(x,y),\tag{1}$$

where $(k_x^{m,n},k_y^{m,n})$ represents the wave-vector of the illumination, and $\otimes$ is the 2D convolution operator. In the Fourier domain, Eq. (1) can be written as:

$$G_{output}(k_x,k_y)=G_{object}(k_x-k_x^{m,n},\,k_y-k_y^{m,n})\,H(k_x,k_y),\tag{2}$$

where $G_{object}(k_x,k_y)$ represents the object spectrum, $G_{output}(k_x,k_y)$ is the output spectrum of the microscope, and $H(k_x,k_y)$ is the coherent transfer function of the microscope. Thus, the image captured by the camera can be written as:

$$I_{captured}^{m,n}(x,y)=\left|\mathcal{F}^{-1}\{G_{output}(k_x,k_y)\}\right|^2,\tag{3}$$

where $\mathcal{F}^{-1}$ represents the inverse Fourier transform.
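As a concrete illustration, the forward model of Eqs. (1)–(3) can be sketched in a few lines of numpy. This is a minimal simulation, not the authors' code; the function name, the grid size, the sampling pitch, and the pupil cutoff are all assumed for illustration.

```python
import numpy as np

def capture_lr_image(obj, kx_mn, ky_mn, dx, k_na):
    """Simulate one low-resolution FPM capture following Eqs. (1)-(3)."""
    n = obj.shape[0]
    # Oblique illumination tilts the object field with a plane-wave phase (Eq. (1))
    coords = (np.arange(n) - n // 2) * dx
    xx, yy = np.meshgrid(coords, coords)
    illuminated = obj * np.exp(1j * (kx_mn * xx + ky_mn * yy))
    # Circular coherent transfer function H of the objective (Eq. (2))
    k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(n, d=dx))
    kxx, kyy = np.meshgrid(k, k)
    ctf = (kxx ** 2 + kyy ** 2 <= k_na ** 2).astype(float)
    spectrum = np.fft.fftshift(np.fft.fft2(illuminated)) * ctf
    # The camera records intensity only (Eq. (3))
    field = np.fft.ifft2(np.fft.ifftshift(spectrum))
    return np.abs(field) ** 2
```

Changing `(kx_mn, ky_mn)` shifts which part of the object spectrum passes through the fixed pupil, which is exactly the mechanism the FPM exploits.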


Fig. 1 The imaging process of the FPM.


With a sequence of LR images captured under different illumination angles, an HR image of the sample can be reconstructed. The recovery process of the FPM follows the strategy of the phase retrieval technique [3, 4]: the algorithm switches between the spatial and Fourier domains. In the spatial domain, the LR intensity measurements are used as the object constraints to ensure convergence of the solution. In the Fourier domain, the confined coherent transfer function of the objective lens is imposed as the support constraint. After several iterations, both the HR complex field of the object and the pupil function can be obtained.

2.2. Experimental demonstration and mechanical alignment of the FPM system

The system setup used in this work is shown in Fig. 2(a). We built an FPM system by replacing the light source of an Olympus IX73 inverted microscope with a programmable LED array (32 × 32 LEDs, 4 mm spacing) controlled by an Arduino. The LEDs have a central wavelength of 629 nm and a bandwidth of 20 nm. All samples were imaged with a 4×, 0.1 NA objective and a scientific complementary metal oxide semiconductor (sCMOS) camera (PCO.edge 4.2) with a pixel size of 6.5 µm. The distance between the sample and the LED array is 113.5 mm. In the experiment, the central 17 × 17 LEDs were switched on sequentially to capture 289 LR intensity images, of which only the LR images of the central 5 × 5 LEDs fall within the bright-field range of the objective.


Fig. 2 The experimental setup (a), the installation of the LED array on the microscope (b), and the adjustment method of the central LED along the optical axis (c).


Coarse adjustment of the LED array position is performed before the image acquisition; after all the LR images are captured, the proposed algorithm is applied in the image reconstruction process. Figure 2(b) shows a magnified view of the part within the yellow rectangle in Fig. 2(a). In order to avoid position errors caused by rotation of the LED array, we use a set of rods and buckles to fix it on the microscope stage. Then, a level instrument is used to make the sample plane and the LED array plane parallel to each other, which ensures that the x and y directions of the LED array are parallel to the x and y directions of the microscope stage. When installing the camera on the microscope, we also use a level instrument to make the top surface of the camera parallel to the optical platform, ensuring that the x and y directions of the camera plane are parallel to the x and y directions of the microscope stage. After these steps, the rotation effect is almost eliminated. Figure 2(c) shows the principle of the coarse adjustment that brings the central LED onto the optical axis, i.e., the z axis in Fig. 2(b). First, we turn on the central LED without a sample placed on the stage. Then we adjust the stage along the x and y axes to find four critical positions x1, x2, y1, y2, respectively; the red box in Fig. 2(a) shows the central LED, and the light spot is tangential to the image borders of the camera at these positions, whose values are read from the x and y direction scales on the microscope stage. Finally, we move the stage to the coordinate ((x1 + x2)/2, (y1 + y2)/2). This completes the mechanical alignment of the setup.

To observe the influence of the position misalignment in the FPM, we reconstructed segments of the sample in different regions of the FOV. The FPM reconstruction algorithm used in this study is adapted from the open source code published in [22]. Figure 3 shows the results reconstructed using the FPM algorithm without any position correction. Figure 3(a) is the captured original image of a young plant root sample, and Figs. 3(b)–3(e) are the reconstructed HR intensity images of different regions in Fig. 3(a). From the reconstructed results, we can observe that the central regions shown in Figs. 3(d) and 3(e) look much better than the edge regions shown in Figs. 3(b) and 3(c), where obvious stripes are observable. Based on this phenomenon, we speculate that different global shifts occur in different regions of the FOV. In our coarse adjustment, we carefully aligned the LED array and the camera so that the central LED was imaged to the center of the camera; therefore, the influence of the global shift near the central regions of the reconstructed image is small. For the edge regions, however, the aberration of the objective may also affect the reconstructions. This global shift cannot be corrected by the FPM algorithm even with a correct pupil function.


Fig. 3 The reconstructions influenced by the global shift of the LED array in the FPM, demonstrated by the reconstructions of several segments (b)–(e) located at different positions in the original captured image (a).


2.3. Global shift model of the position misalignment in the FPM

To explain the stripes in the edge regions, we propose a global shift model of the position misalignment. In the FPM, it is required to divide the captured images into many small segments. The incident wave-vector $(k_x^{m,n},k_y^{m,n})$ for each segment can be written as [1]:

$$k_x^{m,n}=\frac{2\pi}{\lambda}\,\frac{x_o-x_{m,n}}{\sqrt{(x_o-x_{m,n})^2+(y_o-y_{m,n})^2+s^2}},\qquad k_y^{m,n}=\frac{2\pi}{\lambda}\,\frac{y_o-y_{m,n}}{\sqrt{(x_o-x_{m,n})^2+(y_o-y_{m,n})^2+s^2}},\tag{4}$$
where $(x_o, y_o)$ is the central coordinate of each small segment in the sample plane, $\lambda$ is the central wavelength of the LED, $s$ is the distance between the sample and the LED array, $(x_{m,n}, y_{m,n})$ represents the position of the LED at the $m$th row and $n$th column, and $d$ is the distance between two adjacent LED elements. Normally, the central coordinate of each small segment in the sample plane is determined as $x_o = x_i/M$, where $M$ is the magnification of the objective lens and $x_i$ is the central coordinate of the segment in the camera plane. For an objective lens with a large FOV, the magnification in the edge regions might differ from that in the central area; this maps a wrong central coordinate of the small segment in the sample plane, especially in the edge regions, and thus leads to a wrong incident wave-vector. Therefore, we redefine the model as:
$$x_o-x_{m,n}=x_i/M-md-\Delta x,\qquad y_o-y_{m,n}=y_i/M-nd-\Delta y,\tag{5}$$
where $(\Delta x, \Delta y)$ is the global shift that we aim to correct in this paper.
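The two equations above map directly into code. The sketch below is an illustration (the function name and defaults are assumptions, not the authors' code); it computes the corrected incident wave-vector for a given segment center, LED index, and global shift, using the system values quoted in section 2.2 (d = 4 mm, s = 113.5 mm, λ = 629 nm, M = 4) as defaults.

```python
import math

def incident_wave_vector(xi, yi, m, n, M=4.0, d=4.0, s=113.5,
                         wavelength=629e-6, dx=0.0, dy=0.0):
    """Corrected illumination wave-vector of LED (m, n) for the segment
    centered at (xi, yi) in the camera plane.
    Lengths are in mm; (dx, dy) is the global shift (Delta x, Delta y)."""
    rx = xi / M - m * d - dx          # x_o - x_{m,n} with the shift model
    ry = yi / M - n * d - dy          # y_o - y_{m,n} with the shift model
    r = math.sqrt(rx ** 2 + ry ** 2 + s ** 2)
    k0 = 2 * math.pi / wavelength     # vacuum wavenumber
    return k0 * rx / r, k0 * ry / r
```

For the central segment lit by the central LED with zero shift, both components vanish; offsetting the LED by one row yields a negative x component, as expected from the geometry.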

As described above, there are two causes of the global shift. One is the global shift of the LED array, which is segment-independent and causes degradation over the whole FOV. The other is the global shift caused by the aberration of the imaging system: for an objective lens, especially an achromatic one, the magnification of the outer segments differs from that of the central segments. As the segment moves toward the edge, the global shift caused by the aberration of the objective lens accumulates and can no longer be neglected in the FPM calculation. Therefore, the global shift caused by the objective lens is segment-dependent, and it mostly degrades the edge regions of the FOV.

From Eqs. (4) and (5), we can see that both causes of the global shift affect the incident wave-vector, and Eq. (2) shows that an error in the incident wave-vector causes a dislocation of the object spectrum. In other words, during the FPM reconstruction process, the global shift induces a shift error in the pupil function in the Fourier domain. Figure 4 is a simulation example. Figures 4(c) and 4(d) show the reconstructed amplitude and phase profiles of the object with a global shift. Compared with the original amplitude and phase profiles in Figs. 4(a) and 4(b), clear stripes are observable in the reconstructed phase image. This is similar to the artifacts in the experimentally reconstructed images in the regions away from the optical axis shown in Figs. 3(b)–3(e).


Fig. 4 A simulated reconstruction with a global shift in the FPM. Amplitude (a) and phase (b) profiles of the object and the reconstructed amplitude (c) and phase (d) profiles with a global shift.


Several algorithms have been proposed in ptychography to address position misalignment, but none specifically targets this segment-dependent global shift. The traditional SA algorithm and the pcFPM might solve this problem, but they have two weaknesses. One is the time cost, since both apply an SA routine to each LED position. The other is robustness: SA usually results in disordered LED positions for two reasons. First, a wrong guess of the pupil function in the middle of the FPM algorithm leads to wrong LED positions. Second, for many objects, some LR images carry almost no information about the object, since some areas of the sample spectrum are almost zero. Therefore, conventional SA usually leads to disordered LED positions. The pcFPM applies a regression algorithm to map this disordered array back to a regular rectangle; since the array is already disordered, the reconstructed misalignment parameters are unstable. To make it more stable, the pcFPM initially uses only the central regions of the LED array, but this reduces its efficiency. To better address this global shift problem, we propose the following fast and robust algorithm.

2.4. Proposed misalignment correction method

The flowchart of the proposed mcFPM is shown in Fig. 5. The conventional FPM algorithm is part of the proposed mcFPM, as shown within the yellow dashed rectangle in Fig. 5. It is sketched as follows [1, 22]:


Fig. 5 The flow chart of the mcFPM.


Step 1. Initialize the Fourier spectrum of the reconstructed HR object $O_j(k_x,k_y)$ and the pupil function $P_j(k_x,k_y)$.

Step 2. Generate an LR image corresponding to the LED located at the $m$th row and $n$th column, with the incident wave-vector $(k_x^{m,n},k_y^{m,n})$, as:

$$\psi_j^{m,n}(k_x,k_y)=O_j(k_x-k_x^{m,n},\,k_y-k_y^{m,n})\,P_j(k_x,k_y),\tag{6}$$

where $\psi_j^{m,n}(k_x,k_y)$ represents the Fourier spectrum of the LR image obtained by illuminating the sample with the LED located at the $m$th row and $n$th column.

Step 3. Impose the intensity constraint with the captured images by:

$$\phi_j^{m,n}(x,y)=\sqrt{\frac{I_{captured}^{m,n}(x,y)}{\left|\psi_j^{m,n}(x,y)\right|^2}}\;\psi_j^{m,n}(x,y),\tag{7}$$

where $\phi_j^{m,n}(x,y)$ and $\psi_j^{m,n}(x,y)$ are the complex fields of the LR image with and without the intensity constraint, respectively, and

$$\psi_j^{m,n}(x,y)=\mathcal{F}^{-1}\left\{\psi_j^{m,n}(k_x,k_y)\right\}.\tag{8}$$

Now, the updated Fourier spectrum of the LR image is:

$$\Phi(k_x,k_y)=\mathcal{F}\left\{\phi_j^{m,n}(x,y)\right\}.\tag{9}$$
Step 4. Update the object and the pupil functions with:

$$O_{j+1}(k_x,k_y)=O_j(k_x,k_y)+\frac{\left|P_j(k_x+k_x^{m,n},k_y+k_y^{m,n})\right|\,P_j^{*}(k_x+k_x^{m,n},k_y+k_y^{m,n})}{\left|P_j(k_x,k_y)\right|_{max}\left(\left|P_j(k_x+k_x^{m,n},k_y+k_y^{m,n})\right|^{2}+\delta_1\right)}\,\Delta_1,$$
$$P_{j+1}(k_x,k_y)=P_j(k_x,k_y)+\frac{\left|O_j(k_x-k_x^{m,n},k_y-k_y^{m,n})\right|\,O_j^{*}(k_x-k_x^{m,n},k_y-k_y^{m,n})}{\left|O_j(k_x,k_y)\right|_{max}\left(\left|O_j(k_x-k_x^{m,n},k_y-k_y^{m,n})\right|^{2}+\delta_2\right)}\,\Delta_2,\tag{10}$$

where $\delta_1$ and $\delta_2$ are two regularization constants used to ensure numerical stability, set as $\delta_1=1$ and $\delta_2=1000$ in this work, and $\Delta_1$ and $\Delta_2$ are defined as:

$$\Delta_1=\Phi(k_x+k_x^{m,n},k_y+k_y^{m,n})-O_j(k_x,k_y)\,P_j(k_x+k_x^{m,n},k_y+k_y^{m,n}),$$
$$\Delta_2=\Phi(k_x,k_y)-O_j(k_x-k_x^{m,n},k_y-k_y^{m,n})\,P_j(k_x,k_y).\tag{11}$$

Step 5. Repeat steps 2 to 4 for all of the LEDs. The LED updating range is $S_1=\{(m,n)\mid m=-(R_1-1)/2,\ldots,(R_1-1)/2;\ n=-(R_1-1)/2,\ldots,(R_1-1)/2\}$, where $R_1$ is the number of LEDs on each side of the LED array.

Step 6. Repeat step 2 to step 5 until the algorithm converges.
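The six steps above can be condensed into a short numpy sketch. This is a simplified illustration rather than the released code of [22]: the spectrum shift of each LED is assumed to be pre-quantized to integer HR-grid pixels, and the energy scaling between the HR and LR grids is omitted for brevity.

```python
import numpy as np

def fpm_reconstruct(lr_images, shifts, hr_shape, ctf, n_iter=5,
                    delta1=1.0, delta2=1000.0):
    """Sketch of steps 1-6. `shifts` holds the (row, col) spectrum shift of
    each LED in HR-grid pixels; `ctf` is the binary pupil support."""
    # Step 1: initialize object spectrum and pupil function
    O = np.fft.fftshift(np.fft.fft2(np.ones(hr_shape, dtype=complex)))
    P = ctf.astype(complex)
    ph, pw = ctf.shape
    cy, cx = hr_shape[0] // 2, hr_shape[1] // 2
    for _ in range(n_iter):                                  # step 6
        for img, (sy, sx) in zip(lr_images, shifts):         # step 5
            # Step 2: cut the shifted sub-spectrum and apply the pupil
            y0, x0 = cy + sy - ph // 2, cx + sx - pw // 2
            sub = O[y0:y0 + ph, x0:x0 + pw].copy()
            psi_k = sub * P
            psi = np.fft.ifft2(np.fft.ifftshift(psi_k))
            # Step 3: impose the measured intensity, keep the phase
            phi = np.sqrt(img) * np.exp(1j * np.angle(psi))
            dPhi = np.fft.fftshift(np.fft.fft2(phi)) - psi_k
            # Step 4: ePIE-style joint update of object and pupil
            O[y0:y0 + ph, x0:x0 + pw] += (
                np.abs(P) * np.conj(P) * dPhi
                / (np.abs(P).max() * (np.abs(P) ** 2 + delta1)))
            P = P + (np.abs(sub) * np.conj(sub) * dPhi
                     / (np.abs(sub).max() * (np.abs(sub) ** 2 + delta2)))
    return np.fft.ifft2(np.fft.ifftshift(O)), P
```

Note that the spectrum shift, not the pupil, moves from LED to LED; this is the detail that the misalignment corrupts and that the mcFPM later corrects.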

In order to better illustrate the mcFPM, we first introduce the conventional SA method [17, 18], which assumes that each LED has an independent shift. After step 2 of the above FPM algorithm, an SA module is added to search for the deviation of the illumination wave-vector $(\Delta k_x^{m,n},\Delta k_y^{m,n})$ and the corresponding pupil function, with the cost function defined as:

$$E_1=\min_{\Delta k_x^{m,n},\Delta k_y^{m,n}}\sum_{x,y}\left|I_{captured}^{m,n}(x,y)-\left|\psi_j^{m,n}(x,y,\Delta k_x^{m,n},\Delta k_y^{m,n})\right|^2\right|^2,\tag{12}$$

where $\psi_j^{m,n}(x,y,\Delta k_x^{m,n},\Delta k_y^{m,n})$ is the calculated complex field of the LR image according to Eq. (8). During the SA process, the updated wave-vectors are:

$$k_x^{m,n}\leftarrow k_x^{m,n}+\Delta k_x^{m,n},\qquad k_y^{m,n}\leftarrow k_y^{m,n}+\Delta k_y^{m,n}.\tag{13}$$

Because the conventional SA method assumes that the shift of each LED is independent, it does not impose any constraint on the LEDs' positions. After several SA iterations, the LED position coordinates may become disordered, especially in the edge regions of the LED array [18]. Besides, in each sub-iteration, the SA process is applied to every LED position, which is heavily time-consuming.
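For reference, one SA move on a single LED can be sketched as below. This is a generic textbook accept/reject rule, not the exact routine of [17, 18]; `cost` is a hypothetical callable that evaluates the misfit E1 for that LED at a trial wave-vector.

```python
import numpy as np

def sa_step_single_led(cost, kx, ky, temp, rng):
    """One simulated-annealing move on a single LED's wave-vector.
    `cost(kx, ky)` returns the intensity misfit E1 for that LED."""
    dkx, dky = rng.normal(scale=temp, size=2)        # random trial deviation
    delta = cost(kx + dkx, ky + dky) - cost(kx, ky)
    # Accept improvements always, worse moves with Boltzmann probability
    if delta < 0 or rng.random() < np.exp(-delta / temp):
        return kx + dkx, ky + dky
    return kx, ky
```

Because every LED runs its own independent chain, nothing couples the accepted wave-vectors of neighboring LEDs, which is exactly the source of the disorder described above.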

In the proposed mcFPM, rather than correcting the wave-vector $(k_x^{m,n},k_y^{m,n})$ of each LED in the Fourier domain, we directly correct the global shift $(\Delta x,\Delta y)$ in Eq. (5).

The global shift in the mcFPM has been defined in section 2.3. In the mcFPM, the initial values of $\Delta x$ and $\Delta y$ are 0 mm for the central segment. Considering that the distance between two adjacent LEDs is $d$, the ranges of the global shift $\Delta x$ and $\Delta y$ are set to $[-d, d]$; if there is a large translational misalignment, the range can be enlarged. For experimental datasets, we find that the global shift of each segment is close to that of its neighboring segments. Therefore, for the outer segments, the initial guess is set to the reconstructed values of their neighboring segments, and the range is also set to $[-d, d]$. Although we only update the translational misalignment in the flowchart, it is easy to modify the mcFPM to correct the rotational misalignment as well. According to Eq. (5), the updated incident wave-vectors $(k_x^{m,n},k_y^{m,n})$ are:

$$k_x^{m,n}=\frac{2\pi}{\lambda}\,\frac{x_i/M-md-\Delta x}{\sqrt{(x_i/M-md-\Delta x)^2+(y_i/M-nd-\Delta y)^2+s^2}},\qquad k_y^{m,n}=\frac{2\pi}{\lambda}\,\frac{y_i/M-nd-\Delta y}{\sqrt{(x_i/M-md-\Delta x)^2+(y_i/M-nd-\Delta y)^2+s^2}}.\tag{14}$$
To improve the efficiency, only the incident wave-vectors $(k_x^{m,n},k_y^{m,n})$ within the bright field of the objective are calculated. The LED updating range is $S_2=\{(m,n)\mid m=-(R_2-1)/2,\ldots,(R_2-1)/2;\ n=-(R_2-1)/2,\ldots,(R_2-1)/2\}$ during the FPM reconstruction process. The cost function for searching $\Delta x$ and $\Delta y$ is defined as:

$$E_2=\min_{\Delta x,\Delta y}\sum_{m,n}\sum_{x,y}\left|I_{captured}^{m,n}(x,y)-I_{FPM}^{m,n}(x,y,\Delta x,\Delta y)\right|^2,\tag{15}$$

where $I_{FPM}^{m,n}(x,y,\Delta x,\Delta y)$ is the corresponding intensity image calculated with the conventional FPM algorithm (steps 1 to 6) under a global shift. The FPM reconstruction process inside the mcFPM iterates only 5 times ($J=5$). After this fast FPM reconstruction finishes ($j=J$), the cost function $E_2$ is evaluated. To minimize $E_2$, several search methods are available, such as the SA algorithm and the genetic algorithm; in order to compare with the existing methods, we use the SA algorithm to search for $\Delta x$ and $\Delta y$ in our mcFPM. This loop continues until the cost function reaches a minimum. After the global shift is corrected, all LR images are used to reconstruct the HR image of the sample with the conventional FPM algorithm. Finally, the degradation of the reconstructed HR amplitude and phase caused by the LED misalignment is eliminated. Compared with $E_1$, $E_2$ is a summation over all LEDs rather than over each LED independently; thus, the mcFPM calls the optimization algorithm far fewer times than the conventional methods.
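The outer loop of the mcFPM can be sketched as follows. For clarity, a plain grid search over (Δx, Δy) stands in for the SA routine used in the paper, and `run_fpm` / `forward` are hypothetical callables: the former runs the short (J = 5) FPM reconstruction with the shifted wave-vectors, and the latter re-generates the LR intensity of LED (m, n) from the recovered object and pupil.

```python
import numpy as np

def correct_global_shift(lr_images, run_fpm, forward, d, n_grid=9):
    """Search (dx, dy) minimizing the summed intensity misfit E2.
    lr_images: dict mapping (m, n) -> captured LR intensity array."""
    best, best_cost = (0.0, 0.0), np.inf
    for dx in np.linspace(-d, d, n_grid):      # range [-d, d], as in the text
        for dy in np.linspace(-d, d, n_grid):
            obj, pupil = run_fpm(dx, dy)       # short FPM run with this shift
            cost = sum(np.sum((img - forward(obj, pupil, m, n)) ** 2)
                       for (m, n), img in lr_images.items())
            if cost < best_cost:
                best_cost, best = cost, (dx, dy)
    return best
```

Because E2 sums over all LEDs, the optimizer is invoked once per candidate shift instead of once per LED, which is where the speed-up over per-LED SA comes from.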

Compared with other SA-based algorithms, the mcFPM uses a different iteration strategy and is thus very stable. As described above, SA and pcFPM are not that stable because every LED's position is updated inside the FPM iterations. In contrast, we isolate the updating of the misalignment parameters from the FPM iteration routines. Our principle is simple: if we use the correct misalignment parameters, the full FPM algorithm gives the best reconstructed image, and therefore the summed difference between the reconstructed LR images and the captured LR images is smallest. In this way, the algorithm is more stable and robust.

There are three versions of the mcFPM, which use different settings. In the first version, mcFPM I, the value of $R_2$ is determined by the number of rows or columns of the central LEDs that correspond to the bright field of the objective lens; in our setup, $R_2=5$. The criterion is whether the bright region covers more than half of the FOV of the captured image, because such LR images already carry enough information about the sample for correcting the global shift. In the second version, mcFPM II, all LEDs ($R_1=17$) are used to calculate the cost function. The last version, mcFPM III, additionally addresses the small horizontal shift of each individual LED caused by the manufacturing error of the LED array: after mcFPM I, it adds a single SA loop to each LED position. Details about the performance of the mcFPM are shown in the following section.

3. Experimental verification

3.1. Performance verification of the proposed mcFPM

To verify the feasibility of the proposed method, we compare the reconstructed results of different samples with and without LED position correction. Figure 6 shows the recovered images of two segments of a USAF resolution target. Figure 6(d) is the original captured image, and Figs. 6(a) and 6(e) show enlargements of two different parts (128 × 128 pixels) of it. Figures 6(b) and 6(f) show the HR intensity images reconstructed using the FPM algorithm without any position correction, while Figs. 6(c) and 6(g) show the results reconstructed using the mcFPM. Compared with Figs. 6(b) and 6(f), Figs. 6(c) and 6(g) show much better image quality.


Fig. 6 Experimental results of a USAF resolution target. Two segments (a) and (e), which are enlargements of the parts within the yellow and blue boxes in the original captured image (d), and their corresponding reconstructions using the conventional FPM without position correction (b) and (f), and the proposed mcFPM (c) and (g).


To test the robustness of the mcFPM in different segments of the FOV, we recovered a full-FOV HR image of a young plant root sample. Figure 7 shows the reconstructed full-FOV images, which have a pixel resolution of 8112 × 8112. Figure 7(a) shows the reconstruction using the conventional FPM algorithm without any position correction: the image quality in the center is good, but stripes become obvious in the edge regions. The gigapixel imaging capability of the FPM is therefore seriously degraded by the region-dependent global shift of the LED array. Figure 7(b) shows the reconstruction using the proposed mcFPM. Compared with Fig. 7(a), the stripes in the edge regions of the FOV are significantly eliminated, and the image quality is greatly improved as well. This clearly demonstrates the robustness of the proposed mcFPM.


Fig. 7 The reconstructed wide FOV and HR images using the conventional FPM without any position correction (a) and the mcFPM (b).


Furthermore, we compare the proposed mcFPM with the other existing techniques, including the conventional SA method and the pcFPM. The experimental LR images in the edge regions of the sample were chosen as the test data. The pixel dimensions of the LR images and the reconstructed HR images are 128 × 128 and 512 × 512, respectively. We performed the reconstruction with MATLAB R2015b on a Windows 10 Enterprise Edition operating system (Intel i7-6700 CPU @ 3.40 GHz, 8 GB DDR4 memory). The SA algorithm terminates when the average change of the cost function is less than $10^{-3}$, or when the number of iterations exceeds 100. Figure 8 shows the reconstructed results; the left and right images of each sub-figure are the reconstructed amplitude and phase profiles, respectively. The time costs of the different methods are shown in Table 1. Figure 8(a) shows stripes in both the amplitude and the phase. As shown in Fig. 8(b), the conventional SA method can reconstruct a good HR amplitude, but stripes still exist in the phase image; besides, it takes 246 s for one segment reconstruction, because it calls the SA routine for every LED. As Fig. 8(c) shows, the pcFPM removes the stripes in both the amplitude and the phase; however, the time cost is 728 s. Figures 8(d)–8(f) show the reconstructed results using the three versions of the mcFPM; the time costs are 40 s, 345 s, and 83 s, respectively. Compared with mcFPM I, mcFPM II is more stable since the algorithm uses more images, but the time cost increases at the same time. For the experimental data, the images reconstructed with mcFPM I are almost the same as those reconstructed by the other two versions, because most of the information is carried by the bright-field images. Therefore, in experiments, we suggest using mcFPM I to reconstruct the whole-FOV image.


Fig. 8 The reconstructed results using FPM without position correction (a), the conventional SA method (b), pcFPM (c), mcFPM I (d), mcFPM II (e), and mcFPM III (f) respectively.



Table 1. Time Cost Comparison of the Methods in Fig. 8

Compared with the previous methods, the mcFPM is the least time-consuming: the time cost of mcFPM I is reduced by a factor of 10 compared with the pcFPM. On the whole, the conventional SA method is time-consuming and breaks the physical constraint of the LED array. The pcFPM takes the physical constraint into account, but it takes more time because of the additional optimization process. In general, our method is faster and more stable than the other position misalignment correction algorithms.

3.2. Correction of the rotational misalignments in mcFPM

In our model, we assume that the rotational misalignment of the LED array is small thanks to the mechanical alignment; thus, in the general mcFPM, we only correct the translational misalignment and ignore the rotational misalignment. To verify the rationality of this assumption, we simply modify the mcFPM to correct the rotational misalignment, using the same misalignment model as in the pcFPM [19]. In the modified mcFPM, we search $(\Delta x, \Delta y, \theta)$ to minimize the cost function $E_2$, rather than correcting the global shift only. We apply it to the four segments in Fig. 3(a). Table 2 shows the reconstructed parameters: the rotation angles approach 0 degrees for all four segments, which demonstrates that our mechanical alignment largely eliminates the rotational misalignment. In contrast, we observe a clear segment-dependent global shift: as the segments approach the edge, the translational shifts become more serious. Note that the translational shift in Table 2 is caused by two factors, the shift of the LED panel and the aberration of the objective lens; the segment dependence is mainly caused by the aberration of the objective lens. Besides, as Fig. 9 shows, the images reconstructed using the mcFPM and the modified mcFPM are almost the same. Therefore, it is reasonable to ignore the rotational misalignment in the above analysis.


Table 2. Reconstructed Misalignment Parameters of the Different Segments in Fig. 3, Using Modified mcFPM


Fig. 9 The reconstructed results using (a) mcFPM and (b) modified mcFPM.


4. Conclusion

In the FPM, the position misalignment limits its capability to realize gigapixel imaging. By analyzing the experimental data, we found that different regions of the FOV have different global shifts, which induce observable stripes in the reconstructed HR images. To eliminate the global shift, we have proposed the mcFPM algorithm. Rather than correcting the shift errors of the pupil function in the Fourier domain, we introduced a global position misalignment model with two factors and corrected them directly. Experimental results have shown that the mcFPM performs robustly in different regions of the FOV, and that it is more efficient than the state-of-the-art techniques.

Funding

National Natural Science Foundation of China (61705241, 61327902); Natural Science Foundation of Shanghai (17ZR1433800); National Research Foundation of Korea (Young Scientist Exchange Program between Korea and China); Chinese Academy of Sciences (QYZDB-SSW-JSC002).

Acknowledgments

This work was carried out within the framework of the international cooperation program managed by the National Research Foundation of Korea (Young Scientist Exchange Program between Korea and China) and supported by the Chinese Academy of Sciences (QYZDB-SSW-JSC002).

References

1. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7, 739–745 (2013). [CrossRef]  

2. G. Zheng, “Breakthroughs in photonics 2013: Fourier ptychographic imaging,” IEEE Photonics J. 6, 1–7 (2014).

3. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21, 2758–2769 (1982). [CrossRef]   [PubMed]  

4. A. M. Maiden and J. M. Rodenburg, “An improved ptychographical phase retrieval algorithm for diffractive imaging,” Ultramicroscopy 109, 1256–1262 (2009). [CrossRef]   [PubMed]  

5. A. Zhou, N. Chen, and G. Situ, “Analysis of Fourier ptychographic microscopy with half reduced images,” in 2017 International Conference on Optical Instruments and Technology: Optoelectronic Imaging/Spectroscopy and Signal Processing Technology, vol. 10620 (SPIE, Beijing, China, 2018), p. 10620.

6. N. Chen, J. Yeom, K. Hong, G. Li, and B. Lee, “Fast converging algorithm for wavefront reconstruction based on a sequence of diffracted intensity images,” J. Opt. Soc. Korea 18, 217–224 (2014). [CrossRef]  

7. A. Zhou, N. Chen, H. Wang, and G. Situ, “Analysis of Fourier ptychographic microscopy with half of the captured images,” J. Opt. 20, 095701 (2018). [CrossRef]  

8. X. Ou, R. Horstmeyer, C. Yang, and G. Zheng, “Quantitative phase imaging via Fourier ptychographic microscopy,” Opt. Lett. 38, 4845–4848 (2013). [CrossRef]   [PubMed]  

9. A. Williams, J. Chung, X. Ou, G. Zheng, S. Rawal, Z. Ao, R. Datar, C. Yang, and R. Cote, “Fourier ptychographic microscopy for filtration-based circulating tumor cell enumeration and analysis,” J. Biomed. Opt. 19, 066007 (2014). [CrossRef]   [PubMed]  

10. R. Horstmeyer, X. Ou, G. Zheng, P. Willems, and C. Yang, “Digital pathology with Fourier ptychography,” Comput. Med. Imaging Graph. 42, 38–43 (2015). [CrossRef]  

11. J. Chung, X. Ou, R. P. Kulkarni, and C. Yang, “Counting white blood cells from a blood smear using Fourier ptychographic microscopy,” PLOS One 10, e0133489 (2015). [CrossRef]   [PubMed]  

12. Z. Bian, S. Dong, and G. Zheng, “Adaptive system correction for robust Fourier ptychographic imaging,” Opt. Express 21, 32400–32410 (2013). [CrossRef]  

13. X. Ou, G. Zheng, and C. Yang, “Embedded pupil function recovery for Fourier ptychographic microscopy,” Opt. Express 22, 4960–4972 (2014). [CrossRef]   [PubMed]  

14. J. M. Rodenburg and R. H. T. Bates, “The theory of super-resolution electron microscopy via Wigner-distribution deconvolution,” Philos. Trans. R. Soc. A 339, 521–553 (1992). [CrossRef]  

15. H. M. L. Faulkner and J. M. Rodenburg, “Movable aperture lensless transmission microscopy: a novel phase retrieval algorithm,” Phys. Rev. Lett. 93, 023903 (2004). [CrossRef]   [PubMed]  

16. J. M. Rodenburg, A. C. Hurst, A. G. Cullis, B. R. Dobson, F. Pfeiffer, O. Bunk, C. David, K. Jefimovs, and I. Johnson, “Hard-x-ray lensless imaging of extended objects,” Phys. Rev. Lett. 98, 034801 (2007). [CrossRef]   [PubMed]  

17. A. Maiden, M. Humphry, M. Sarahan, B. Kraus, and J. Rodenburg, “An annealing algorithm to correct positioning errors in ptychography,” Ultramicroscopy 120, 64–72 (2012). [CrossRef]   [PubMed]  

18. L.-H. Yeh, J. Dong, J. Zhong, L. Tian, M. Chen, G. Tang, M. Soltanolkotabi, and L. Waller, “Experimental robustness of Fourier ptychography phase retrieval algorithms,” Opt. Express 23, 33214–33240 (2015). [CrossRef]  

19. J. Sun, Q. Chen, Y. Zhang, and C. Zuo, “Efficient positional misalignment correction method for Fourier ptychographic microscopy,” Biomed. Opt. Express 7, 1336–1350 (2016). [CrossRef]   [PubMed]  

20. A. Zhou, W. Wang, N. Chen, and G. Situ, “Fast light source misalignment correction of Fourier ptychographic microscopy,” in Imaging and Applied Optics 2018, (Orlando, USA, 2018), p. JTh3A.5.

21. G. Zheng, “Fourier ptychographic imaging,” in Photonics Conference (IPC), 2015, (IEEE, 2015), pp. 20–21.

22. L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for Fourier ptychography with an LED array microscope,” Biomed. Opt. Express 5, 2376–2389 (2014). [CrossRef]   [PubMed]  

Figures (9)

Fig. 1 The imaging process of the FPM.
Fig. 2 The experimental setup (a) with the installation manual of the LED array on the microscope (b), and the adjustment method of the central LED (c) along the optical axis.
Fig. 3 The reconstructions influenced by the global shift of the LED array in the FPM, demonstrated by reconstructions of several segments (b)–(e) located at different positions in the original captured image (a).
Fig. 4 A simulated reconstruction with a global shift in the FPM. Amplitude (a) and phase (b) profiles of the object, and the reconstructed amplitude (c) and phase (d) profiles with a global shift.
Fig. 5 The flow chart of the mcFPM.
Fig. 6 Experimental results of a USAF resolution target. Two segments (a) and (e), which are enlargements of the parts within the yellow and blue boxes in the original captured image (d), and their corresponding reconstructions using the conventional FPM without position correction (b) and (f), and the proposed mcFPM (c) and (g).
Fig. 7 The reconstructed wide-FOV and HR images using the conventional FPM without any position correction (a) and the mcFPM (b).
Fig. 8 The reconstructed results using FPM without position correction (a), the conventional SA method (b), pcFPM (c), mcFPM I (d), mcFPM II (e), and mcFPM III (f), respectively.
Fig. 9 The reconstructed results using (a) mcFPM and (b) modified mcFPM.

Tables (2)

Table 1. Time Cost Comparison of the Methods in Fig. 8
Table 2. Reconstructed Misalignment Parameters of the Different Segments in Fig. 3, Using Modified mcFPM

Equations (15)

\[ A_{\mathrm{output}}(x,y) = \left[A_{\mathrm{object}}(x,y)\, e^{i k_x^{m,n} x + i k_y^{m,n} y}\right] * h(x,y), \tag{1} \]

\[ G_{\mathrm{output}}(k_x,k_y) = G_{\mathrm{object}}(k_x - k_x^{m,n},\, k_y - k_y^{m,n})\, H(k_x,k_y), \tag{2} \]

\[ I_{\mathrm{captured}}^{m,n}(x,y) = \left|\mathcal{F}^{-1}\{G_{\mathrm{output}}(k_x,k_y)\}\right|^2, \tag{3} \]

\[ k_x^{m,n} = \frac{2\pi}{\lambda}\,\frac{x_0 - x_{m,n}}{\sqrt{(x_0 - x_{m,n})^2 + (y_0 - y_{m,n})^2 + s^2}}, \qquad k_y^{m,n} = \frac{2\pi}{\lambda}\,\frac{y_0 - y_{m,n}}{\sqrt{(x_0 - x_{m,n})^2 + (y_0 - y_{m,n})^2 + s^2}}, \tag{4} \]

\[ x_0 - x_{m,n} = x_i/M - m d - \Delta x, \qquad y_0 - y_{m,n} = y_i/M - n d - \Delta y, \tag{5} \]

\[ \psi_j^{m,n}(k_x,k_y) = O_j(k_x - k_x^{m,n},\, k_y - k_y^{m,n})\, P_j(k_x,k_y), \tag{6} \]

\[ \phi_j^{m,n}(x,y) = \sqrt{\frac{I_{\mathrm{captured}}^{m,n}(x,y)}{\left|\psi_j^{m,n}(x,y)\right|^2}}\, \psi_j^{m,n}(x,y), \tag{7} \]

\[ \psi_j^{m,n}(x,y) = \mathcal{F}^{-1}\{\psi_j^{m,n}(k_x,k_y)\}. \tag{8} \]

\[ \Phi(k_x,k_y) = \mathcal{F}\{\phi_j^{m,n}(x,y)\}. \tag{9} \]

\[
\begin{cases}
O_{j+1}(k_x,k_y) = O_j(k_x,k_y) + \dfrac{\left|P_j(k_x + k_x^{m,n},\, k_y + k_y^{m,n})\right| P_j^{*}(k_x + k_x^{m,n},\, k_y + k_y^{m,n})}{\left|P_j(k_x,k_y)\right|_{\max}\left(\left|P_j(k_x + k_x^{m,n},\, k_y + k_y^{m,n})\right|^2 + \delta_1\right)}\, \Delta_1, \\[2ex]
P_{j+1}(k_x,k_y) = P_j(k_x,k_y) + \dfrac{\left|O_j(k_x - k_x^{m,n},\, k_y - k_y^{m,n})\right| O_j^{*}(k_x - k_x^{m,n},\, k_y - k_y^{m,n})}{\left|O_j(k_x,k_y)\right|_{\max}\left(\left|O_j(k_x - k_x^{m,n},\, k_y - k_y^{m,n})\right|^2 + \delta_2\right)}\, \Delta_2,
\end{cases} \tag{10}
\]

\[
\begin{cases}
\Delta_1 = \Phi(k_x + k_x^{m,n},\, k_y + k_y^{m,n}) - O_j(k_x,k_y)\, P_j(k_x + k_x^{m,n},\, k_y + k_y^{m,n}), \\
\Delta_2 = \Phi(k_x,k_y) - O_j(k_x - k_x^{m,n},\, k_y - k_y^{m,n})\, P_j(k_x,k_y).
\end{cases} \tag{11}
\]

\[ E_1 = \min_{\Delta k_x^{m,n},\, \Delta k_y^{m,n}} \sum_{x,y} \left| I_{\mathrm{captured}}^{m,n}(x,y) - \left|\psi_j^{m,n}(x,y,\Delta k_x^{m,n},\Delta k_y^{m,n})\right|^2 \right|^2, \tag{12} \]

\[
\begin{cases}
k_x^{m,n} \leftarrow k_x^{m,n} + \Delta k_x^{m,n}, \\
k_y^{m,n} \leftarrow k_y^{m,n} + \Delta k_y^{m,n}.
\end{cases} \tag{13}
\]

\[
\begin{cases}
k_x^{m,n} = \dfrac{2\pi}{\lambda}\,\dfrac{x_i/M - m d - \Delta x}{\sqrt{(x_i/M - m d - \Delta x)^2 + (y_i/M - n d - \Delta y)^2 + s^2}}, \\[2ex]
k_y^{m,n} = \dfrac{2\pi}{\lambda}\,\dfrac{y_i/M - n d - \Delta y}{\sqrt{(x_i/M - m d - \Delta x)^2 + (y_i/M - n d - \Delta y)^2 + s^2}}.
\end{cases} \tag{14}
\]

\[ E_2 = \min_{\Delta x,\, \Delta y} \sum_{m,n} \sum_{x,y} \left| I_{\mathrm{captured}}^{m,n}(x,y) - I_{\mathrm{FPM}}^{m,n}(x,y,\Delta x,\Delta y) \right|^2. \tag{15} \]
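The shift-corrected illumination wavevector of Eq. (14) can be computed directly from the LED-array geometry. The following is a minimal sketch; the function name and argument layout are our own, and the segment centre (xi/M, yi/M), LED pitch d, LED-to-sample distance s, and global panel shift (dx, dy) follow the notation of Eqs. (5) and (14).

```python
import numpy as np

def illumination_wavevector(xi, yi, M, m, n, d, s, wavelength, dx=0.0, dy=0.0):
    """Illumination wavevector (kx, ky) for LED (m, n), following Eq. (14):
    xi/M, yi/M is the segment centre in the object plane, d the LED grid
    pitch, s the LED-to-sample distance, and (dx, dy) the global panel shift."""
    u = xi / M - m * d - dx                  # x-offset from LED (m, n) to segment centre
    v = yi / M - n * d - dy                  # y-offset
    r = np.sqrt(u ** 2 + v ** 2 + s ** 2)    # LED-to-segment distance
    k = 2.0 * np.pi / wavelength             # free-space wavenumber
    return k * u / r, k * v / r
```

A central LED aligned with the segment centre yields a normally incident plane wave (kx = ky = 0), and LEDs placed symmetrically about the centre give wavevectors of opposite sign, as the geometry requires.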