
Sampling criteria for Fourier ptychographic microscopy in object space and frequency space


Abstract

Fourier ptychographic microscopy (FPM) is a new computational super-resolution approach that can recover not only the correct object function, but also the pupil aberration, the LED misalignment, and more. Although many state-mixed FPM techniques have been proposed in the past few years to achieve higher data acquisition efficiency and recovery accuracy, it is little appreciated that their reconstruction performance depends heavily on the data redundancy in both the object and frequency domains. Generally, at least a 35% aperture overlapping percentage in the Fourier domain is needed for a successful reconstruction using the ordinary FPM method. However, the data redundancy requirements of those state-mixed FPM schemes have remained largely unexplored until now. In this paper, we explore the spatial and spectrum data redundancy requirements of the FPM recovery process and introduce sampling criteria for the conventional and state-mixed FPM techniques in both object and frequency space. Moreover, an upsampled FPM method is proposed to solve the pixel aliasing problem, and an alternative illumination-angle subsampled FPM scheme is introduced to avoid the complexity of decoherence and achieve the expected recovery quality with a reduced data quantity. All the proposed methods and sampling criteria are validated with both simulations and experiments, and our results show that state-mixed techniques cannot provide a significant performance advantage since they are much more sensitive to data redundancy. This paper provides both guidelines for designing the most suitable FPM platform and insights into the capabilities and limitations of the FPM approach.

© 2016 Optical Society of America

1. Introduction

Fourier ptychographic microscopy (FPM) [1, 2] is a recently developed computational imaging approach which surpasses the resolution barrier of a low numerical aperture (NA) imaging system. In the ordinary FPM technique, a set of low-resolution (LR) intensity images corresponding to different illumination angles, with the resolution determined by the NA of the objective lens, is acquired. Similar to phase retrieval techniques [3–9], these intensity images are recorded to constrain the solution of FPM. Then, by iteratively combining these LR intensity images in the Fourier domain, FPM recovers a complex high-resolution (HR) image of the sample, sharing its roots with synthetic aperture concepts [10–16]. The final reconstruction resolution is determined by the sum of the objective lens and illumination NAs [17].

Beyond the remarkable expansion of the space-bandwidth product (SBP), a latent advantage of FPM is the huge data redundancy of its overlapping frequency apertures. In conventional ptychography, this data redundancy has been shown to be useful for decoherence when partial coherence and imperfect detection need to be accounted for. As introduced by Thibault and Menzel [18], state mixtures often have to be used to accurately describe the statistical nature of the system under investigation, and the occurrence of mixed states can be categorized into three distinct groups: the partial coherence of the illumination wavefront, the varying states of the dynamic object, and the imperfect detection of the imaging system. With the help of the huge data redundancy in ptychography, these three kinds of state mixtures can be decorrelated successfully [18–21]. Lately, similar to conventional ptychography, several studies have demonstrated that the data redundancy in FPM offers the potential not only of obtaining the correct object function, but also of recovering the pupil aberration, correcting the LED misalignment, solving the pixel aliasing problem, and more [22–25]. As a result, some of the most stringent experimental conditions in FPM can be relaxed, and the susceptibility to imaging noise can be reduced. Furthermore, although decoherence is often an unwanted complication, an artificial superposition of states can be useful to improve the data acquisition efficiency of the original FPM technique [26–29]. For instance, imaging with a partially coherent wavefront in FPM by lighting up several LED elements simultaneously can reduce the measuring time of the imaging process without sacrificing recovery quality, since the stationary mixed states of partially coherent illumination can be decorrelated accurately by taking advantage of the data redundancy.

Although significant progress has been made in FPM toward higher data acquisition efficiency and recovery accuracy in the past few years, it is little appreciated that the reconstruction performance of these techniques depends heavily on the data redundancy in both object and frequency spaces. As reported in [24], at least an approximately 35% aperture overlapping percentage in the Fourier domain is needed for an accurate reconstruction of both intensity and phase information. However, when the pixel aliasing problem occurs, the minimum data redundancy requirement has not been investigated yet. Besides, the data redundancy requirement of those multiplexed FPM schemes, which illuminate the object with linear combinations of light sources [26, 27], has also remained unexplored until now. Moreover, for all of these multiplexing techniques, there is an alternative imaging strategy that can achieve the expected data acquisition efficiency directly, without the need for any state mixture. For example, if some particular LED elements are selected from the whole array to illuminate the object successively, the measuring time of the data acquisition is likewise reduced. With respect to this simple alternative strategy, it has not been confirmed whether these multiplexing techniques, which involve a complex decoherence process, can provide a significant performance improvement.

Aiming to analyze the performance advantage of those state-mixed FPM techniques, in this paper we first explore the imaging resolution requirement of the FPM platform and introduce a spatial sampling criterion for the ordinary FPM technique. Then, we propose an upsampled FPM method that obtains the expected HR image with the largest FOV allowed by the objective lens by properly solving the pixel aliasing problem. Afterwards, we discuss the spectrum data redundancy requirement of the FPM recovery process and propose spectrum sampling criteria for the conventional and multiplexed FPM techniques. Furthermore, based on the spectrum sampling criteria, we introduce an alternative illumination-angle subsampled FPM scheme, which turns on only one LED element at a time, to achieve the expected recovery quality from the same (or even smaller) data quantity as used in the multiplexed FPM schemes. We demonstrate the effectiveness of the proposed methods and sampling criteria with both simulations and experiments.

2. Sampling in the spatial domain

2.1. Pixel aliasing problem

In a conventional light microscope, the imaging resolution depends on the numerical aperture of the objective lens (NA), the illumination wavelength (λ), the magnification of the microscope (Mag), and the pixel size of the digital camera (Δxcam). Considering a light microscope with monochromatic illumination, the spatial cutoff frequency defined by the objective lens can be expressed as fobj = NA/λ, while the spatial cutoff frequency defined by the digital camera can be expressed as fcam = Mag/(2Δxcam). Normally, the imaging resolution of a light microscope is limited by the objective lens because fobj is generally smaller than fcam. For example, if we employ an objective lens (Mag = 4, NA = 0.1), a charge-coupled device (CCD) camera (Δxcam = 6.5 μm), and an LED array with monochromatic illumination (λ = 0.435 μm) to build a microscopy platform, these spatial cutoff frequencies can be determined numerically (fobj = 0.230 μm−1 < 0.308 μm−1 = fcam). But if we utilize an additional 0.5× digital camera adapter to achieve a larger FOV, the system magnification is reduced (Mag = 2) and the imaging resolution becomes limited by the camera, since fobj is larger than fcam in this situation (fobj = 0.230 μm−1 > 0.154 μm−1 = fcam). Here, we define the spatial-sampling-ratio of a digital microscopy imaging system as

$$R_{cam} = \frac{f_{cam}}{f_{obj}} = \frac{\lambda\,\mathrm{Mag}}{2\,\mathrm{NA}\,\Delta x_{cam}}.$$

Notice that Mag/Δxcam > 2(NA/λ) is the well-known Nyquist criterion. So, once Rcam < 1, the imaging resolution is limited by the camera, which can result in a pixel aliasing problem in the original FPM technique (see details in Appendix A). Figure 1 presents the pixel aliasing problem in both object and frequency spaces using simulations. The simulation parameters were chosen to realistically model a light microscope as discussed above. A schematic representation of the FPM platform we used is displayed in Appendix B. A programmable 41 × 41 LED array (2.5 mm spacing) is placed 87.5 mm beneath the sample, and it is used for all the simulations reported here. Figures 1(a1) and 1(b1) present the captured LR images extracted from two data sets with different magnifications of the FPM setup. Compared with Fig. 1(a1), when the 0.5× camera adapter is employed, the same region of the specimen is imaged onto fewer pixels of the camera (specifically, 2 × 2 pixels are combined into 1 pixel), as shown in Fig. 1(b1). Although the raw LR images look similar, their frequency spectra [Figs. 1(a2) and 1(b2)] are quite different. Compared with Fig. 1(a2), there is not sufficient bandwidth to impose the circular pupil function in Fig. 1(b2) due to the pixel aliasing problem. Utilizing the conventional FPM technique, the HR images are recovered as presented in Figs. 1(c1) and 1(d1), while their corresponding spectra are displayed in Figs. 1(c2) and 1(d2), respectively. As shown in Fig. 1(d1), the recovered result is corrupted significantly by the pixel aliasing problem. Therefore, in order to achieve a high recovery quality using the conventional FPM method, the four parameters of the microscopy platform (λ, NA, Mag, Δxcam) need to be chosen carefully to guarantee the Nyquist criterion (Rcam > 1).
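To make the criterion concrete, the short Python sketch below (not part of the original work; it simply re-evaluates the definitions above with the example parameters) computes fobj, fcam, and Rcam for the native 4× configuration and for the 0.5×-adapter configuration, and flags whether pixel aliasing is expected.

```python
def cutoff_frequencies(wavelength_um, na, mag, pixel_um):
    f_obj = na / wavelength_um          # cutoff of the objective lens (1/um)
    f_cam = mag / (2.0 * pixel_um)      # cutoff set by the camera sampling (1/um)
    return f_obj, f_cam, f_cam / f_obj  # last value is R_cam

for mag in (4.0, 2.0):                  # native 4x objective vs. with a 0.5x adapter
    f_obj, f_cam, r_cam = cutoff_frequencies(0.435, 0.1, mag, 6.5)
    print(f"Mag={mag:g}: f_obj={f_obj:.3f}/um, f_cam={f_cam:.3f}/um, "
          f"R_cam={r_cam:.2f}, pixel aliasing: {r_cam < 1}")
```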

Fig. 1 Simulations of the pixel aliasing problem in the ordinary FPM method.

2.2. Upsampled FPM scheme

In order to address the pixel aliasing problem and further expand the SBP of FPM, we introduce an upsampled FPM scheme in this section. It is worth noting that a similar upsampling procedure has been discussed for the x-ray ptychography approach [20]. While the algorithm proposed here shares its principle with the conventional FPM algorithm, the major difference is that it is run assuming a synthetic detector with a pixel size small enough to satisfy the Nyquist theorem. As usual in the FPM algorithm, we start with an estimate of the object and pupil function using an LR image. Different from conventional FPM, we assume that the actual pixel size of the sensor is reduced by a factor of N and that the LR images are captured with pixel binning enabled (N × N pixels are combined into one pixel). N is chosen as the smallest positive integer that ensures N > 1/Rcam. Next, after obtaining the upsampled LR image |o_{m,n}|² from the upsampled object's spectrum for one incident angle, a pixel binning step is performed, in which N × N pixels are combined into one pixel to realistically simulate the pixel aliasing in the experimental FPM platform. After obtaining the subsampled estimation image |o_{m,n}|_s², the updating matrix C_{m,n} can be obtained as C_{m,n} = U_bilinear{I_{m,n} / |o_{m,n}|_s²}, where U_bilinear{·} denotes bilinear interpolation and I_{m,n} denotes the captured LR image. Next, the updated LR image o^u_{m,n} = C_{m,n} o_{m,n} is used to update the upsampled object's spectrum and pupil function. Then, the spectra corresponding to the other incident angles are updated, and the entire optimization process is iterated several times until the result converges.
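The following Python sketch illustrates one possible implementation of this spatial-domain update step; it is a minimal illustration under stated assumptions rather than the authors' code, the helper names (pixel_bin, bilinear_upsample, update_lr_field) are hypothetical, and the updating matrix is taken as the bilinearly interpolated ratio of the measured intensity to the binned estimate.

```python
import numpy as np
from scipy.ndimage import zoom

def pixel_bin(img, n):
    """Combine n x n pixels into one, modelling the coarse real detector."""
    h, w = img.shape
    return img.reshape(h // n, n, w // n, n).sum(axis=(1, 3))

def bilinear_upsample(img, n):
    """Bilinear interpolation back onto the fine (synthetic-detector) grid."""
    return zoom(img, n, order=1)

def update_lr_field(o_mn, I_mn, n):
    """o_mn: complex LR field on the fine grid (from the upsampled spectrum);
    I_mn: captured LR intensity image on the coarse grid; n: upsampling factor."""
    est_binned = pixel_bin(np.abs(o_mn) ** 2, n)      # |o_{m,n}|_s^2
    ratio = I_mn / np.maximum(est_binned, 1e-12)      # measured over estimated intensity
    C_mn = bilinear_upsample(ratio, n)                # updating matrix on the fine grid
    return C_mn * o_mn                                # updated LR field o^u_{m,n}
```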

In Fig. 2, we validate the upsampled FPM scheme using simulations with the same simulation parameters as in Section 2.1. The ideal HR intensity and phase images and the results recovered with the original FPM are presented in Figs. 2(a1), 2(a2), 2(b1), and 2(b2), respectively. With the help of upsampled FPM, the recovered intensity and phase profiles shown in Figs. 2(d1) and 2(d2) are free of those distortion patterns. We also compare its performance with the subsampled FPM method introduced in [24]. Although subsampled FPM can remove the distortion patterns in the reconstructed results [Figs. 2(c1) and 2(c2)], upsampled FPM provides a better recovery quality in the details. In addition, the root-mean-square error (RMSE) of the recovered intensity and phase images for different Rcam, using the three FPM schemes, is presented in Figs. 2(e1) and 2(e2). Obviously, compared with conventional FPM and subsampled FPM, upsampled FPM achieves the best recovery quality even when Rcam < 1. This is because the subsampling model in subsampled FPM cannot properly simulate the pixel-combining procedure underlying the pixel aliasing problem. However, if we use a color single-CCD (or CMOS) camera with a Bayer filter to implement color FPM, the subsampling model may be exactly suitable, because with a Bayer filter only one quarter of the pixels is used for detecting the color information of one monochromatic channel.

Fig. 2 Comparison of the ordinary FPM, subsampled FPM, upsampled FPM schemes using simulations.

We also illustrate the performance of the upsampled FPM scheme experimentally. Here, a USAF resolution target is used as the sample, and we use the same imaging-system parameters as in Section 2.1. However, instead of a 41 × 41 LED array, a programmable 21 × 21 LED array (2.5 mm spacing, 87.5 mm beneath the sample) is used for all the real experiments. Figure 3(a1) shows the raw intensity image of the sample and Fig. 3(a2) presents its spectrum. The reconstructed images using the three FPM methods are shown in Figs. 3(b1), 3(c1), and 3(d1), while their spectra are shown in Figs. 3(b2), 3(c2), and 3(d2), respectively. It is obvious that the results recovered with the original FPM are corrupted by the pixel aliasing problem. In contrast, the upsampled FPM is able to reconstruct an artifact-free image with more distinct details.

Fig. 3 Experimental results of the ordinary FPM, subsampled FPM, upsampled FPM schemes.

Thus, to achieve the largest FOV allowed by the objective lens in FPM, the magnification of the microscope (Mag) and the row number and pixel size of the imaging sensor (Nrow and Δxcam) should be chosen carefully to ensure NrowΔxcam/Mag > FN/Magobj, where FN is the field number and Magobj the magnification of the objective lens. One simple way is to choose a digital camera with a larger imaging sensor to increase NrowΔxcam. If only one camera is available, another way is to utilize an additional 0.5× (or even smaller) digital camera adapter to reduce Mag. In this case, however, the pixel aliasing problem may occur when Δxcam/Mag > λ/(2NA). Therefore, we suggest upsampled FPM as a useful method to recover the expected high-quality complex images of the specimen and expand the SBP in FPM. But does this expansion of the SBP come without any further critical conditions? In other words, is it possible to accurately recover an HR image with a large FOV from only a few pixels using upsampled FPM, when a serious pixel aliasing problem occurs and Rcam ≪ 1? To answer these questions, we investigate the spectrum sampling criteria for the ordinary and other developed FPM techniques in the next section.
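As a rough illustration of these two conditions, the sketch below checks the full-FOV inequality and the pixel-aliasing inequality; the sensor row count (2048) and field number (22 mm) are assumed example values, not parameters reported in this paper.

```python
def full_fov_and_aliasing(n_row, dx_cam_um, mag, fn_mm, mag_obj, wavelength_um, na):
    sensor_fov_mm = n_row * dx_cam_um * 1e-3 / mag   # object-space extent seen by the sensor
    objective_fov_mm = fn_mm / mag_obj               # object-space FOV allowed by the objective
    covers_full_fov = sensor_fov_mm > objective_fov_mm
    pixel_aliasing = (dx_cam_um / mag) > wavelength_um / (2.0 * na)
    return covers_full_fov, pixel_aliasing

# 0.5x-adapter configuration; n_row = 2048 and FN = 22 mm are assumed example values
print(full_fov_and_aliasing(2048, 6.5, 2.0, 22.0, 4.0, 0.435, 0.1))   # (True, True)
```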

3. Sampling in the frequency space

3.1. Spectrum sampling criteria for ordinary and upsampled FPM

Besides the spatial-sampling-ratio of the imaging system, the recovery quality can also be influenced by the spectrum-sampling-ratio of the angular-varying illumination. In the FPM platform, the light from each LED can be accurately treated as a plane wave for each small image region of the specimen. The spatial-frequency shift corresponding to the minimum tilt between adjacent illumination angles is fLED = DLED/(λ√(DLED² + h²)), where DLED denotes the distance between adjacent LED elements and h is the distance (along the z direction) between the LED array and the specimen. Multiplication by the plane-wave illumination in the spatial domain is equivalent to shifting the sample spectrum in the Fourier domain. Thus, images with different plane-wave illuminations correspond to different spectrum regions in Fourier space. Intuitively, a certain amount of spectrum overlap between successive acquisitions is needed to connect all acquired images in Fourier space. Therefore, we define a spectrum-sampling-ratio RLED to denote the spectrum scanning density in Fourier space. Thus, the spectrum-sampling-ratio can be expressed as

$$R_{LED} = \frac{f_{obj}}{f_{LED}} = \frac{\mathrm{NA}\sqrt{(D_{LED})^2 + h^2}}{D_{LED}}.$$

In FPM, RLED is always set to be larger than 1/2, because there would be no overlap between the frequency apertures if fLED > 2fobj, in which case FPM reduces to a conventional phase retrieval procedure where each image can be processed independently. In addition, we define the aperture-overlapping-rate Roverlap to denote the spectrum overlapping percentage between two neighbouring apertures in the center of the entire spectrum (see details in Appendix A). This parameter has been widely used to evaluate the data redundancy requirement in conventional and Fourier ptychography [1, 7, 9, 24]. After a simple geometric derivation, it is found that Roverlap is directly determined by RLED, as

$$R_{overlap} = \begin{cases} \dfrac{1}{\pi}\left[2\arccos\left(\dfrac{1}{2R_{LED}}\right) - \dfrac{1}{R_{LED}}\sqrt{1-\left(\dfrac{1}{2R_{LED}}\right)^2}\,\right], & R_{LED} > \dfrac{1}{2}, \\[2ex] 0, & \text{otherwise.} \end{cases}$$
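For reference, the following sketch (an illustrative re-computation, not taken from the paper's code) evaluates RLED and Roverlap from the LED-array geometry; with the NA = 0.1 objective and the 2.5 mm pitch, 87.5 mm stand-off used here, it returns an overlap of roughly 82%.

```python
import numpy as np

def sampling_ratios(na, d_led_mm, h_mm):
    """Return (R_LED, R_overlap) for LED pitch D_LED and LED-to-sample distance h."""
    r_led = na * np.hypot(d_led_mm, h_mm) / d_led_mm
    if r_led <= 0.5:
        return r_led, 0.0                              # apertures no longer overlap
    x = 1.0 / (2.0 * r_led)
    r_overlap = (2.0 * np.arccos(x) - np.sqrt(1.0 - x ** 2) / r_led) / np.pi
    return r_led, r_overlap

# 2.5 mm pitch, 87.5 mm below the sample, NA = 0.1 (the platform described above)
print(sampling_ratios(0.1, 2.5, 87.5))                 # R_overlap is roughly 0.82
```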

In Fig. 4, we illustrate the recovery quality with different spatial-sampling-ratio Rcam and aperture-overlapping-rate Roverlap using simulations. Here, we employ upsampled FPM to reconstruct the HR images, since upsampled FPM achieves the best recovery accuracy whether or not Rcam > 1.

Fig. 4 Comparison of the recovery qualities with different spatial-sampling-ratio Rcam and aperture-overlapping-rate Roverlap using simulations.

Figures 4(a1)–4(a4) present the intensity images recovered using upsampled FPM as Roverlap increases. On the other hand, the results in Figs. 4(b1)–4(b4) are reconstructed using ordinary FPM with different spatial sampling densities. Furthermore, the RMSE between these recovered results and the input ideal images is calculated. As shown in Figs. 4(c1) and 4(c2), the RMSE of the recovered intensity and phase images increases markedly once Roverlap decreases below 31.81% when Rcam > 1. The requirement of a minimum spectrum overlapping percentage suggests that we need at least twice the data quantity to recover the missing phase information from the captured intensity images. A similar conclusion has also been reported in conventional ptychography [19]. In fact, the total frequency area of all the illumination apertures needs to be approximately two times larger than the frequency area of the final synthetic aperture to achieve a high-quality reconstruction. If Roverlap < 31.81%, the spectrum redundancy is insufficient for accurately recovering the intensity and phase information simultaneously. Thus, for conventional FPM without the pixel aliasing problem, the distance between adjacent LED elements (DLED) and the distance between the LED array and the specimen (h) need to be selected carefully to guarantee Roverlap ≥ 31.81%.
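A practical corollary is that, for a given NA and LED-to-sample distance, this criterion fixes the maximum admissible LED pitch. The sketch below (an illustrative numerical search under the stated assumptions, not part of the original work) scans DLED for the NA = 0.1, h = 87.5 mm geometry and reports the largest pitch that still satisfies Roverlap ≥ 31.81%.

```python
import numpy as np

def r_overlap(na, d_mm, h_mm):
    r_led = na * np.hypot(d_mm, h_mm) / d_mm
    if r_led <= 0.5:
        return 0.0
    x = 1.0 / (2.0 * r_led)
    return (2.0 * np.arccos(x) - np.sqrt(1.0 - x ** 2) / r_led) / np.pi

# largest LED pitch that still meets the 31.81% criterion for NA = 0.1, h = 87.5 mm
pitches_mm = np.arange(0.5, 30.0, 0.01)
feasible = [d for d in pitches_mm if r_overlap(0.1, d, 87.5) >= 0.3181]
print(max(feasible))   # close to 10 mm for this geometry
```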

Moreover, it is noticeable that for those FPM platforms with the pixel aliasing problem (Rcam < 1), much more spectrum overlap is required to achieve the expected recovery quality. This observation answers the questions raised at the end of Section 2.2. Briefly speaking, a huge spectrum sampling density is required to compensate for the insufficient spatial sampling rate. In other words, to obtain HR complex images with the same FOV from fewer pixels, a larger number of recorded images is needed, since the scanning step between two neighbouring frequency apertures needs to be much smaller. For example, compared with Fig. 4(b2), Fig. 4(a4) has a four times larger SBP, since Fig. 4(a4) is recovered from Fig. 1(b1) (32 × 32 pixels) and Fig. 4(b2) is recovered from Fig. 1(a1) (64 × 64 pixels), while they have the same reconstruction resolution and recovered image size. However, whereas 121 LR images are used to recover Fig. 4(b2), Fig. 4(a4) requires 441 images, nearly four times as many, to achieve the same reconstruction quality. Therefore, it is impossible to expand the SBP in FPM without limit without sacrificing data acquisition efficiency or acquiring a soaring data quantity.

We also demonstrate the importance of the data redundancy experimentally. We use the same experimental parameters as in Fig. 3 except for the illumination wavelength, λ = 632 nm. In this section, we extract subsets of images from the entire dataset to build several illumination-angle subsampled data sets. Figures 5(a2) and 5(b2) display small regions of the raw LR images [Figs. 5(a1) and 5(b1)] for Rcam = 0.97 and 1.94. Figures 5(c1)–5(c3) and 5(d1)–5(d3) present the recovered HR intensity images of the USAF target with different Rcam and Roverlap. It is obvious that once Roverlap decreases below 31.81% when Rcam > 1, the recovery accuracy drops significantly. On the other hand, for the recovery results with Roverlap > 31.81%, most of the resolution features are recognizable. In addition, for those FPM platforms with Rcam < 1, much more spectrum overlap is required to achieve the expected recovery quality; for example, 64.18% aperture overlap is needed for Rcam = 0.97.

Fig. 5 Experimental recovered results of a USAF target with different spatial-sampling-ratio Rcam and aperture-overlapping-rate Roverlap.

3.2. Spectrum sampling criterion for illumination-angle multiplexed FPM

As introduced in [26], several LED elements of the same color are lit up together so that the sample is illuminated from different angles simultaneously, termed illumination-angle multiplexed FPM, and the data acquisition efficiency can be enhanced significantly. However, this illumination-angle multiplexed FPM method can also be affected by the aperture-overlapping-rate in ways that are not immediately apparent.

To illustrate the importance of the spectrum overlapping percentage in multiplexed FPM, we first apply the illumination-angle multiplexed FPM technique in simulations with the same Roverlap but with different numbers of multiplexed illumination sources. In Figs. 6(a1)–6(a4), we show the recovered HR images under four different multiplexing conditions; their frequency spectra are shown in Figs. 6(b1)–6(b4), respectively. Obviously, as the number of multiplexed LEDs increases, the recovery quality of the intensity images does not degrade noticeably, but their spectra show that a lot of high-frequency information is lost. Then, instead of using illumination-angle multiplexed FPM, we introduce an illumination-angle subsampled FPM scheme to recover the HR image with a reduced Roverlap by increasing the distance between adjacent LEDs (see details in Appendix B). The reconstructed HR images and spectra are presented in Figs. 6(c1)–6(c4) and Figs. 6(d1)–6(d4), respectively. As Roverlap decreases in the illumination-angle subsampled FPM method, the numbers of captured LR images in Figs. 6(c1)–6(c4) are kept nearly the same as in the corresponding panels of Figs. 6(a1)–6(a4). As can be seen, illumination-angle subsampled FPM obtains a better recovery quality with more high-frequency information from the same data quantity.

Fig. 6 Comparison of the illumination-angle multiplexed FPM with different multiplexing strategies and the illumination-angle subsampled FPM with different aperture-overlapping-rate Roverlap using simulations.

We also experimentally evaluate the performance of the illumination-angle multiplexed FPM and the illumination-angle subsampled FPM. The recovered HR images and spectra using illumination-angle multiplexed FPM are displayed in Figs. 7(a1)–7(a4) and Figs. 7(b1)–7(b4) with the number of multiplexed LEDs increasing, while the recovered HR images and spectra using illumination-angle subsampled FPM are displayed in Figs. 7(c1)–7(c4) and Figs. 7(d1)–7(d4) with Roverlap decreasing. Since the number of iterations can significantly affect the recovery quality of the multiplexed FPM techniques, all the FPM algorithms in this paper are iterated 20 times to make sure that all the updating processes converge to stable solutions. Compared with Figs. 7(d1)–7(d4), illumination-angle multiplexed FPM loses a great deal of high-frequency information as the data quantity is reduced [Figs. 7(b1)–7(b4)]. This observation suggests that, by dispensing with the complexity of multiplexing, illumination-angle subsampled FPM may be a much simpler approach that can reduce the entire acquired data quantity while maintaining high recovery accuracy.

Fig. 7 Experimental results of a USAF target recovered using the illumination-angle multiplexed FPM with different multiplexing strategies and the illumination-angle subsampled FPM with different aperture-overlapping-rate Roverlap, respectively.

3.3. Spectrum sampling criterion for wavelength multiplexed FPM

Besides illumination-angle multiplexed FPM, the R/G/B channels of the LEDs can be lit up simultaneously in wavelength multiplexed FPM, and the HR color image of the object can be recovered from LR monochromatic images [27]. However, this wavelength multiplexed FPM method cannot accomplish a correct reconstruction without a priori knowledge of the mean intensity values in the R/G/B channels. Simply imagine that we measure a color filter (assuming its transmittances in the R/G/B channels are 1, 0, 0, respectively) and an attenuator (assuming its transmittances in the R/G/B channels are 1/3, 1/3, 1/3, respectively): the acquired data would be exactly the same under white light illumination. Thus, it would be impossible to recover two different objects correctly from one dataset. Figure 8 presents a typical example of an incorrect reconstruction using wavelength multiplexed FPM in simulations.

Fig. 8 Simulation results of the ordinary wavelength multiplexed FPM and corrected wavelength multiplexed FPM.

The ideal HR color image is presented in Fig. 8(a1), and its three monochrome images in the R/G/B channels are shown in Figs. 8(a2)–8(a4). Using the wavelength multiplexed FPM method, the recovered HR color image and its three monochrome channels are shown in Figs. 8(b1)–8(b4), respectively. As can be seen, the recovered color image unfortunately converges to a white image. This is because the mean values of the three monochrome channels [Figs. 8(b2)–8(b4)] are actually different, but they unexpectedly converge to the same value using wavelength multiplexed FPM. Therefore, to address this problem, we propose a corrected wavelength multiplexed FPM scheme in this section. The only difference between the corrected and ordinary wavelength multiplexed FPM is that three extra LR monochrome images are recorded under R/G/B monochrome illuminations, respectively, and are used to correct the mean values of the R/G/B channels during the iterative reconstruction process. Using the corrected wavelength multiplexed FPM, the recovered HR color image and its three monochrome channels are shown in Figs. 8(c1)–8(c4), respectively. Obviously, the corrected wavelength multiplexed FPM provides a better color recovery quality than the ordinary wavelength multiplexed FPM.
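One plausible reading of this correction step is sketched below; the exact placement of the correction inside the iteration is not specified here, so the snippet only shows a per-channel mean rescaling against the three extra monochrome LR images, with hypothetical array names.

```python
import numpy as np

def correct_channel_means(recovered_rgb, reference_lr_rgb):
    """recovered_rgb: three HR intensity channels (R, G, B) from the current iteration;
    reference_lr_rgb: the three extra LR images captured under R/G/B illumination."""
    corrected = []
    for hr_channel, lr_reference in zip(recovered_rgb, reference_lr_rgb):
        scale = lr_reference.mean() / max(hr_channel.mean(), 1e-12)
        corrected.append(hr_channel * scale)   # pin the channel mean to its reference
    return corrected
```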

Although the corrected wavelength multiplexed FPM can compensate for the color recovery error to some extent, this wavelength multiplexed technique is also markedly affected by the spectrum redundancy. To illustrate the importance of the spectrum overlap in corrected wavelength multiplexed FPM, we first apply the corrected wavelength multiplexed FPM technique with different Roverlap using simulations. Figures 9(a) and 9(b) display the captured LR images with R/G/B monochrome illuminations and white light illumination, respectively. In Figs. 9(c1)–9(c4), we show the HR images recovered using corrected wavelength multiplexed FPM under four different Roverlap when Rcam = 1.94. As can be seen, when the aperture-overlapping-rate Roverlap drops below 81.88%, the red channel of the object's image cannot be recovered accurately. Then, similar to Section 3.2, we also utilize the illumination-angle subsampled FPM to recover the HR color image with monochrome illuminations, and the reconstructed color images are presented in Figs. 9(d1)–9(d4). Here we also decrease Roverlap so that the numbers of captured LR images in Figs. 9(c1)–9(c4) and Figs. 9(d1)–9(d4) are nearly the same, respectively. As can be seen, compared with corrected wavelength multiplexed FPM, illumination-angle subsampled FPM preserves color better for the same data quantity, even when there is no overlap between the frequency apertures in the Fourier domain.

Fig. 9 Comparison of the corrected wavelength multiplexed FPM and the illumination-angle subsampled FPM with different aperture-overlapping-rate Roverlap using simulations.

We also experimentally evaluate the performance of the corrected wavelength multiplexed FPM and illumination-angle subsampled FPM techniques with different Roverlap. In the experiments, a sample of stained human kidney vessel cells is used as the object. Figures 10(a1) and 10(b1) display small regions of the captured LR images with R/G/B monochrome illuminations and white light illumination, respectively, while Figs. 10(a2) and 10(b2) present the full-FOV raw LR images with green and white illuminations. In Figs. 10(c1)–10(c3), we show the HR images recovered using corrected wavelength multiplexed FPM under three different Roverlap when Rcam = 1.94. As can be seen, the green channel of the object's image cannot be recovered accurately and the recovered images appear pinkish. On the other hand, by employing illumination-angle subsampled FPM, correct recovery results are obtained from the same data quantity, as shown in Figs. 10(d1)–10(d3). Therefore, compared with the HR image taken by a conventional microscope with a 20× objective lens [Fig. 10(e)], these computational reconstruction results suggest that illumination-angle subsampled FPM may be the more accurate approach for achieving color FPM with the smallest data quantity.

Fig. 10 Experimental results of a sample of stained human kidney vessel cells recovered using the corrected wavelength multiplexed FPM and the illumination-angle subsampled FPM with different aperture-overlapping-rate Roverlap, respectively.

4. Conclusion

In this paper, we investigate the spatial and spectrum data redundancy requirements of the FPM recovery process and introduce sampling criteria for the conventional and state-mixed FPM techniques in both object and frequency spaces. Briefly speaking, for the ordinary FPM method without state mixtures, the total frequency area of all the illumination apertures needs to be approximately two times larger than the frequency area of the final synthetic aperture to achieve a high-quality reconstruction of both intensity and phase. Specifically, when the Nyquist sampling criterion is satisfied in the FPM platform, a minimum 31.81% frequency aperture overlapping percentage is needed for a successful reconstruction using the conventional FPM method. On the other hand, for those FPM techniques plagued with state mixtures, much more data redundancy is needed to achieve the expected recovery accuracy.

In particular, for the mixed state of imperfect detection, we propose an upsampled FPM scheme to solve the pixel aliasing problem and obtain the best recovery quality with the largest FOV allowed by the objective lens. However, it is impossible to expand the SBP in FPM without limit without sacrificing data acquisition efficiency or acquiring a soaring data quantity. In other words, the spatial-spectrum sampling trade-off implies that a huge spectrum sampling density is required to compensate for the lack of spatial resolution. Furthermore, for the mixed state of partially coherent illumination, we introduce an alternative illumination-angle subsampled FPM scheme, which avoids the complexity of decoherence, to reduce the number of acquired LR images while guaranteeing the recovery quality. Simulation and experimental results demonstrate that the state-mixed techniques cannot provide a significant performance advantage, since their performance is greatly limited by the data redundancy requirement of FPM. Therefore, illumination-angle subsampled FPM may be a better choice for accomplishing a successful recovery with the same (or even smaller) data quantity as used in the multiplexed FPM schemes. This paper not only gives guidelines for designing the most suitable FPM platform, but also provides insights into the capabilities and limitations of the FPM approach.

Appendix A: Definitions of Rcam and Roverlap

In order to explain the definitions of Rcam and Roverlap more clearly, we present two diagrams in Figs. 11(a1) and 11(b1). Figure 11(a1) shows the objective lens's frequency aperture (brown circle) within the Fourier spectrum of an image (green square); fobj and fcam denote the resolution limits of the pupil and of the captured image, respectively. In this paper, the spatial-sampling-ratio is defined as Rcam = fcam/fobj, and four diagrammatic sketches with different Rcam are displayed in Figs. 11(a2)–11(a5). As can be seen, once Rcam < 1, some of the high-frequency information transmitted by the objective lens exceeds the imaging resolution limit (the red part outside the square), and these frequency components fold back into the image spectrum, which is the well-known pixel aliasing problem.

Fig. 11 Diagrams of the definitions of Rcam and Roverlap and examples of different Rcam and Roverlap.

In addition, we present a diagram of the definition of Roverlap in Fig. 11(b1). The red and blue circles are two frequency apertures of the objective lens in the Fourier domain corresponding to the two nearest LED elements at the centre of the LED array. fLED in Fig. 11(b1) denotes the distance between the centres of those two apertures, which grows as DLED increases. Here we define the area of the purple overlapping region between the two circles as Soverlap and the area of the red frequency aperture as Sobj, so the aperture-overlapping-rate can be defined as Roverlap = Soverlap/Sobj. After a simple geometric derivation, it is found that Roverlap is directly determined by fobj and fLED. Furthermore, six diagrammatic sketches with different Roverlap are displayed in Figs. 11(b2)–11(b7). As can be seen, as fLED increases, the area of the purple overlapping region shrinks correspondingly.
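As a quick sanity check of this geometric deduction (a numerical illustration under assumed values, not part of the original analysis), the sketch below compares the analytic overlap ratio with a Monte Carlo estimate of Soverlap/Sobj for two circles of radius fobj whose centres are fLED apart.

```python
import numpy as np

rng = np.random.default_rng(0)
f_obj, f_led = 1.0, 0.6                     # arbitrary units; R_LED = f_obj / f_led

# Monte Carlo estimate of S_overlap / S_obj (points are drawn over the first aperture)
pts = rng.uniform(-f_obj, f_obj, size=(200_000, 2))
in_first = (pts ** 2).sum(axis=1) <= f_obj ** 2
in_second = ((pts[:, 0] - f_led) ** 2 + pts[:, 1] ** 2) <= f_obj ** 2
mc_ratio = (in_first & in_second).sum() / in_first.sum()

# analytic expression derived above
r_led = f_obj / f_led
x = 1.0 / (2.0 * r_led)
analytic = (2.0 * np.arccos(x) - np.sqrt(1.0 - x ** 2) / r_led) / np.pi
print(mc_ratio, analytic)                   # the two ratios should agree closely
```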

Appendix B: Diagrams of the FPM platform and different illumination-angle subsampling conditions

Figure 12(a) shows a diagram of the conventional LED-array-based FPM platform used in this paper. In addition, to account for different illumination-angle subsampling conditions, we present four diagrammatic sketches of the central part of the LED array in Figs. 12(b1)–12(b4). Figure 12(b1) shows the LED array with no black LED elements, which means that all the LEDs are lit up successively (the red LED is at the centre of the LED matrix). In contrast, the black LED elements in Figs. 12(b2)–12(b4) represent LEDs that are not lit up during the FPM measurement process. In other words, we select particular LEDs from the entire array to illuminate the object successively with an expanded DLED. Therefore, when the imaging system parameters (λ, NA, Mag, Δxcam) and the distance between the LED array and the sample (h) remain unchanged, the aperture-overlapping-rate Roverlap decreases as DLED increases.
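A minimal sketch of such a subsampling mask is given below (illustrative only; the indexing convention is an assumption): keeping every k-th LED along both axes, aligned with the central LED, expands DLED by a factor of k while leaving the rest of the platform unchanged.

```python
import numpy as np

def subsample_led_array(n_side, k):
    """Boolean mask over an n_side x n_side LED array; True = LED is lit in sequence."""
    idx = np.arange(n_side)
    centre = n_side // 2
    keep = (np.abs(idx - centre) % k) == 0   # rows/columns aligned with the central LED
    return np.outer(keep, keep)

mask = subsample_led_array(21, 2)            # light every other LED: D_LED doubles
print(int(mask.sum()), "of", mask.size, "LEDs used")   # 121 of 441
```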

Fig. 12 Diagrams of the FPM platform and different illumination-angle subsampling conditions.

Acknowledgments

This work was supported by the National Natural Science Fund of China (11574152, 61505081), ‘Six Talent Peaks’ project (2015-DZXX-009, Jiangsu Province, China) and ‘333 Engineering’ research project (BRA2015294, Jiangsu Province, China), Fundamental Research Funds for the Central Universities (30915011318, 30916011322), and Open Research Fund of Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense (3092014012200417). C. Zuo thanks the support of the ‘Zijin Star’ program of Nanjing University of Science and Technology.

References and links

1. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013). [CrossRef]  

2. X. Ou, R. Horstmeyer, C. Yang, and G. Zheng, “Quantitative phase imaging via Fourier ptychographic microscopy,” Opt. Lett. 38(22), 4845–4848 (2013). [CrossRef]   [PubMed]  

3. R. A. Gonsalves, “Phase retrieval and diversity in adaptive optics,” Opt. Eng. 21, 215829 (1982). [CrossRef]  

4. J. R. Fienup, “Phase-retrieval algorithms for a complicated optical system,” Appl. Opt. 32(10), 1737–1746 (1993). [CrossRef]   [PubMed]  

5. L. Allen and M. Oxley, “Phase retrieval from series of images obtained by defocus variation,” Opt. Commun. 199(1), 65–75 (2001). [CrossRef]  

6. B. H. Dean and C. W. Bowers, “Diversity selection for phase-diverse phase retrieval,” J. Opt. Soc. Am. A 20(8), 1490–1504 (2003). [CrossRef]  

7. H. M. L. Faulkner and J. M. Rodenburg, “Movable aperture lensless transmission microscopy: A novel phase retrieval algorithm,” Phys. Rev. Lett. 93(2), 023903 (2004). [CrossRef]   [PubMed]  

8. P. Bao, F. Zhang, G. Pedrini, and W. Osten, “Phase retrieval using multiple illumination wavelengths,” Opt. Lett. 33(4), 309–311 (2008). [CrossRef]   [PubMed]  

9. M. Guizar-Sicairos and J. R. Fienup, “Phase retrieval with transverse translation diversity: a nonlinear optimization approach,” Opt. Express 16(10), 7264–7278 (2008). [CrossRef]   [PubMed]  

10. C. J. Schwarz, Y. Kuznetsova, and S. R. Brueck, “Imaging interferometric microscopy,” Opt. Lett. 28(16), 1424–1426 (2003). [CrossRef]   [PubMed]  

11. S. A. Alexandrov, T. R. Hillman, T. Gutzler, and D. D. Sampson, “Synthetic aperture fourier holographic optical microscopy,” Phys. Rev. Lett. 97(16), 168102 (2006). [CrossRef]   [PubMed]  

12. V. Micó, Z. Zalevsky, P. García-Martínez, and J. García, “Synthetic aperture superresolution with multiple off-axis holograms,” J. Opt. Soc. Am. A 23(12), 3162–3170 (2006). [CrossRef]

13. J. Di, J. Zhao, H. Jiang, P. Zhang, Q. Fan, and W. Sun, “High resolution digital holographic microscopy with a wide field of view based on a synthetic aperture technique and use of linear CCD scanning,” Appl. Opt. 47(30), 5654–5659 (2008). [CrossRef]   [PubMed]  

14. L. Granero, V. Micó, Z. Zalevsky, and J. García, “Synthetic aperture superresolved microscopy in digital lensless Fourier holography by time and angular multiplexing of the object information,” Appl. Opt. 49(5), 845–857 (2010). [CrossRef]   [PubMed]

15. T. Gutzler, T. R. Hillman, S. A. Alexandrov, and D. D. Sampson, “Coherent aperture-synthesis, wide-field, high-resolution holographic microscopy of biological tissue,” Opt. Lett. 35(8), 1136–1138 (2010). [CrossRef]   [PubMed]  

16. A. E. Tippie, A. Kumar, and J. R. Fienup, “High-resolution synthetic-aperture digital holography with digital phase and pupil correction,” Opt. Express 19(13), 12027–12038 (2011). [CrossRef]   [PubMed]  

17. S. Pacheco, B. Salahieh, T. Milster, J. J. Rodriguez, and R. Liang, “Transfer function analysis in epi-illumination Fourier ptychography,” Opt. Lett. 40(22), 5343–5346 (2015). [CrossRef]   [PubMed]  

18. P. Thibault and A. Menzel, “Reconstructing state mixtures from diffraction measurements,” Nature 494(7435), 68–71 (2013). [CrossRef]   [PubMed]  

19. T. B. Edo, D. J. Batey, A. M. Maiden, C. Rau, U. Wagner, Z. D. Pešić, T. A. Waigh, and J. M. Rodenburg, “Sampling in x-ray ptychography,” Phys. Rev. A 87(5), 053850 (2013). [CrossRef]  

20. D. J. Batey, T. B. Edo, C. Rau, U. Wagner, Z. D. Pešić, T. A. Waigh, and J. M. Rodenburg, “Reciprocal-space up-sampling from real-space oversampling in x-ray ptychography,” Phys. Rev. A 89(4), 043812 (2014). [CrossRef]  

21. D. J. Batey, D. Claus, and J. M. Rodenburg, “Information multiplexing in ptychography,” Ultramicroscopy 138, 13–21 (2014). [CrossRef]   [PubMed]  

22. Z. Bian, S. Dong, and G. Zheng, “Adaptive system correction for robust Fourier ptychographic imaging,” Opt. Express 21(26), 32400–32410 (2013). [CrossRef]  

23. X. Ou, G. Zheng, and C. Yang, “Embedded pupil function recovery for Fourier ptychographic microscopy,” Opt. Express 22(5), 4960–4972 (2014). [CrossRef]   [PubMed]  

24. S. Dong, Z. Bian, R. Shiradkar, and G. Zheng, “Sparsely sampled Fourier ptychography,” Opt. Express 22(5), 5455–5464 (2014). [CrossRef]   [PubMed]  

25. J. Sun, Q. Chen, Y. Zhang, and C. Zuo, “Efficient positional misalignment correction method for Fourier ptychographic microscopy,” Biomed. Opt. Express 7(4), 1336–1350 (2016). [CrossRef]  

26. L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for Fourier Ptychography with an LED array microscope,” Biomed. Opt. Express 5(7), 2376–2389 (2014). [CrossRef]   [PubMed]  

27. S. Dong, R. Shiradkar, P. Nanda, and G. Zheng, “Spectral multiplexing and coherent-state decomposition in Fourier ptychographic imaging,” Biomed. Opt. Express 5(6), 1757–1767 (2014). [CrossRef]   [PubMed]  

28. J. Sun, Y. Zhang, C. Zuo, Q. Chen, S. Feng, Y. Hu, and J. Zhang, “Coded multi-angular illumination for Fourier ptychography based on Hadamard codes,” Proc. SPIE 9524, 95242C (2015).

29. L. Tian, Z. Liu, L. H. Yeh, M. Chen, J. Zhong, and L. Waller, “Computational illumination for high-speed in vitro Fourier ptychographic microscopy,” Optica 2(10), 904–911 (2015). [CrossRef]  
