
Wide-field Fourier ptychographic microscopy using laser illumination source

Open Access

Abstract

Fourier ptychographic (FP) microscopy is a coherent imaging method that can synthesize an image with a higher bandwidth using multiple low-bandwidth images captured at different spatial frequency regions. The method’s demand for multiple images drives the need for a brighter illumination scheme and a high-frame-rate camera for faster acquisition. We report the use of a guided laser beam as the illumination source for an FP microscope. The setup uses a mirror array and a 2-dimensional scanning Galvo mirror system to provide the sample with plane-wave illumination at diverse incidence angles. The use of a laser introduces speckle artifacts into the captured images due to reflections between glass surfaces in the system. These appear as slowly varying background fluctuations in the final reconstructed image. We are able to mitigate these artifacts by including a phase image obtained by differential phase contrast (DPC) deconvolution in the FP algorithm. We use a 1-Watt laser configured to provide a collimated beam with 150 mW of power and a beam diameter of 1 cm, allowing a total capturing time of 0.96 seconds for 96 raw FPM input images in our system, with the camera sensor’s frame rate being the bottleneck for speed. We demonstrate a factor of 4 resolution improvement using a 0.1 NA objective lens over the full camera field-of-view of 2.7 mm by 1.5 mm.

© 2016 Optical Society of America

1. Introduction

Fourier ptychographic microscopy (FPM) is a recently developed computational imaging system capable of acquiring the complex and quantitative field distribution of a sample [1, 2]. It borrows from the field of ptychography, which was originally developed in the 1970s to reconstruct the complex information about a sample from intensity measurements of its electron diffraction patterns generated by scanning an illumination field over the sample region [3]. Unlike conventional microscopes that can only image the intensity distribution, FPM’s complex sample field contains both amplitude and phase information. FPM achieves this with a simple modification of the sample illumination, without the need for a separate reference beam or mechanical movement within the system as in other phase imaging systems. It uses a coherent light source to image different components of the sample’s Fourier spectrum, and uses a phase retrieval algorithm to synthesize these images into a high-resolution complex field distribution. Effectively, it increases the numerical aperture of the imaging lens by the illumination NA.

There have been various improvements and applications of FPM. A numerical aperture of over 1, usually only achievable in a conventional microscope by using an immersion medium between the objective lens and the sample, was realized with a low-NA objective and an arrangement of LEDs allowing for steep illumination angles [4]. The high resolution and wide field-of-view (FOV) of FPM showed potential applications in white-blood-cell counting [5] and resource-limited imaging scenarios [6–8]. Multiplexed illumination patterns allowed for high-resolution and high-speed phase imaging of unlabeled in-vitro cells [9]. Borrowing from the simultaneous probe retrieval in X-ray ptychography [10], an iterative algorithm that reconstructs the aberration of the microscope system simultaneously with the sample spectrum was developed to allow for removal of spatially varying aberrations throughout the microscope’s field of view [11], making FPM particularly suitable for imaging samples with uneven surfaces [12]. The characterized aberration function further allowed for removing spatially varying aberrations from fluorescence images for even performance across the FOV [13]. Insights from FPM carried over to incoherent imaging to improve the resolution of fluorescence images [14]. There have also been numerous efforts to improve Fourier ptychographic (FP) reconstruction by adopting more noise-robust algorithms [15–18]. Alternative FPM modalities involving aperture scanning instead of angular illumination were demonstrated, which allowed for imaging the complex field of a thick specimen [19, 20] and estimating optical aberrations [21]. Imaging a thick specimen with angular illumination also became possible by employing the first Born approximation [22] or a multislice coherent model [23].

With wider adoption of FPM for imaging and the need to image fast dynamics, faster capturing speed is desired. There have been several efforts in this respect. Using LEDs, the required number of captured images can be reduced by optimizing LED illumination arrangements based on sparsity [18,24,25], or by illuminating multiple LEDs of either the same color [26] or different colors [27, 28]. Ref. [29] was the first to demonstrate a high-power laser beam coupled with a DMD, which allowed for a shot-noise-limited image capturing process, overcoming the power limitation of LEDs. All these methods address the slow capture issue, but they are not without downsides. By reducing the number of captured images via multiplexing, one increases the shot noise per individual sub-spectrum of the sample. In Ref. [29], although the illumination power is easily scalable by using a stronger laser, the on-state mirrors constitute only a small portion of the entire DMD area. Given n desired illumination angles, only 1/n of the DMD-incident collimated laser beam is utilized per illumination angle. The rest of the area, which is in the off-state, deflects a large portion of the input laser power either to a beam dump or at an angle that scatters strongly in the optical path and contributes negatively to the captured images. Also, the FOV was limited to around 50 μm by 50 μm for a 0.04 NA 1.25× objective, which is much smaller than the FOV typically offered by such an objective lens. Another feature overlooked by many FPM illumination schemes implemented so far is the efficient usage of the illumination beam to improve capturing speed: an LED’s radiation profile typically follows a Lambertian distribution [30] and only a small portion of it actually ends up illuminating the sample, though there has been an effort to minimize the loss by arranging LEDs in a dome [8]; and a DMD utilizes only a small fraction of the input laser beam for each sample illumination angle.

Here, we present an FPM setup illuminated by a laser guided by a Galvo mirror and a mirror array to achieve efficient illumination. We are able to utilize 15% of the total laser output power for sample illumination with the proposed collimation setup. The illumination power is increased by more than 3 orders of magnitude, as evidenced by the decrease in the required camera exposure time from several seconds per illumination angle with an LED [1] to 500 microseconds in our setup. The utilization ratio can be increased further by using a single-mode laser and optimized collimation optics.

The benefit of using a mirror array over a condenser lens and a relay system as done in Ref. [29], or a set of microlens arrays as suggested by Ref. [31], to guide the illumination beam is that the illuminating wavefront does not suffer from additional aberrations induced by the extra lenses needed for f-theta scanning. All lenses have some level of field curvature, including f-theta lenses. Different angular plane waves provided via a scanned point at the image plane of an f-theta lens would have differently distorted wavefronts due to the field curvature on the image plane, and these would negatively impact FP reconstruction if not properly corrected for. The angular scanning range is essentially limited by this fact. With a Galvo system and a mirror array, however, the illumination beam quality is determined only by the flatness of the mirrors, and the angular scanning range is defined just by the geometrical location of the mirrors.

We demonstrate that our system can reconstruct the quantitative phase image of a sample and image the sample’s complex field at 4 times the lateral spatial resolution conventionally feasible with the employed objective lens under coherent illumination. We obtain the phase image of a microbead sample to demonstrate the quantitative phase imaging capability, and image both phase and amplitude Siemens star targets, as suggested by Ref. [32], to show the lateral spatial resolution improvement.

The coherence of the laser leads to speckle artifacts that present a challenge in our reconstruction process, the majority of which originates from the strong unscattered laser beam multiply reflected between glass surfaces in the optical path. Although several physical methods of reducing the laser speckles exist (e.g. rotating diffuser), we observe that the speckles in our system are mostly slowly varying fluctuations existing predominantly in the bright-field region and thus are easily mitigated by phase information obtained with the differential phase contrast (DPC) deconvolution method [33]. It effectively rejects out-of-focus signals, reconstructs quantitative phase image within the bright-field region, and does not require any additional hardware or captured images. Incorporating a DPC phase in FPM was previously tested in [9] during FPM initialization for better reconstruction of low-frequency phase information. We show that low spatial frequency artifacts due to speckles are effectively removed from the reconstructed phase of the sample. Overall, our laser FPM demonstrates wide FOV and high quality image reconstruction with a guided collimated laser beam.

2. Principle and algorithm

2.1. Principle of FPM

Our FPM algorithm operates on the principle that the sample to be imaged is very thin [4]. This essentially turns it into a two dimensional sample, similar to a thin transparent film with an absorption and phase profile on it. When the sample on the stage of a 4f microscope is perpendicularly illuminated by a light source that is coherent both temporally (i.e. monochromatic) and spatially (i.e. plane wave), such as a collimated laser or an LED placed sufficiently far away [1], the complex field transmitted through the sample containing both amplitude and phase is Fourier transformed when it passes through the objective lens and arrives at the objective’s back-focal plane. The field is then Fourier transformed again as it propagates through the microscope’s tube lens to be imaged onto a camera sensor or in a microscopist’s eyes. The amount of the sample’s detail the microscope can capture is defined by the objective’s numerical aperture (NAobj) which physically limits the extent of the sample’s Fourier spectrum being transmitted to the camera. Thus, the NAobj acts as a low-pass filter in a 4f imaging system with a coherent illumination source.

In the following, we limit our discussion to a one dimensional case for simplicity. Extending to two dimensions for a thin sample is straightforward. Under illumination by the same light source but at an angle θ with respect to the sample’s normal, the field at the sample plane, ψoblique(x), can be described as:

$$\psi_{\mathrm{oblique}}(x) = \psi_{\mathrm{sample}}(x)\,\exp(j k_0 x \sin\theta)$$
where ψsample(x) is the sample’s complex spatial distribution, x is a one dimensional spatial coordinate, and k0 is given by 2π/λ where λ is the illumination wavelength. This field is Fourier transformed by the objective lens, becoming:
$$\Psi_{\mathrm{oblique}}(k) = \int \psi_{\mathrm{sample}}(x)\,\exp(j k_0 x \sin\theta)\,\exp(-j k x)\,dx = \Psi_{\mathrm{sample}}(k - k_0 \sin\theta)$$
at the objective’s back-focal plane, where Ψoblique and Ψsample are the Fourier transforms of ψoblique and ψsample, respectively, and k is a one dimensional coordinate in k-space. Ψsample(k) is thus laterally shifted at the objective’s back-focal plane by k0 sin θ. Because NAobj is physically fixed, a different sub-region of Ψsample(k) is relayed down the imaging system for each illumination angle. Thus, we are able to acquire more regions of Ψsample(k) by capturing many images under varying illumination angles than we would by capturing only one image under normal illumination.

Each sub-sampled Fourier spectrum from the objective’s back-focal plane is Fourier transformed again by the tube lens, and the field’s intensity value is captured by the camera sensor:

$$I_{\mathrm{oblique}}(x) = \left|\mathcal{F}^{-1}\{\Psi_{\mathrm{oblique}}(k)\,P(k)\}\right|^2$$
where $\mathcal{F}^{-1}$ is the inverse Fourier transform operator and P(k) is the pupil function defined by the objective’s NA. Due to the loss of phase information in the intensity measurement, the sub-sampled images cannot be directly combined in the Fourier domain. We use a Fourier ptychographic (FP) algorithm [1], essentially a phase retrieval algorithm, to reconstruct the phase and amplitude of the expanded Fourier spectrum. The algorithm requires the low-passed images to be captured such that neighboring images contain overlapping regions in the Fourier domain [18]. We allow a 60% overlap between adjacent images, and this redundancy allows the FP algorithm to infer the missing phase information through an iterative method described in the following section.
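To make the forward model concrete, the following sketch (Python/NumPy; variable names and grid conventions are ours rather than the authors’ code, and sensor downsampling is omitted) simulates a single raw capture from a known sample field: the illumination tilt shifts the spectrum, the pupil selects a sub-region, and the camera records the intensity of the inverse transform.

```python
import numpy as np

def simulate_oblique_capture(sample, pupil, illum_na, wavelength, dx):
    """Sketch of the coherent forward model: a tilted plane wave shifts the
    sample spectrum by k0*sin(theta) = NA/lambda, the objective pupil
    low-passes it, and the camera records intensity. `pupil` is assumed to be
    sampled on the same fftfreq-ordered grid as the spectrum."""
    n = sample.shape[0]                      # assume a square n-by-n field
    df = 1.0 / (n * dx)                      # spatial-frequency spacing
    shift = [int(round(na / wavelength / df)) for na in illum_na]

    spectrum = np.fft.fft2(sample)           # Psi_sample(k)
    # Psi_sample(k - k0*sin(theta)): the spectrum re-centered by the tilt.
    shifted = np.roll(spectrum, (shift[0], shift[1]), axis=(0, 1))

    field = np.fft.ifft2(shifted * pupil)    # low-passed field on the camera
    return np.abs(field) ** 2                # intensity measurement
```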

2.2. DPC Algorithm and update process

The high temporal and spatial coherence of the collimated laser beam makes the system sensitive to any optical imperfections in the illumination path that cause coherent scattering [33]. For example, light multiply reflected from glass surfaces, such as those of a microscope slide, can constructively and destructively interfere to give fluctuating background signals in the captured images. These fluctuations can contribute negatively to the FP reconstruction since a slowly fluctuating background in the captured intensity images translates into a slowly fluctuating phase in the reconstruction process. To mitigate this, we include an additional updating step involving DPC images in the FP iterative procedure. We use the DPC-deconvolved phase image to remove the speckle artifacts in our reconstructed image for two reasons: (1) acquiring the DPC-deconvolved phase does not require any additional hardware or measurements; and (2) the slowly varying speckle artifacts mostly influence the bright-field images in our captured dataset, suggesting that they can be removed by the DPC deconvolution method, which generates a speckle-free phase image within the bright-field region. Incorporating DPC phase in FPM was first employed in Ref. [9] for robust phase initialization and better algorithm convergence.

DPC imaging via asymmetric illumination is immune to these coherence artifacts because it is a partially coherent method which achieves better depth sectioning [33]. Instead of considering coherent images captured under a single illumination angle or a single point source, it considers multiple point sources’ illumination wavefronts arriving from multiple angles that add incoherently with each other to be captured into an image. Because the fluctuating background signals originating from unwanted reflections related to each point source are from out-of-focal-plane regions in the optical path, they are averaged out in the incoherent addition process. The partial coherence of illumination thus effectively reduces the coherence artifacts.

A DPC image of a sample is formed from images captured with multiple asymmetric illumination patterns, typically (but not necessarily) the top and bottom halves of a monochromatic, spatially incoherent illuminator switched on one at a time [33]. In the partially coherent imaging model, it is assumed that the illumination source is a collection of point sources that are incoherent with each other [34]. Each point source placed sufficiently far from the sample is assumed to provide a plane-wave illumination to the sample in the formulation of the DPC imaging model [33]. This assumption implies that we can equivalently use multiple discrete plane waves illuminating the sample in asymmetric patterns to form DPC images. Given an object’s complex function, ψsample(x), one of the plane waves i within, say, the top half of the asymmetric illumination pattern provides an incident field of exp(jk0x sin θi) to the object, and the complex field is relayed by the 4f system to form a complex image on the detector plane:

$$\psi_i(x) = \mathcal{F}^{-1}\{P(k)\,\mathcal{F}\{\psi_{\mathrm{sample}}(x)\exp(j k_0 x \sin\theta_i)\}\}$$
where $\mathcal{F}$ is the Fourier transform operation performed by the lenses in the 4f system. Complex images formed by the other plane waves in the illumination pattern add up in intensity because each point source is incoherent with the others, and this summation is captured by the detector:
$$I_{\mathrm{top}}(x) = \sum_{i\,\in\,\mathrm{top}} |\psi_i(x)|^2$$

As evident in the above equation, the asymmetrically illuminated images required for the generation of a DPC image can be formed not only by providing asymmetrical illumination during the capturing process, but also by summing up individual intensity images of the sample under different single plane wave illuminations. Since we already capture multiple images of a sample with a plane wave incident at varied angles, we do not need to capture any additional images but only introduce a minor computational overhead to generate DPC images.
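As a concrete illustration of this bookkeeping, here is a minimal sketch (hypothetical input names; Python/NumPy) that sums the single-angle bright-field intensity images into the two half-pupil images and forms the normalized difference image:

```python
import numpy as np

def synthesize_dpc(images, illum_azimuths_deg, axis_deg=0.0):
    """Build one DPC image from bright-field captures taken under single
    plane-wave illuminations. `images` is a list of 2D intensity arrays and
    `illum_azimuths_deg` the azimuthal angle of each illumination direction
    in the pupil plane; `axis_deg` sets the split axis of the half-pupils."""
    top = np.zeros_like(images[0], dtype=float)
    bot = np.zeros_like(images[0], dtype=float)
    for img, az in zip(images, illum_azimuths_deg):
        # The plane waves are mutually incoherent, so their images add in intensity.
        if np.sin(np.deg2rad(az - axis_deg)) >= 0:
            top += img                       # source point in the 'top' half
        else:
            bot += img                       # source point in the 'bottom' half
    return (top - bot) / (top + bot)         # normalized DPC image
```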

We follow the derivation in Ref. [33] to obtain quantitative DPC with our experimental setup. The method is based on the assumption that the sample’s absorption and phase are small, such that the sample’s complex transmission function, ψ(x) = exp(−μ(x) + jϕ(x)), can be approximated as ψ(x) ≈ 1 − μ(x) + jϕ(x) [35]. Under this condition, performing simple arithmetic operations on the images captured under different illumination angles generates multiple-axis DPC images and the transfer function relating the sample’s phase to the DPC images [33]. More information can be found in Appendix A. Deconvolving the transfer function from the DPC images results in the quantitative phase image of the sample, with spatial frequency information that can extend up to 2k0NAobj in k-space. The quantitative phase is accurate as long as the object obeys the weak object approximation. Our successful FPM reconstruction results show that this phase information indeed serves as a good initialization.

In our modified FP algorithm, the reconstruction of a high-resolution complex image of a sample begins by initializing the image with the low-resolution image captured under a normal illumination. As an additional step to remove the coherence artifacts from laser illumination, we update the phase of our initial guess with the DPC-deconvolved quantitative phase as follows:

$$\psi_{\mathrm{DPC}}(x) = \left|\mathcal{F}^{-1}\{\Psi(k)\,P_{\mathrm{DPC}}(k)\}\right|\exp(j\theta_{\mathrm{DPC}})$$
where Ψ(k) is the high-resolution Fourier spectrum of a sample being reconstructed, PDPC(k) is the low-pass filter with the spatial frequency extent given by the DPC transfer function mask in k-space as shown in Fig. 8(d), θDPC is the quantitative phase obtained from DPC deconvolution, and ψDPC is the simulated image with its phase updated with θDPC. Unlike intensity image updates in FP, an update with the phase from DPC deconvolution requires us to use a pupil function defined by the DPC transfer function mask instead of the objective’s pupil function because the deconvolved phase contains information within the region defined by the DPC transfer function mask in the Fourier domain. Intensity images captured at different angles are used to update the high-resolution Fourier spectrum extending to k0NAsys in k-space and the pupil function of the microscope as done in the original FP algorithm found in Ref. [1,11]. The generation of DPC phase and the Fourier spectrum update process involving the DPC phase and the intensity images constitute one iteration. DPC phase needs to be recalculated at the beginning of each iteration because the pupil function of the microscope changes during the pupil function update procedure.
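A minimal sketch of this phase-update step, assuming a binary DPC passband mask and a simple overwrite-style write-back into the spectrum (the exact update weighting used in the implementation may differ), is:

```python
import numpy as np

def apply_dpc_phase_constraint(Psi, P_dpc, theta_dpc):
    """Keep the amplitude implied by the current high-resolution spectrum
    inside the DPC passband, replace its phase with the DPC-deconvolved phase,
    and write the result back into the spectrum. Variable names are
    illustrative rather than taken from the authors' implementation."""
    field = np.fft.ifft2(Psi * P_dpc)                 # psi_DPC before the phase swap
    field = np.abs(field) * np.exp(1j * theta_dpc)    # enforce the DPC phase
    # Replace the passband portion of the spectrum; keep the rest untouched.
    return Psi * (1 - P_dpc) + np.fft.fft2(field) * P_dpc
```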

The overall algorithm is summarized in Fig. 1. We use the pupil function update procedure described in Ref. [11] called embedded pupil function recovery (EPRY) algorithm to simultaneously characterize the microscope’s aberration and remove it. For the reconstruction to converge, we delay the pupil function updating procedure as is widely done in the ptychography community [10,36]. We conduct 25 iterations without updating the pupil function and 15 with, resulting in 40 iterations in total. In the end, we obtain the high-resolution complex field of the sample and the imaging system’s pupil function.


Fig. 1 Modified FP algorithm to include DPC-generated phase into the iteration. The reconstruction begins with the raw image captured with the illumination from the center mirror element as an initial guess of the sample field. The iteration process starts by forming the sample’s quantitative phase image via DPC deconvolution with the resolution defined by the DPC transfer function. The phase of the sample field with the corresponding resolution is updated. Images captured under varying illuminations are used to update the pupil function and the sample’s Fourier spectrum up to NAsys resolution, just as in the original FP algorithm. The updated pupil function is used to generate an updated DPC-deconvolved phase image for the update process, and the iteration process repeats until convergence. In the end, we reconstruct the complex field of the sample and the pupil function.


3. Experiments and results

3.1. Setup

The imaging setup is a 4f system consisting of a 0.1 NA objective lens (Olympus 4×), a 200-mm-focal-length tube lens (Thorlabs), and a 16-bit sCMOS sensor (PCO.edge 5.5). The sensor has a pixel size of 6.5 μm and a maximum frame rate of 100 Hz at 1920×1080 resolution in global shutter mode. The sensor size limits the available FOV of a sample to 2.7 mm by 1.5 mm. On the illumination side, a 457 nm, 1 W laser beam is pinhole-filtered and collimated. A set of mirrors guides the beam such that the central part of its Gaussian profile (about 40% of the total output area) is incident on the input of a 2D Galvo mirror device (GVS 212) to give a uniform beam intensity distribution at its output. The Galvo then steers the beam to individual mirror elements on the 3D-printed array, as shown in Fig. 2. The output beam arriving at the sample plane is 1 cm in diameter and 150 mW in power. The entire beam could be used for sample illumination if the sensor size and the objective’s field number were not limiting. Although the beam diameter could be reduced to match the size of the FOV and maximize the incident power per area, the larger beam diameter compared to the FOV is maintained to allow for easy alignment of the illumination setup and high tolerance for any small angular imperfections in the mirror array design. The Galvo has an angular resolution of 0.0008°, which is sufficient for providing accurate angular illumination for FPM. Each mirror element is a 19 mm × 19 mm first-surface mirror attached to a 3D-printed rectangular tower. The mirrors have a surface flatness of 4–6λ and the 3D-printed array has a precision of 11 μm. Each tower’s top surface is sloped at a specific angle such that the beam from the Galvo is reflected towards the sample’s location. Thus, the element’s spatial location relative to the sample determines the illumination angle of the beam. The mirror array consists of 95 elements arranged to provide illumination angles such that contiguous elements produce a 60% overlap of the sample’s spectrum in the Fourier domain, as shown in Fig. 3. The total illumination NA is NAti = 0.325, with the resulting system NA being NAsys = NAobj + NAti = 0.1 + 0.325 = 0.425, effectively increasing the microscope’s NA by a factor of 4.25.
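For orientation, a back-of-the-envelope sketch of the illumination geometry (dimensions taken from the text; the actual element placement follows the Fourier-overlap layout of Fig. 3, so the edge value below is only indicative):

```python
import numpy as np

def mirror_na(x_cm, y_cm, standoff_cm=40.0):
    """Illumination NA contributed by a mirror element at lateral offset
    (x_cm, y_cm) from the optical axis, with the array ~40 cm from the sample."""
    r = np.hypot(x_cm, y_cm)
    return np.sin(np.arctan2(r, standoff_cm))

print(mirror_na(15.0, 0.0))   # edge of the 30-cm-wide array: ~0.35
print(0.1 + 0.325)            # NA_sys = NA_obj + NA_ti = 0.425
```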


Fig. 2 Experimental setup. It consists of a 4f system with the 2D Galvo mirror system and the mirror array guiding the laser illumination direction. The beam diameter is about 1 cm, covering the entire FOV captured by the camera (2.7 mm by 1.5 mm after magnification). The objective lens has an NA of 0.1 and the total illumination NA is 0.325, resulting in NAsys = 0.425.


Fig. 3 The Fourier spectrum region covered by the angularly varying illumination and the layout of the mirror array to achieve the desired coverage. With the objective NA of 0.1 and one normal plane wave illumination, the spatial frequency acquired by the system is delineated by the black circle in the Fourier domain. With varying illumination angles, we can expand the extent of the captured spatial frequency, as indicated by the red circle with the NA of 0.425. The mirror array is 30 cm wide and is placed 40 cm away from the sample plane. Each circular bandpass in the Fourier domain, with its size defined by NAobj and its location by the illumination angle provided by each mirror element, has 60% overlap with the contiguous one.


To achieve the maximum frame rate of the sCMOS sensor in the image capturing process, the exposure time is set to its minimum of 500 microseconds. Due to the camera overhead associated with, for example, storing the captured data and resetting the sensor, we achieve a 100 Hz frame rate, which is much lower than the ideal frame rate at the given exposure (i.e. 1/(500 μs)). The sensor and the Galvo are externally triggered every 10 milliseconds, resulting in 0.96 seconds of total capturing time for 95 sample images and 1 dark-noise image. Maintaining the same exposure time for all images presents a small challenge: the SNR differs drastically between images captured under bright-field illumination (NAillum < NAobj) and those captured in the dark field (NAillum > NAobj), because the unscattered laser beam comprises most of the signal from the sample, especially for naturally occurring samples such as neurons [37]. Adjusting the laser intensity for proper exposure of the bright-field images would result in low signal values in the dark-field images given the same laser intensity level and camera exposure time. As a result, the dark-field images would tend to be more affected by dark noise. To account for this, a neutral density filter is placed on each bright-field illumination mirror element. This allows the input laser intensity to be increased to obtain higher SNR in the dark-field images while preventing the bright-field images from over-exposure.

3.2. Spatial resolution

We image Siemens star targets to quantify our system’s resolution limit. The use of Siemens star resolution targets was recently proposed by Ref. [32] due to the ambiguity in resolution metrics arising from the diversity of imaging methods being developed and utilized today. The target is a spoke pattern consisting of 36 periodic line patterns extending radially, with an inner diameter of 11.46 μm and an outer diameter of 45.84 μm. The inner circumference thus corresponds to a 1 μm periodicity of the line patterns, while the outer circumference corresponds to a 4 μm periodicity. Because our system is capable of both amplitude and phase imaging, we need to quantify the resolution performance in both regimes. Two Siemens star targets are fabricated, one for amplitude spatial resolution and the other for phase spatial resolution. For the amplitude Siemens star target, a 100-nm-thick gold layer on a standard microscope glass slide is etched by focused ion beam (FIB) to form the Siemens star pattern. For the phase Siemens star target, the entire square area containing the Siemens star pattern is further etched with the same exposure, so that all of the gold layer within the square area is removed and the glass surface is etched 230 nm deep in the shape of the Siemens star pattern, producing different phase delays in the light field transmitted through the target. The patterns show up as periodic 'dark' and 'bright' regions in images obtained with our system. In quantifying the resolution, we find the smallest radial distance from the target’s center at which the periodic pattern along its circumference is barely resolvable, i.e. the values in a 'dark' region do not exceed the values in the 'bright' regions next to it [32]. We define the periodicity of the pattern at that radius to be the spatial resolution limit of the imaging system.
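The readout procedure can be sketched as follows (hypothetical helper names; the circular trace corresponds to the angle-vs-magnitude plots in Fig. 4):

```python
import numpy as np

def spoke_trace(image, center, radius_px, n_samples=720):
    """Sample the reconstructed Siemens star along a circle of the given pixel
    radius so the 36-spoke modulation can be inspected for resolvability
    (nearest-neighbor sampling keeps the sketch dependency-free)."""
    t = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    rows = np.round(center[0] + radius_px * np.sin(t)).astype(int)
    cols = np.round(center[1] + radius_px * np.cos(t)).astype(int)
    return image[rows, cols]

def periodicity_um(radius_um, n_spokes=36):
    """Line periodicity along the circumference at a given radius; the inner
    radius of 5.73 um gives ~1.0 um and the outer radius of 22.92 um gives ~4.0 um."""
    return 2.0 * np.pi * radius_um / n_spokes
```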

Fig. 4 shows the Siemens star images before FPM reconstruction and the reconstructed Siemens star measurements at 3 different FOV locations each for amplitude and phase imaging. The observed resolution is 1.10 μm for amplitude imaging at the center and at 0.58 mm away from the center, and for phase imaging at the center and at 0.49 mm away from the center. The amplitude resolution at 1.29 mm away from the center and the phase resolution at 1.31 mm away from the center are slightly worse, at 1.20 μm periodicity. This may be due to the increased optical aberration in off-center regions of the system’s FOV and EPRY’s less-than-adequate characterization of the system’s pupil function in these regions. The observed resolution closely matches the theoretical resolution of λ/NAsys = 1.08 μm periodicity. Considering that the theoretical resolution of a 0.1 NA objective lens under coherent illumination is λ/NAobj = 4.57 μm, we achieve about 4 times the resolution improvement with FPM over the system’s entire FOV.


Fig. 4 Resolution measurement for amplitude and phase imaging of our laser FPM setup. Both amplitude and phase Siemens star targets are imaged at 3 different locations in the system’s total FOV of 2.7 mm by 1.5 mm for a thorough quantification of the system’s resolution. Normal illumination raw images show the captured image of the corresponding Siemens star target under illumination by the center mirror element before FP reconstruction. The red circular trace in each reconstructed target corresponds to the smallest circumference at which the spoke pattern is barely observable, as shown in the angle (degree) vs. magnitude plot next to each image. The observed resolution of 1.2 μm at 1.29 mm and 1.31 mm away from the center and 1.1 μm elsewhere matches closely with the theoretical resolution of λ/NAsys = 1.08 μm periodicity.


3.3. Quantitative phase

We use a microscope slide with microspheres to demonstrate the quantitative phase imaging capability of our laser FPM. The sample consists of 4.5-μm-diameter polystyrene beads from Polysciences, Inc. (index of refraction ns = 1.61 at 457 nm) immersed in oil (index of refraction no = 1.53 at 457 nm). The beads’ diameter and the indices of refraction of the beads and oil are carefully chosen so that they satisfy the requirement for successful quantitative phase imaging presented in Ref. [2]: the maximum phase gradient generated by the microbead should not exceed the maximum resolvable spatial frequency of our FPM system, since a complex field exp(jk · r) in the spatial domain is directly mapped to k in the Fourier domain, where r and k are the two dimensional spatial and frequency coordinates, respectively. This is independent of the sensitivity of phase gradient detection, which is proportional to the SNR of the captured images. Our setup, optimized for high SNR in all the bright-field and dark-field images, ensures that our system is sensitive.
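A small sketch of that sampling criterion, assuming a one-dimensional phase trace of the bead as input (how the profile is sampled is not specified in the text):

```python
import numpy as np

def max_local_frequency(phase_rad, dx_um):
    """Highest local spatial frequency (cycles/um) implied by a 1D phase
    profile; for the field to be representable by the reconstruction, this
    should stay below NA_sys / lambda."""
    return np.max(np.abs(np.gradient(phase_rad, dx_um))) / (2.0 * np.pi)

na_sys, lam_um = 0.425, 0.457
print(na_sys / lam_um)   # ~0.93 cycles/um, the limit implied by the system parameters above
```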

In Fig. 5, we show the importance of including the additional DPC-deconvolved phase update step in our algorithm. Fig. 5(b) shows the FP reconstructed phase image before and after the modification of the algorithm. Background noise in the original FP reconstruction is mainly due to the coherence artifacts, seen in Fig. 5(a), originating from the mostly unscattered laser beam interfering with itself in the optical setup. The noise contributes to the final reconstructed Fourier spectrum as a slowly varying phase signal. By incorporating the DPC-deconvolved quantitative phase image in the update scheme, we are able to remove the coherence noise. The DPC-deconvolved phase image is free from the influence of the slowly varying speckle, as shown in Fig. 5(b).


Fig. 5 Images of 4.5-μm-diameter microspheres sample. (a) Within the bright-field illumination angular region (NAillum < NAobj) which corresponds to the center 7 mirror elements in Fig. 3, the captured images show fluctuating backgrounds due to coherence artifacts from imperfections in the optical path. (b) Without an additional DPC-deconvolved phase update, the reconstructed phase image shows an uneven background. After the modification, the reconstructed phase is free from the background noise and the resulting phase is also quantitative.


In Fig. 5(b), one representative microbead, indicated by the dashed line in the original FP and updated FP reconstructed phase images, is compared with the theoretical bead profile. The reconstructed phase of the bead is unwrapped and converted into bead thickness using the given refractive indices of the oil and beads. The measured bead diameter is consistent between the original FP and updated FP algorithms. The updated FP algorithm measures the bead diameter to be 4.15 μm, which is within 10% of the theoretical value of 4.5 μm.
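The conversion itself is a one-liner; a minimal sketch with the indices and wavelength stated above:

```python
import numpy as np

lam_um, n_bead, n_oil = 0.457, 1.61, 1.53

def thickness_um(unwrapped_phase_rad):
    """t = phi * lambda / (2*pi*(n_bead - n_oil)): unwrapped phase to bead thickness."""
    return unwrapped_phase_rad * lam_um / (2.0 * np.pi * (n_bead - n_oil))

# Sanity check: the peak phase of a 4.5-um bead (~4.95 rad) maps back to its diameter.
peak_phase = 2.0 * np.pi * (n_bead - n_oil) * 4.5 / lam_um
print(thickness_um(peak_phase))   # 4.5 (um)
```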

3.4. Imaging of biological samples

We first image a blood smear sample prepared on a microscope slide and stained with the Hema 3 stain set (Wright-Giemsa). We notice coherence artifacts in the background of the low-resolution captured images. Without the DPC step, the phase image suffers from uneven background signals due to these artifacts, as shown in Fig. 6. After the DPC update, the phase image clears up significantly.


Fig. 6 Blood smear images, before and after the modification of the FP algorithm. Without the additional DPC-deconvolved phase update in the reconstruction process, the resulting phase of the sample shows an uneven background signal that also influences the cells’ phase amplitude. After the modification, the background is uniform and the red blood cells show similar phase values. Note that the modification has little or no effect on the amplitude image.


To demonstrate the wide-field performance of our laser FPM, we capture a full FOV image of an H&E stained histology sample, as shown in Fig. 7. We first segment the entire region into small square tiles (370 μm by 370 μm) to account for the spatially varying aberration of our imaging system. Then we apply our modified FP algorithm to each tile to reconstruct a high resolution image of the entire FOV. In the end, we are able to correct for the spatially varying aberration and obtain a wide-field and high-resolution image, just as in the original FPM with LEDs as the illumination source, but at a much higher capturing speed.
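The tiling bookkeeping can be sketched as follows (tile size, pixel pitch, and magnification from the text; the per-tile reconstruction and the final stitching are the steps described above):

```python
def iter_tiles(raw_stack, tile_um=370.0, pixel_um=6.5, magnification=4.0):
    """Yield the per-tile sub-stacks used for spatially varying aberration
    correction: ~370-um-square tiles in sample coordinates, i.e. about 228
    sensor pixels per side at 6.5-um pixels and 4x magnification. `raw_stack`
    is assumed to be an array of shape (n_angles, rows, cols)."""
    tile_px = int(round(tile_um * magnification / pixel_um))
    n_angles, n_rows, n_cols = raw_stack.shape
    for r in range(0, n_rows - tile_px + 1, tile_px):
        for c in range(0, n_cols - tile_px + 1, tile_px):
            yield (r, c), raw_stack[:, r:r + tile_px, c:c + tile_px]
```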


Fig. 7 Wide FOV histology image. (a)–(c) show FP reconstructed amplitudes of the sub-regions in the full FOV image in (d). Simultaneously with the sample field reconstruction, the FP algorithm also characterizes the amplitude and phase of the pupil function of each sub-region to reconstruct aberration-free high-resolution images.


Fig. 8 (a–c) DPC transfer functions for 3 pairs of asymmetrical illumination patterns. (d) The spatial frequency extent covered by the DPC-deconvolved phase image. The large circle in each image indicates the 2NA spatial frequency boundary, and the small red circle in (d) indicates the NA boundary.


4. Discussion

We demonstrate that an FPM setup using a collimated laser beam as the illumination source is capable of providing both a wide-FOV and high-resolution image at a high capturing speed. Its most obvious benefits over a conventional mechanically scanned wide-FOV microscope include the lack of a moving stage and faster capturing speed. Although the higher temporal and spatial coherence of a laser compared to that of an LED or other incoherent light sources leads to coherence artifacts, appropriately including the additional constraint given by the DPC-deconvolved phase in our FP algorithm mitigates the negative influence on the final reconstructed image. Although using a weak object approximation for a part of the Fourier spectrum in the algorithm may seem counter-intuitive, the information is still quantitative [33], and the fact that we still update the same region with captured images, as in a standard FP procedure, allows the reconstruction to converge successfully. However, we acknowledge that this modification is not a complete solution to the coherence artifacts because 1) DPC deconvolution assumes a weakly absorbing and weakly scattering sample, so the phase may be inaccurate for other samples; and 2) it relies on averaging images to reduce the influence of the coherence artifacts instead of directly removing them. To significantly reduce the artifacts that originate from out-of-focal-plane imperfections in the optical path, all the glass surfaces in the optical system would need to have anti-reflective coatings suitable for the laser’s wavelength and be free from any defects.

In our spatial resolution measurement, we observe that our system does not perform as well near the edge of the FOV, as seen by some distortions in the Siemens star target images in Fig. 4. We attribute this to inadequate pupil function characterization by EPRY and the more stringent requirements of Siemens star targets compared to the USAF target previously used in Ref. [11]. Better pupil characterization via a global-minimum search method, similar to the convex optimization approach of Ref. [16], would make the system’s performance more uniform throughout its FOV.

The use of 3D-printed mirror elements allows for an intuitive optical setup, but it is not as flexible as other illumination schemes when a different objective lens is used and a different amount of NA gain is desired in the imaging system. An entirely new array may be required to satisfy the desired resolution gain and the appropriate overlap in the Fourier domain between captured images. A modular or adjustable design of the array would make the system more flexible in different imaging scenarios.

The use of Galvo mirrors to direct the collimated laser beam allows for efficient usage of the laser power and prevents the illumination source from being the bottleneck of FPM’s capturing speed. With faster camera sensors that can reach thousands of frames per second readily available on the market today and an easily adjustable illumination arrangement, imaging faster dynamic samples such as live microscopic organisms at various magnifications with FPM will become possible. Moreover, the proposed setup can accommodate lasers of any wavelength compatible with the optical elements and the sensor by simply coupling the laser output to the collimation optics. Hyperspectral imaging can be realized by utilizing a multispectral laser or multiple lasers as the illumination source, allowing the spectral signatures of various proteins and organelles in biological samples to be studied. With the LEDs employed in other FPM setups, spectral imaging would be limited by the small selection of available LED wavelengths. Our setup, in principle, can accommodate a wide range of laser wavelengths from deep UV to far IR as long as the appropriate lenses, mirrors, and camera are used. The fast frame rate and the variety of available laser wavelengths make FPM attractive for wider usage.

5. Appendix: phase transfer function calculation

We are interested in a thin sample ψ(r) with absorption and phase distributions μ(r) and ϕ(r), respectively: ψ(r) = exp(−μ(r) + jϕ(r)), where r is a two dimensional position vector. The sample is illuminated by a plane wave q(k) with wave vector k and intensity S(k): q(k) = √S(k) exp(jk·r). To form DPC images, we first generate Ii(r), which combines different obliquely illuminated images and is equivalent to capturing an image under a desired asymmetric illumination pattern i, as done in Eq. (5). Under the weak object approximation and following the derivations in Ref. [33], the image can be expressed in the 2 dimensional k-space (i.e. Fourier domain) as:

$$\tilde{I}_i(\mathbf{k}) = B_i\,\delta(\mathbf{k}) + H_{\mathrm{abs},i}(\mathbf{k})\,\tilde{\mu}(\mathbf{k}) + H_{\mathrm{ph},i}(\mathbf{k})\,\tilde{\phi}(\mathbf{k}),$$
where μ̃(k) and ϕ̃(k) are the Fourier transforms of μ(r) and ϕ(r), respectively, and
$$B_i = \sum_{\mathbf{k}'\in\mathrm{pattern}_i} S(\mathbf{k}')\,|P(\mathbf{k}')|^2,$$
$$H_{\mathrm{abs},i}(\mathbf{k}) = \sum_{\mathbf{k}'\in\mathrm{pattern}_i}\left[S(\mathbf{k}')\,P^*(\mathbf{k}')\,P(\mathbf{k}+\mathbf{k}') + S(\mathbf{k}')\,P^*(\mathbf{k}')\,P(\mathbf{k}-\mathbf{k}')\right],$$
$$H_{\mathrm{ph},i}(\mathbf{k}) = j\sum_{\mathbf{k}'\in\mathrm{pattern}_i}\left[S(\mathbf{k}')\,P^*(\mathbf{k}')\,P(\mathbf{k}+\mathbf{k}') - S(\mathbf{k}')\,P^*(\mathbf{k}')\,P(\mathbf{k}-\mathbf{k}')\right],$$
where P(k′) is the pupil function of the objective lens. From these equations, it is clear that Habs,i(k) and Hph,i(k) receive no contribution from source points k′ that lie outside the boundary of P(k′). This means that the asymmetric illumination patterns may be limited to illumination angles within the numerical aperture of the objective lens, which correspond to the center 7 mirror elements in the mirror array shown in Fig. 3.
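A direct sketch of these sums for one half-pupil source pattern (Python/NumPy; the shared frequency grid, the np.fft index layout, and the discrete handling of the few source points are our assumptions):

```python
import numpy as np

def dpc_transfer_functions(S_pattern, P):
    """Compute H_abs,i and H_ph,i for one source pattern following the sums
    above. Both arrays live on the same frequency grid in np.fft layout
    (zero frequency at index (0, 0)), so an index offset encodes k'.
    S_pattern holds the source intensities S(k') of the bright-field
    illumination angles in the pattern; P is the pupil function."""
    H_abs = np.zeros_like(P, dtype=complex)
    H_ph = np.zeros_like(P, dtype=complex)
    for (i, j) in np.argwhere(np.abs(S_pattern) > 0):   # discrete source points k'
        S, Pk = S_pattern[i, j], P[i, j]
        P_plus = np.roll(P, (-i, -j), axis=(0, 1))      # P(k + k')
        P_minus = np.roll(P, (i, j), axis=(0, 1))       # P(k - k')
        H_abs += S * np.conj(Pk) * (P_plus + P_minus)
        H_ph += 1j * S * np.conj(Pk) * (P_plus - P_minus)
    return H_abs, H_ph
```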

A DPC image can be formed by simple arithmetic operations on images of a sample under a pair of asymmetric illumination patterns which, for example, can be the top and bottom half of the 7 mirror elements. Setting the images captured by the top half of the illuminator and the bottom half as Itop(r) and Ibot(r), respectively, a DPC image is formed by:

$$I_{\mathrm{DPC},1}(\mathbf{r}) = \frac{I_{\mathrm{top}}(\mathbf{r}) - I_{\mathrm{bot}}(\mathbf{r})}{I_{\mathrm{top}}(\mathbf{r}) + I_{\mathrm{bot}}(\mathbf{r})}$$

For the case where the pupil is a circular function with no phase, Habs is canceled in IDPC’s numerator in the Fourier domain [33]. Approximating the denominator as Btop + Bbot for a weak object, the IDPC in the Fourier domain is:

$$\tilde{I}_{\mathrm{DPC},1}(\mathbf{k}) = \frac{H_{\mathrm{ph,top}}(\mathbf{k}) - H_{\mathrm{ph,bot}}(\mathbf{k})}{B_{\mathrm{top}} + B_{\mathrm{bot}}}\,\tilde{\phi}(\mathbf{k}) = H_{\mathrm{DPC},1}(\mathbf{k})\,\tilde{\phi}(\mathbf{k}),$$
where HDPC,1(k) is the DPC transfer function for this asymmetric illumination pattern. Assuming a flat pupil function and equal illumination intensity for different angles, we plot this transfer function here along with 2 other transfer functions obtained with different asymmetric illumination patterns in Fig. 8. We use the DPC images and associated transfer functions from the 3 asymmetric illumination pairs, ĨDPC,n(k) and HDPC,n(k) respectively, to obtain a quantitative phase image via Tikhonov regularization and inverse Fourier transformation:
$$\phi_{\mathrm{tik}}(\mathbf{r}) = \mathcal{F}^{-1}\left\{\frac{\sum_n H_{\mathrm{DPC},n}(\mathbf{k})\,\tilde{I}_{\mathrm{DPC},n}(\mathbf{k})}{\sum_n |H_{\mathrm{DPC},n}(\mathbf{k})|^2 + \alpha}\right\},$$
where n ∈ 1, 2, 3, and α is a regularization parameter to prevent division by zero.
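A minimal sketch of this regularized inversion (Python/NumPy; input arrays are assumed to share one frequency grid, and the conjugate on H is immaterial here since the DPC transfer functions are real for a flat, symmetric pupil):

```python
import numpy as np

def tikhonov_phase(I_dpc_list, H_dpc_list, alpha=1e-3):
    """Combine the three DPC images and their transfer functions into one
    quantitative phase map via Tikhonov-regularized deconvolution."""
    num = np.zeros_like(H_dpc_list[0], dtype=complex)
    den = np.full(H_dpc_list[0].shape, alpha, dtype=float)
    for I_dpc, H in zip(I_dpc_list, H_dpc_list):
        num += np.conj(H) * np.fft.fft2(I_dpc)   # accumulate H_DPC,n * I~_DPC,n
        den += np.abs(H) ** 2                    # accumulate |H_DPC,n|^2 (+ alpha)
    return np.real(np.fft.ifft2(num / den))
```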

Funding

National Institutes of Health (NIH) Agency Award: R01 AI096226; Caltech Innovation Initiative (CII): 25570015.

Acknowledgments

We thank Daniel Martin for fabricating the Siemens star targets, and Mooseok Jang and Haowen Ruan for helpful discussions.

References and links

1. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013). [CrossRef]  

2. X. Ou, R. Horstmeyer, C. Yang, and G. Zheng, “Quantitative phase imaging via Fourier ptychographic microscopy,” Opt. Lett. 38(22), 4845–4848 (2013). [CrossRef]   [PubMed]  

3. R. Hegerl and W. Hoppe, “Dynamic theory of crystal structure analysis by electron diffraction in the inhomogeneous primary radiation wave field,” Ber. Bunsenges. Phys. Chem. 74(11), 1148–1154 (1970). [CrossRef]  

4. X. Ou, R. Horstmeyer, G. Zheng, and C. Yang, “High numerical aperture Fourier ptychography: principle, implementation and characterization,” Opt. Express 23(3), 3472–3491 (2015). [CrossRef]   [PubMed]  

5. J. Chung, X. Ou, R. P. Kulkarni, and C. Yang, “Counting White Blood Cells from a Blood Smear Using Fourier Ptychographic Microscopy,” PloS ONE 10(7), e0133489 (2015). [CrossRef]   [PubMed]  

6. S. Dong, K. Guo, P. Nanda, R. Shiradkar, and G. Zheng, “FPscope: a field-portable high-resolution microscope using a cellphone lens,” Biomed. Opt. Express 5(10), 3305–3310 (2014). [CrossRef]   [PubMed]  

7. K. Guo, Z. Bian, S. Dong, P. Nanda, Y. M. Wang, and G. Zheng, “Microscopy illumination engineering using a low-cost liquid crystal display,” Biomed. Opt. Express 6(2), 574–579 (2015). [CrossRef]   [PubMed]  

8. Z. F. Phillips, M. V. D’Ambrosio, L. Tian, J. J. Rulison, H. S. Patel, N. Sandras, A. V. Gande, N. A. Switz, D. A. Fletcher, and L. Waller, “Multi-Contrast Imaging and Digital Refocusing on a Mobile Microscope with a Domed LED Array,” PloS ONE 10(5), e0124938 (2015). [CrossRef]   [PubMed]  

9. L. Tian, Z. Liu, L.-H. Yeh, M. Chen, J. Zhong, and L. Waller, “Computational illumination for high-speed in vitro Fourier ptychographic microscopy,” Optica 2(10), 904–911 (2015). [CrossRef]  

10. A. M. Maiden and J. M. Rodenburg, “An improved ptychographical phase retrieval algorithm for diffractive imaging,” Ultramicroscopy 109(10), 1256–1262 (2009). [CrossRef]   [PubMed]  

11. X. Ou, G. Zheng, and C. Yang, “Embedded pupil function recovery for Fourier ptychographic microscopy,” Opt. Express 22(5), 4960–4972 (2014). [CrossRef]   [PubMed]  

12. A. Williams, J. Chung, X. Ou, G. Zheng, S. Rawal, Z. Ao, R. Datar, C. Yang, and R. Cote, “Fourier ptychographic microscopy for filtration-based circulating tumor cell enumeration and analysis,” J. Biomed. Opt. 19(6), 066007 (2014). [CrossRef]   [PubMed]  

13. J. Chung, J. Kim, X. Ou, R. Horstmeyer, and C. Yang, “Wide field-of-view fluorescence image deconvolution with aberration-estimation from Fourier ptychography,” Biomed. Opt. Express 7(2), 352–368 (2016). [CrossRef]   [PubMed]  

14. S. Dong, P. Nanda, R. Shiradkar, K. Guo, and G. Zheng, “High-resolution fluorescence imaging via pattern-illuminated Fourier ptychography,” Opt. Express 22(17), 20856–20870 (2014). [CrossRef]   [PubMed]  

15. L. Bian, J. Suo, G. Zheng, K. Guo, F. Chen, and Q. Dai, “Fourier ptychographic reconstruction using Wirtinger flow optimization,” Opt. Express 23(4), 4856–4866 (2015). [CrossRef]   [PubMed]  

16. R. Horstmeyer, R. Y. Chen, X. Ou, B. Ames, J. A. Tropp, and C. Yang, “Solving ptychography with a convex relaxation,” New J. Phys. 17(5), 053044 (2015). [CrossRef]   [PubMed]  

17. Z. Bian, S. Dong, and G. Zheng, “Adaptive system correction for robust Fourier ptychographic imaging,” Opt. Express 21(26), 32400–32410 (2013). [CrossRef]  

18. S. Dong, Z. Bian, R. Shiradkar, and G. Zheng, “Sparsely sampled Fourier ptychography,” Opt. Express 22(5), 5455–5464 (2014). [CrossRef]   [PubMed]  

19. X. Ou, J. Chung, R. Horstmeyer, and C. Yang, “Aperture scanning Fourier ptychographic microscopy,” Biomed. Opt. Express 7(8), 3140–3150 (2016). [CrossRef]   [PubMed]  

20. S. Dong, R. Horstmeyer, R. Shiradkar, K. Guo, X. Ou, Z. Bian, H. Xin, and G. Zheng, “Aperture-scanning Fourier ptychography for 3D refocusing and super-resolution macroscopic imaging,” Opt. Express 22(11), 13586–13599 (2014). [CrossRef]   [PubMed]  

21. R. Horstmeyer, X. Ou, J. Chung, G. Zheng, and C. Yang, “Overlapped Fourier coding for optical aberration removal,” Opt. Express 22(20), 24062–24080 (2014). [CrossRef]   [PubMed]  

22. R. Horstmeyer, J. Chung, X. Ou, G. Zheng, and C. Yang, “Diffraction tomography with Fourier ptychography,” Optica 3(8), 827–835 (2016). [CrossRef]  

23. L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica 2(2), 104–111 (2015). [CrossRef]  

24. K. Guo, S. Dong, P. Nanda, and G. Zheng, “Optimization of sampling pattern and the design of Fourier ptychographic illuminator,” Opt. Express 23(5), 6171–6180 (2015). [CrossRef]   [PubMed]  

25. L. Bian, J. Suo, G. Situ, G. Zheng, F. Chen, and Q. Dai, “Content adaptive illumination for Fourier ptychography,” Opt. Lett. 39(23), 6648–6651 (2014). [CrossRef]  

26. L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for Fourier Ptychography with an LED array microscope,” Biomed. Opt. Express 5(7), 2376–2389 (2014). [CrossRef]   [PubMed]  

27. S. Dong, R. Shiradkar, P. Nanda, and G. Zheng, “Spectrum multiplexing and coherent-state decomposition in Fourier ptychographic imaging,” Biomed. Opt. Express 5(6), 1757–1767 (2014). [CrossRef]   [PubMed]  

28. S. Dong, K. Guo, S. Jiang, and G. Zheng, “Recovering higher dimensional image data using multiplexed structured illumination,” Opt. Express 23(23), 30393–30398 (2015). [CrossRef]   [PubMed]  

29. C. Kuang, Y. Ma, R. Zhou, J. Lee, G. Barbastathis, R. R. Dasari, Z. Yaqoob, and P. T. C. So, “Digital micromirror device-based laser-illumination Fourier ptychographic microscopy,” Opt. Express 23(21), 26999–27010 (2015). [CrossRef]   [PubMed]  

30. F. Nguyen, B. Terao, and J. Laski, “Realizing LED Illumination Lighting Applications,” Proc. SPIE 5941, 594105 (2005). [CrossRef]  

31. P. Sidorenko and O. Cohen, “Single-shot ptychography,” Optica 3(1), 9–14 (2016). [CrossRef]  

32. R. Horstmeyer, R. Heintzmann, G. Popescu, L. Waller, and C. Yang, “Standardizing the resolution claims for coherent microscopy,” Nat. Photonics 10(2), 68–71 (2016). [CrossRef]  

33. L. Tian and L. Waller, “Quantitative differential phase contrast imaging in an LED array microscope,” Opt. Express 23(9), 11394–11403 (2015). [CrossRef]   [PubMed]  

34. M. Born and E. Wolf, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light (Cambridge University Press, 1999) [CrossRef]  

35. N. Streibl, “Three-dimensional imaging by a microscope,” J. Opt. Soc. Am. A 2(2), 121–127 (1985). [CrossRef]  

36. P. Thibault, M. Dierolf, O. Bunk, A. Menzel, and F. Pfeiffer, “Probe retrieval in ptychographic coherent diffractive imaging,” Ultramicroscopy 109(4), 338–343 (2009). [CrossRef]   [PubMed]  

37. R. Heintzmann, “Estimating missing information by maximum likelihood deconvolution,” Micron 38, 136–144 (2007). [CrossRef]  
