
FPM-WSI: Fourier ptychographic whole slide imaging via feature-domain backdiffraction


Abstract

Fourier ptychographic microscopy (FPM) theoretically provides a solution to the trade-off between spatial resolution and field of view (FOV), and has promising prospects in digital pathology. However, block-wise reconstruction followed by stitching has become an unavoidable procedure for large-FOV reconstruction due to vignetting artifacts. This introduces digital stitching artifacts, as the existing image-domain optimization algorithms are highly sensitive to systematic errors. Such obstacles significantly impede the advancement and practical implementation of FPM, explaining why, despite a decade of development, FPM has not gained widespread recognition in the field of biomedicine. We report a feature-domain FPM (FD-FPM) based on a structure-aware forward model to realize stitching-free, full-FOV reconstruction. The loss function is uniquely formulated in the feature domain of images, which bypasses the troublesome vignetting effect and algorithmic vulnerability via feature-domain backdiffraction. Through massive simulations and experiments, we show that FD-FPM effectively eliminates vignetting artifacts in full-FOV reconstruction, and still achieves impressive reconstructions despite the presence of various systematic errors. We also found that it has great potential in recovering data with a lower spectral overlapping rate, and in realizing digital refocusing without a prior defocus distance. With FD-FPM, we achieved full-color and high-throughput imaging (4.7 mm diameter FOV, 336 nm resolution in the blue channel) free of blocking-and-stitching procedures on a self-developed Fourier ptychographic microscopy whole slide imaging platform. The reported FD-FPM demonstrates the value of FPM under various experimental circumstances, and offers physical insights useful for developing models for other computational imaging techniques. The reported platform delivers high-quality, high-speed imaging at low cost, and could find applications in many fields of biomedical research, as well as in clinical applications.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. INTRODUCTION

For decades, conventional optical microscopy has been the gold standard for disease detection and grading in pathological analysis, with pathologists assessing several tissue slides to attain precise observation of cellular features and growth patterns. Setting aside subjective factors, the accuracy of diagnosis depends profoundly on the throughput of the imaging system. Hence, high-throughput microscopic imaging is of great significance for the study of pathological mechanisms and the effective therapy of diseases, and it is also intensively explored in applications such as hematology [1], immunohistochemistry, and neuroanatomy [2–4].

The throughput of an optical imaging system is fundamentally determined by its space-bandwidth product (SBP), defined as the number of resolvable pixels in the imaging field of view (FOV). However, the achievable SBP is in essence restricted by the scale-dependent geometric aberrations of the optical elements, leading to a trade-off between image resolution and FOV. A natural solution, from the standpoint of optical design, is to correct the aberrations introduced by large-scale elements, but the resulting use of multiple lenses considerably escalates system volume and complexity. The demand for high-SBP microscopic systems in pathology and biomedicine has spurred the development and commercialization of whole slide imaging (WSI). Instead of manually examining glass slides through a microscope eyepiece, WSI digitizes the entire FOV of a histological or biological specimen at high resolution (HR) for pathologists, researchers, and clinicians to observe and analyze on a computer screen [5,6]. The workflow of existing WSI systems generally consists of two parts: first, a specialized high-precision scanner captures a series of HR images corresponding to different regions of the slide; second, professional software stitches these image segments into a full-FOV image of the slide. However, inevitable errors in mechanical scanning easily cause misalignment in the stitched image despite a sufficient overlapping rate (see Supplement 1, Note 1). Uneven illumination of the light source also leads to an uneven brightness distribution or even stripe-like stitching artifacts, which not only deteriorate the quality of the stitched images but also affect the quantitative analysis of downstream applications. For example, one study suggested that ignoring illumination correction resulted in a 35% increase in false and missed detections in yeast cell images [7]. Both conventional [7] and deep-learning [8] methods attempt to eliminate the artifacts through post-processing, without tackling the problem fundamentally.

Inspired by the concept of synthetic apertures [9,10], Fourier ptychographic microscopy (FPM) provided a brand-new perspective in the search for high-SBP microscope systems [11]. Under the multiple-angle illumination of an LED array, FPM acquires corresponding low-resolution (LR) images and stitches them in the Fourier domain to reconstruct an HR complex amplitude image of the sample. As the low-numerical-aperture (NA) objective used has an innately large FOV, FPM enables high-SBP imaging without mechanical scanning, and can thus bypass the artifacts caused by image stitching. In practical implementations, however, the presence of half-bright and half-dark vignetting images violates the forward model of FPM, and the quality of full-FOV reconstruction is severely degraded by wrinkle artifacts spreading from the center to the edge of the image. As the validity of the forward model is confined to a limited area of the FOV, block reconstruction [12,13] followed by stitching effectively addresses the vignetting artifacts when the entire FOV is segmented into smaller image patches. The requirements of plane-wave illumination [14] and coherence, along with the need to reduce the computational load, also make block processing almost compulsory. Unfortunately, existing studies [15,16] are all designed around the ptychographic iterative engine (PIE) and its variations, whose loss function is formulated in the image domain of the raw data. This makes FPM reconstruction particularly susceptible to systematic errors (e.g., deviations of illumination positions [17–19], unavoidable noise [15,20,21], and intensity fluctuations of LEDs [22,23]). The resulting color inconsistency of different image segments introduces extra digital stitching artifacts (as distinguished from mechanical ones) into the reconstructions. As no FPM scheme has been efficient enough to address the vignetting effect or to overcome the algorithmic vulnerability to this series of systematic errors, the majority of studies have focused merely on displaying sub-region reconstructions. Some exceptions [11,24–26] have reported impressive stitched full-FOV reconstructions, although the aforementioned flaws can still be faintly detected. The lack of high-quality full-FOV reconstruction fundamentally explains why the advancement and deployment of FPM in digital pathology faces numerous obstacles, and why FPM has not been widely accepted in biomedicine even after 10 years of development.

In this paper, we report a feature-domain FPM (FD-FPM) to realize direct, non-blocked full-FOV reconstruction. FD-FPM is distinct in principle from conventional FPM in three aspects. First, FD-FPM introduces a structure-aware forward model to describe the formation of raw images. Such a design is widely adopted in the field of pattern recognition for real-valued signals [27,28], and has shown remarkable virtues in separating particular information from images in the feature domain. Second, FD-FPM uniquely formulates the loss function of the FPM inverse problem in the feature domain of images, where the challenging vignetting effect and the sensitivity to systematic errors, both thorny to deal with in the image domain, can be tactfully bypassed. Pioneering work from our group [29] also briefly explored this phenomenon. Finally, FD-FPM performs a digital backdiffraction procedure with adaptive acceleration to complete the reconstruction of the complex amplitude and aberration compensation. In general, FD-FPM provides an elegant computational framework that simultaneously solves the vignetting effect physically and reduces the algorithmic vulnerability of image-domain optimization to system parameters mathematically. We have experimentally conducted reconstruction of the entire FOV to prove the effectiveness of FD-FPM in removing vignetting artifacts. Intensive simulations and experiments also verified that FD-FPM relaxes the requirement for precise system parameters, and still achieves prominent reconstructions in the presence of various systematic errors. Interestingly, FD-FPM also shows impressive performance in recovering data with a lower spectral overlapping rate and in completing digital refocusing without prior knowledge of the defocus distance.

We further applied FD-FPM to a self-developed Fourier ptychographic microscopy WSI (FPM-WSI) platform, and realized full-color, high-throughput imaging free of blocking-and-stitching procedures (4.7 mm diameter FOV, 336 nm resolution in the blue channel). The high-brightness LED source enables data acquisition within 4 s for a single slide. A $z$ axis driver and an $x {\text -} y$ axis electric displacement stage are equipped for autofocusing and for the automatic shifting of batched samples (four slides), respectively. On the software side, the platform provides optional colorization schemes, a precise autofocusing method, and a user-friendly operation interface. The proposed FD-FPM, together with the FPM-WSI instance, is expected to break the bottleneck that has long constrained the development of FPM, making FPM more broadly accepted and utilized in biomedical research and clinical applications. The brand-new, feature-domain model formulation of FD-FPM will also provide beneficial inspiration for other computational imaging techniques and for the broader scientific community concerned with optimization.

2. RESULTS

To examine the full-FOV reconstruction performance of FD-FPM, we ran this method on our WSI platform to reconstruct a pathology slide (human colorectal carcinoma section), as shown in Fig. 1(a). The programmable LED array was placed 70 mm above the slide to provide angle-varying illumination. We then utilized a Nikon ${4} \times /0.1\,{\rm NA}$ apochromatic objective lens and a 16-bit scientific complementary metal oxide semiconductor (sCMOS) camera (Hamamatsu C13440, 6.5 µm pixel size) to obtain raw images under R/G/B LED illumination. Figures 1(b1)–1(b3) show the first 25 raw images of the three color channels, influenced by the vignetting effect.


Fig. 1. Full-color FPM reconstructions of a pathology slide. (a) Human colorectal carcinoma section. (b1)–(b3) Raw data of the three color channels affected by the vignetting effect. (c1) Direct full-FOV reconstruction using FD-FPM. (c2) Non-blocked and stitched full-FOV reconstructions using conventional FPM (AS-EPRY). (d1), (e1) Zoomed-in images of two ROIs in (c1). (d2), (e2) and (d3), (e3) are the corresponding images captured by a color image sensor using a ${20}\times$ and a ${4}\times$ objective lens, respectively, for comparison. (f1) Sub-region with severe vignetting artifacts in the non-blocked reconstruction of AS-EPRY. (g1) Sub-region with color difference and stitching artifact in the stitched reconstruction of AS-EPRY. (f2), (g2) Sub-region images of the FD-FPM reconstruction corresponding to (f1), (g1).


Figure 1(c1) demonstrates the full-FOV color image of the slide, obtained by fusing the reconstructed results of the three color channels using FD-FPM. The camera sensor provides an imaging area of ${2048} \times {2048}$ pixels and yields a resultant FOV of $3.3 \times 3.3\;{{\rm mm}^2}$, corresponding to the black square marked in Fig. 1(a). Figures 1(d1) and 1(e1) provide zoomed-in views of two regions of interest (ROIs). Switching the light source to a halogen lamp, we also used a color charge-coupled device (CCD) camera (ImagingSource DFK 23U445) to capture images of the corresponding regions with a ${20} \times /0.4\,{\rm NA}$ and a ${4} \times /0.1\,{\rm NA}$ objective lens for comparison, as shown in Figs. 1(d2) and 1(e2) and Figs. 1(d3) and 1(e3). The large-FOV advantage of FPM without mechanical scanning is intuitively reflected by the contrast between the red and blue circles in Fig. 1(a). The maximum synthetic NA for this experimental setup reaches 0.68, set by the angle between the optical axis and the LED located at the outermost edge. Here, FD-FPM realizes a resolution improvement of approximately seven times compared with imaging through the 0.1 NA objective alone. As can be seen, the details of densely distributed cells are clearly identifiable in the FD-FPM reconstruction. The 0.4 NA imaging theoretically provides a similar spatial resolution, but presents poor display contrast. We reconstructed the same data using a conventional method (AS-EPRY: EPRY [30] combined with the adaptive step-size strategy [20]), and the results of direct reconstruction and stitched reconstruction are demonstrated in Fig. 1(c2). For the direct reconstruction, dramatic vignetting artifacts greatly corrupt the image quality. The artifacts extend inward from the edge of the FOV, and only a limited area marked by the yellow dashed circle remains unaffected. The comparison of the ROI in Figs. 1(f1) and 1(f2) suggests that the artifacts of conventional FPM severely hinder the observation of slide details, while FD-FPM completely removes the artifacts and provides a clean background. For the stitched reconstruction, we divided each full-FOV raw image into smaller image segments ($256 \times 256$ pixels) with an overlapping rate of 25%. Each segment was then independently processed by AS-EPRY, and finally stitched into the full-FOV reconstruction using the image processing software ImageJ (see Supplement 1, Note 2 for more details). The brightness of the central segment visibly surpasses that of the surrounding areas. Moreover, the entire image exhibits an obvious color difference even after white balance, leading to abrupt stitching artifacts, as shown for example in Fig. 1(g1). Here, we only performed simple thresholding on the noise of the raw data, following the denoising step in [31], and the above-mentioned defects can be attributed to insufficient robustness against other systematic errors. In contrast, FD-FPM realizes a uniformly distributed reconstruction without any error-correction procedures, as shown in Fig. 1(g2). We have uploaded the gigapixel image of the FD-FPM reconstruction to an open-source platform (https://www.gigapan.com/gigapans/233966), and readers may consult it for a comprehensive comparison.


Fig. 2. FD-FPM procedures and experimental setup. (a) Overall architecture of the FPM-WSI platform, generally consisting of the microscopic imaging system, the automatic control system, and a host computer; (b1) $19 \times 19$ programmable LED array for sample illumination; (b2) packaged appearance of the LED array with the central LED lit; (b3) $z$ axis driver holding the objective lens for autofocusing; (b4) $x {\text -} y$ axis displacement stage for mechanical movement. (c) Flowchart of FD-FPM involving six steps. Step 1: generation of predicted images based on current estimates. Step 2: feature extraction for the predicted images and corresponding observations. Step 3: calculation of the feature-domain error between model predictions and observations. Step 4: error backdiffraction to yield the complex gradient. Step 5: management by an optimizer. Step 6: update of system parameters. (d), (e) Statistical analysis of pixel intensity distributions for the images in (c), in the image domain and the feature domain.


Given the above, FD-FPM realizes direct, non-blocked reconstruction of the entire FOV without vignetting artifacts. Since the reconstruction no longer depends on a blocking-and-stitching procedure, the color difference between image segments and the consequent stitching artifacts are also avoided. In fact, the various problems in block reconstruction are fundamentally caused by the mismatch of the image-domain forward model. Based on the structure-aware forward model and feature-domain optimization, FD-FPM efficiently addresses the algorithmic sensitivity to many common systematic errors, which is demonstrated in detail in Section 4.

3. MATERIAL AND METHODS

A. FPM-WSI Platform

Figure 2(a) shows the system integration of our high-throughput automatic WSI platform, and Figs. 2(b1)–2(b4) present details of the components marked in Fig. 2(a). Visualization 1 gives an overall view of the platform. The platform can generally be divided into four parts: the illumination source, the automatic control system, the main body of the microscope imaging system, and a host computer.

The illumination source for FPM should meet the basic requirements of high brightness and a high refresh rate, in order to reduce the data acquisition time. Accordingly, we designed a programmable LED array containing ${19} \times {19}$ surface-mounted full-color LEDs [Fig. 2(b1)], with a distance of 4 mm between two adjacent LEDs (refer to Section 4.A for the selection of LED parameters). The central wavelengths of the three color channels are 631.23 nm (red), 538.86 nm (green), and 456.70 nm (blue). In practical use with a 16-bit sCMOS camera, the exposure time for both brightfield and darkfield images can be kept at 2 ms. The frame rate of our camera is 100 fps, resulting in an acquisition time of 10 ms for each raw image and less than 4 s in total for a single slide (361 images × 10 ms ≈ 3.6 s; see Visualization 2). Notably, as shown in Fig. 2(b2), the LED array is packaged into an opaque sealed enclosure, with only the side facing the slide exposed to the surroundings. Together with the high-brightness illumination, the majority of stray light can be suppressed, so the platform need not operate under strict darkroom conditions. Compared with our previous work on a monochromatic hemispherical illuminator [32], this light source further improves acquisition efficiency, and its flat-panel structure also possesses significant advantages for standardization and pipeline production.

The automatic control system consists of a $z$ axis driver [Fig. 2(b3)] and an $x {\text -} y$ axis electric displacement stage [Fig. 2(b4)]; its operation can be seen in Visualization 3. The $z$ axis driver controls the mechanical movement of the objective lens for autofocusing. The $x {\text -} y$ axis electric displacement stage enables precise positioning and automatic shifting between a batch of four slides; accordingly, we customized a rectangular aluminum alloy plate embedded with four slide slots and fixed it on the upper surface of the displacement stage.

The main body of the microscope imaging system was adapted from a trinocular inverted microscope, whose optical path is shown in Supplement 1, Note 3. The trinocular design supports wide-angle observation through the eyepiece. In addition, the system is highly flexible and extensible: a built-in halogen light source allows switching between the FPM imaging mode and the regular brightfield imaging mode, and other microscopy techniques, such as polarization imaging and fluorescence imaging, can also be implemented, since a polarizer and an ultraviolet lamp can optionally be equipped.

The host computer (NVIDIA RTX 3090, 72 GB RAM) is mainly responsible for data storage and processing, control of the automatic devices, and user access. Here, we would like to highlight three characteristics of the software. First, instead of the widely used focus-map-based methods [33], the platform realizes autofocusing with a hill-climbing algorithm in which two symmetric LED units with different wavelengths light up and the degree of focus is determined by evaluating the spectrum energy (see Supplement 1, Note 4, and the sketch below). Second, the platform additionally incorporates a state-of-the-art colorization method named color-transfer filtering FPM (CFFPM) [34] as an alternative to the R/G/B fusion scheme, which sacrifices minimal precision, imperceptible to human vision, while tripling the acquisition efficiency. Third, all software modules of the platform have been integrated into MATLAB 2022, and we also designed a user-friendly operation interface (see Supplement 1, Note 3) to facilitate the implementation of a complete workflow.
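As a schematic of that hill-climbing loop, consider the sketch below. It is our own illustration, not the platform's MATLAB module: `capture` and `move_z` are hypothetical placeholders for the camera grab and the objective-driver command, the default step sizes are arbitrary, and `spectrum_energy` is a simplified single-image stand-in for the two-wavelength spectrum-energy metric detailed in Supplement 1, Note 4.

```python
import numpy as np

def spectrum_energy(img):
    """Simplified focus metric: high-frequency energy of the image spectrum
    (the low-frequency center of the shifted spectrum is zeroed out)."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = spec.shape
    spec[h // 2 - h // 8:h // 2 + h // 8, w // 2 - w // 8:w // 2 + w // 8] = 0
    return spec.sum()

def hill_climb_autofocus(capture, move_z, step=20e-6, shrink=0.5,
                         tol=1e-6, max_moves=200):
    """Climb along z while the focus metric improves; on each overshoot,
    back up, reverse direction, and shrink the step until it falls below tol.
    capture(): grabs an image; move_z(dz): hypothetical motor command."""
    best = spectrum_energy(capture())
    direction = +1
    for _ in range(max_moves):
        if abs(step) <= tol:
            break
        move_z(direction * step)
        score = spectrum_energy(capture())
        if score > best:
            best = score                                  # still climbing
        else:
            move_z(-direction * step)                     # overshoot: back up,
            direction, step = -direction, step * shrink   # reverse and refine
    return best
```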

B. Feature-Domain FPM

According to the basic principles of FPM, illuminating an object with a tilted plane wave is equivalent to shifting its Fourier spectrum transversely towards the corresponding illumination direction. The shifted Fourier spectrum is then downsampled by the pupil function, and inverse Fourier transformed to form a series of LR intensity observations on the camera sensor. This process can be described as

$${\textbf{I}_n} = {\left| {{\textbf{F}^\dagger}\textbf{P}{\textbf{M}_n}\textbf{FU}} \right|^2} + {\boldsymbol \epsilon},$$
where $\textbf{U}$ is the complex amplitude of the sample to be reconstructed, and ${\textbf{M}_n}$ is the selection matrix for the $n$-th LED illumination among a total of $N$ LEDs, with $n \in \{1,\;2,\;3, \ldots, N\}$. $\textbf{F}$ and ${\textbf{F}^\dagger}$ respectively denote the Fourier transform and its inverse. $\textbf{P}$ represents the pupil function of the imaging system, which is normally taken as a circular aperture of finite size. ${\boldsymbol \epsilon}$ denotes the image-domain noise signal.
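For concreteness, Eq. (1) maps onto a few lines of array code. The following is a minimal sketch (our own illustration, not the authors' implementation), assuming the common FPM convention in which ${\textbf{M}_n}$ selects a pupil-sized sub-band of the HR spectrum centered at pixel coordinates set by the $n$-th illumination angle, and ignoring the noise term:

```python
import numpy as np

def fpm_forward(obj_hr, pupil, cx, cy):
    """Eq. (1): one low-resolution intensity image I_n = |F' P M_n F U|^2.
    obj_hr : high-resolution complex field U
    pupil  : complex pupil function P, sized like one LR image
    cx, cy : center pixel of the n-th sub-aperture in the shifted HR spectrum
             (must lie far enough from the border for the slice to fit)."""
    m = pupil.shape[0]                                   # LR image size
    spectrum = np.fft.fftshift(np.fft.fft2(obj_hr))      # F U
    sub = spectrum[cy - m // 2:cy + m // 2,
                   cx - m // 2:cx + m // 2]              # M_n: band selection
    lr_field = np.fft.ifft2(np.fft.ifftshift(sub * pupil))  # F' P M_n F U
    return np.abs(lr_field) ** 2                         # intensity on the sensor
```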

FPM reconstruction can be regarded as a maximum a posteriori (MAP) estimation task [35,36], in which the optimal estimate of the parameters is sought to best explain the observations through the forward model of Eq. (1). Conventional methods based on PIE maximize the Gaussian likelihood, or in other words, minimize the ${L_2}$-distance (Euclidean distance) between the amplitudes of model prediction and observation in the image domain [15], given as ${{\cal L}_{{\rm Conventional}}}({\textbf{U},\textbf{P}}) = \left\| {\sqrt {{\textbf{I}_n}} - | {{\textbf{F}^\dagger}\textbf{P}{\textbf{M}_n}\textbf{FU}} |} \right\|_2^2$. However, the reconstructions are highly susceptible to the vignetting effect, noise signals, and systematic errors. Some studies attempted to maximize the Poisson likelihood [37], yet could not fundamentally address the problem, as its root lies in the mismatch of the forward model. In fact, the conventional FPM forward model in Eq. (1), based on the theory of Fourier optics [38], is a simplified version of real experimental conditions: it only holds for approximately linear space-invariant (LSI) coherent microscope systems producing either brightfield or darkfield images, and it cannot model the half-bright and half-dark vignetting images of practical FPM implementations. Therefore, the validity of Eq. (1) cannot be guaranteed over the entire FOV. As the forward model fails to fully explain the observations, the estimates of the object function $\textbf{U}$ and pupil function $\textbf{P}$ cannot be appropriately learned from the raw data, resulting in error-prone reconstructions; this explains why full-FOV reconstruction with conventional FPM suffers from wrinkle artifacts at the edge of the image. Block reconstruction with the potential discarding of “invalid” raw images [12] suppresses the artifacts, as Eq. (1) is forced to hold when the entire FOV is divided into several smaller segments.

As an alternative, we propose a structure-aware forward model, which integrates the concept of pattern recognition into the original physical principle, given as

$${\cal K}\textbf{I}_n^\gamma = {\cal K}{\left| {{\textbf{F}^\dagger}\textbf{P}{\textbf{M}_n}\textbf{FU}} \right|^{2\gamma}} + {\epsilon _{\cal K}},$$
where ${\cal K}$ denotes feature extraction of image structures, such as lines, corners, and dots, using a learned convolution kernel or a trained network, and ${\epsilon _{\cal K}}$ represents the noise signal in the feature domain. The direct physical connotation of this model is partly concealed, as the feature extraction is applied to the raw data as soon as the camera completes a capture. The positive exponent $\gamma$ refers to the Gamma correction of input images, which adjusts the proportion of dark and bright pixels. For $\gamma \lt 1$, darkfield images with lower signal strength are highlighted; thus the reconstruction learns more from darkfield images than from brightfield images. In general, the reconstruction quality using an amplitude-based loss function ($\gamma = 0.5$) is superior to that with an intensity-based loss function ($\gamma = 1$). Note that the value of $\gamma$ should not be too small, as darkfield images are also more susceptible to noise contamination (see Supplement 1, Note 5 for a discussion of $\gamma$).
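One simple concrete choice of ${\cal K}$, first-order finite differences with Gamma correction (the choice adopted later in this section), can be sketched in the same style. This is illustrative only; the paper also considers learned kernels and trained networks:

```python
import numpy as np

def features(intensity, gamma=0.5):
    """K I^gamma in Eq. (2): Gamma-correct an intensity image, then take
    first-order horizontal and vertical differences (edge features)."""
    a = intensity ** gamma                      # gamma = 0.5 -> amplitude image
    gx = np.diff(a, axis=1, append=a[:, -1:])   # nabla_x
    gy = np.diff(a, axis=0, append=a[-1:, :])   # nabla_y
    return np.stack([gx, gy])
```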

Based on the structure-aware forward model, FD-FPM minimizes the ${L_1}$-distance (Manhattan distance) between the features of model predictions and experimental observations, and the feature-domain loss function is given as

$${{\cal L}_{{\rm FD} - {\rm FPM}}}\left({\textbf{U},\textbf{P}} \right) = \sum\limits_{n = 1}^m {\left\| {{\cal K}\textbf{I}_n^\gamma - {\cal K}{{\left| {{\textbf{F}^\dagger}\textbf{P}{\textbf{M}_n}\textbf{FU}} \right|}^{2\gamma}}} \right\|_1}.$$

The ${L_1}$-distance is selected owing to the statistical fact that image features generally follow a heavy-tailed distribution [39], which can be approximated by a Laplacian distribution. Moreover, the sparsity-promoting advantage of the ${L_1}$-distance favors the fitting performance of this forward model [40]. Here, we simply use the first-order edges of images as the features, with ${\cal K} = {[{{\nabla _x},{\nabla _y}}]^ \top}$, and set $\gamma = 0.5$ to implement FD-FPM. In this case, the impact of the vignetting effect can be effectively suppressed owing to its different statistical properties in the image domain and in the feature domain. Specifically, the vignetting effect, which lacks sharp edges, manifests mainly in the image domain, allowing us to separate it out in the feature domain of images.
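Combining the two sketches above gives the mini-batch loss of Eq. (3). Again, this is our own illustration; `fpm_forward` and `features` are the helpers defined earlier:

```python
import numpy as np

def fd_fpm_loss(obj_hr, pupil, batch, gamma=0.5):
    """Eq. (3) over a mini-batch; `batch` holds (cx, cy, observed_LR_image)
    tuples. Sums the L1 feature-domain residual over the selected images."""
    loss = 0.0
    for cx, cy, obs in batch:
        pred = fpm_forward(obj_hr, pupil, cx, cy)        # model prediction I_n
        loss += np.abs(features(pred, gamma) - features(obs, gamma)).sum()
    return loss
```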

For example, let $\sqrt {{I_{{\rm ideal}}}}$ and $\sqrt {{I_{{\rm vignet}}}}$ respectively be the amplitudes of an ideal brightfield prediction and a vignetting observation, as shown in Fig. 2(c). As plotted in Fig. 2(d), their pixel intensity distributions are not consistent: $\sqrt {{I_{{\rm vignet}}}}$ has unexpectedly more dark pixels, from the vignetting area, than $\sqrt {{I_{{\rm ideal}}}}$. This substantial intensity discrepancy causes the model to fail to learn the parameters $\textbf{U}$ and $\textbf{P}$, introducing undesired low-frequency components into the reconstruction and forming severe wrinkle artifacts. After taking the first-order gradient of the images, however, the discrepancy between $\nabla \sqrt {{I_{{\rm ideal}}}}$ and $\nabla \sqrt {{I_{{\rm vignet}}}}$ becomes very small, because the slowly varying vignetting effect contributes far less to the image gradient than valid object structures. As shown in Fig. 2(e), the pixel intensity distributions after feature extraction become close to each other, indicating that the feature-domain forward model is better suited to describing the formation of the images than its image-domain counterpart. As such, the model parameters can be learned more effectively from the observation data to produce artifact-free reconstructions. Similarly, the effects of LED positional deviations and intensity fluctuations on the observations are merely variations of the pixel intensity distribution; thus FD-FPM is also capable of handling these systematic errors.

Besides, the framework of FD-FPM is embedded with an adaptive acceleration strategy. The variable $m$ in Eq. (3) can be any integer from one to the total number of raw images $N$. This flexibility allows feature extraction and gradient computation for a mini-batch of images randomly selected from the raw data. While employing all raw images at once leads to global gradient descent, selecting a smaller batch enhances the possibility of avoiding local minima, especially given the severely non-convex nature of the FD-FPM loss function. The detailed procedure of the FD-FPM implementation is illustrated in Fig. 2(c). The initial guess of $\textbf{U}$ is given by up-sampling the brightfield image captured under the illumination of the central LED. At the beginning of reconstruction, the forward model generates a mini-batch of predicted images from the current parameter estimates and the selected LED illuminations using Eq. (1). The predicted images and their corresponding observations are then processed by the feature extractor according to Eq. (2). Based on the loss function in Eq. (3), the feature-domain error between model prediction and observation is calculated and backdiffracted through the optical system to obtain the complex gradient via $\mathbb{C}\mathbb{R}$-calculus [41]. It is noteworthy that the ${L_1}$-norm is non-differentiable at the origin; in that case, however, the value of the loss function is zero and the complex gradient is simply not updated. The complex gradient is further managed by an optimizer with potential first-order and second-order moments to accelerate the convergence of the algorithm, and finally updates the parameters. Common choices of optimizer include Adam [42], RMSprop, and YOGI [43]. Supplement 1, Note 6 provides the details necessary to complete the whole FD-FPM procedure.
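The six steps in Fig. 2(c) can be prototyped compactly by letting an automatic-differentiation library carry out the $\mathbb{C}\mathbb{R}$-calculus. The sketch below is our own illustration in PyTorch, not the authors' code: complex (Wirtinger) autograd stands in for the hand-derived backdiffraction, and the optimizer supplies the moment-based acceleration; all variable names are our own.

```python
import torch

def fd_fpm_step(obj_hr, pupil, batch, optimizer, gamma=0.5):
    """One FD-FPM iteration on a random mini-batch (Steps 1-6 of Fig. 2(c)).
    obj_hr, pupil: complex tensors created with requires_grad=True;
    optimizer: e.g. torch.optim.Adam([obj_hr, pupil], lr=...)."""
    m = pupil.shape[0]
    loss = 0.0
    for cx, cy, obs in batch:                       # Step 1: predicted images
        spec = torch.fft.fftshift(torch.fft.fft2(obj_hr))
        sub = spec[cy - m // 2:cy + m // 2, cx - m // 2:cx + m // 2]
        amp = torch.fft.ifft2(torch.fft.ifftshift(sub * pupil)).abs() ** (2 * gamma)
        # Steps 2-3: first-order edge features and feature-domain L1 error
        tgt = obs ** gamma
        loss = loss + (torch.diff(amp, dim=1) - torch.diff(tgt, dim=1)).abs().sum() \
                    + (torch.diff(amp, dim=0) - torch.diff(tgt, dim=0)).abs().sum()
    optimizer.zero_grad()
    loss.backward()    # Step 4: backdiffraction -> complex (Wirtinger) gradient
    optimizer.step()   # Steps 5-6: optimizer moments update U and P
    return loss.item()
```

Any of the optimizers mentioned above (Adam, RMSprop, YOGI) can be passed in as `optimizer`, and randomly re-drawing `batch` at each call realizes the mini-batch strategy.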

From a mathematical perspective, backdiffraction generally refers to the Hermitian transpose of the linear operators in Eq. (3); physically, it can be considered as inverting the corresponding optical process. In image-domain FPM reconstruction, for example, the wavefront exiting the object plane is propagated through a tube lens to reach the spectrum plane, where aperture synthesis is completed. The wavefront then passes through another tube lens to realize data acquisition in the sensor plane. These two stages are equivalent to the Fourier transform and its inverse, and the inverse Fourier transform can be regarded as a typical kind of backdiffraction that propagates the wavefront from the spectrum plane back to the spatial plane. In FD-FPM, the inverse Fourier transform is encapsulated by the feature extraction, and the propagated target becomes the gradient of the images. In conclusion, Table 1 compares the properties of conventional FPM and FD-FPM in terms of principles and robustness to common experimental challenges.
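To make this concrete, applying $\mathbb{C}\mathbb{R}$-calculus to Eq. (3) for the amplitude-based choice $\gamma = 0.5$ yields the object gradient below. This is our own restatement of the standard adjoint chain, not the authors' exact expression; $\odot$ denotes element-wise multiplication, ${\mathop{\rm sign}}(\cdot)$ the element-wise sign, and ${{\cal K}^H}$ the Hermitian adjoint of the feature extractor:

$$\frac{\partial {\cal L}}{\partial {\textbf{U}^*}} = \sum\limits_n {\textbf{F}^\dagger}\textbf{M}_n^\top {\textbf{P}^*}\textbf{F}\,{\textbf{r}_n},\quad {\textbf{r}_n} = {{\cal K}^H}\,{\mathop{\rm sign}} \left({{\cal K}\left| {{\psi _n}} \right| - {\cal K}\sqrt {{\textbf{I}_n}}} \right) \odot \frac{{{\psi _n}}}{{2\left| {{\psi _n}} \right|}},\quad {\psi _n} = {\textbf{F}^\dagger}\textbf{P}{\textbf{M}_n}\textbf{F}\textbf{U}.$$

The outer operator chain ${\textbf{F}^\dagger}\textbf{M}_n^\top{\textbf{P}^*}\textbf{F}$ is exactly the Hermitian transpose of the forward chain in Eq. (1), i.e., the backdiffraction of the feature-domain residual ${\textbf{r}_n}$ through the optical system.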


Table 1. Comparison of Conventional FPM and FD-FPM


Fig. 3. Full-FOV reconstruction of a USAF resolution target under the illumination of $25 \times 25$ LEDs. (a) Comparison of reconstructions using AS-EPRY and FD-FPM. (b) Magnified view of the elements of groups 6–11 in the central brightfield raw image, marked by the yellow box in (a). (c) Fourier spectra of the AS-EPRY and FD-FPM reconstructions. (d1), (d2) Reconstructed amplitudes of AS-EPRY and FD-FPM corresponding to (b). (e1), (e2) Magnified images of the regions marked by the orange boxes in (d1), (d2). (f1), (f2) Intensity profiles along the dashed lines ${l_1}$ and ${l_2}$ in (e1), (e2), respectively.


4. DISCUSSION

A. Resolution of Experimental Platform

To determine the design parameters of the illumination source, we manufactured a programmable $25 \times 25$ LED array in advance and experimentally examined its resolution limit on data of a USAF target under green-channel illumination. Considering the size of the selected LED unit (${3.5}\;{\rm mm} \times {3.5}\;{\rm mm}$) and the manufacturing technology, the distance between two adjacent LEDs was set to 4 mm. The experimental conditions remained essentially the same as in Section 2, except that a larger number of LEDs was used to provide an extended synthetic NA (theoretically approximately 0.8, defined as the sum of the objective NA and the illumination NA).

According to Fig. 3(b), the line structures of group 7, element 5 can be clearly identified under the illumination of the central LED. Figures 3(d1) and 3(d2) compare the reconstructed amplitudes of AS-EPRY and FD-FPM for the marked region in Fig. 3(a), and Figs. 3(e1) and 3(e2) show the corresponding magnified views of the elements in group 10. After the synthetic aperture is completed, both obtain a great improvement in resolution, resolving group 10, element 3 (388 nm half-pitch resolution), as evidenced by Figs. 3(f1) and 3(f2). The achievable resolution of an FPM system is jointly determined by the illumination wavelength and the synthetic NA. In this implementation, however, the practical synthetic NA is only about 0.7, roughly the value achievable with $19 \times 19$ LEDs (${{\rm NA}_{{\rm syn}}} = 0.68$). This indicates that LEDs beyond an illumination NA of 0.6 (${{\rm NA}_{{\rm obj}}} = 0.1$) no longer provide effective sample information in their corresponding raw images. This is why a programmable ${19} \times {19}$ LED array was designed for our FPM-WSI platform with a fixed illumination height of 70 mm. Under this configuration, the highest half-pitch resolution of gray images (blue-channel illumination) achieved by the platform reaches 336 nm.
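As a consistency check (our own arithmetic, using the standard coherent half-pitch limit), the blue-channel resolution of the $19 \times 19$ configuration follows directly from the synthetic aperture:

$$\delta_{{\rm half\text{-}pitch}} = \frac{\lambda}{2{{\rm NA}_{{\rm syn}}}} = \frac{456.70\;{\rm nm}}{2 \times 0.68} \approx 336\;{\rm nm}.$$

The same formula with $\lambda = 538.86\;{\rm nm}$ and ${{\rm NA}_{{\rm syn}}} \approx 0.7$ gives approximately 385 nm, consistent with resolving group 10, element 3 (388 nm half pitch) in the green-channel test above.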

Although AS-EPRY reaches a reconstructed resolution comparable to that of FD-FPM, its full-FOV result is severely degraded by wrinkle artifacts due to vignetting: only the middle region of the image is free of artifacts, and the background is marred by uneven patches, as demonstrated in Fig. 3(d1). This experiment offers substantial evidence for the high experimental robustness and data fidelity of FD-FPM in direct full-FOV reconstruction.

B. Recovery for Low Overlapping Rate Data

Previous work [44] has indicated that at least a 35% overlapping rate of sub-apertures in the Fourier domain is required for a successful reconstruction using conventional FPM algorithms. As mentioned in Section 3.B, the edge information of images is sparsely distributed with heavy-tailed properties, which motivates the choice of the ${L_1}$-distance in Eq. (3). Moreover, the ${L_1}$-distance makes more efficient use of the data, enabling robust FPM reconstruction even at a low spectral overlapping rate. According to the redundant-information model for FPM [45], the utilization rate of FD-FPM is calibrated as approximately 30%, higher than that of conventional gradient-descent algorithms (24%).

To validate this inference, we created a group of simulated data with a spectral overlapping rate down to 22.5%, using a cameraman image as the amplitude. Figures 4(a1) and 4(a2) and Figs. 4(b1) and 4(b2) compare the reconstructed amplitudes and their spectra using AS-EPRY and FD-FPM. AS-EPRY fails to reconstruct the amplitude at high quality, generating obvious crosstalk with the phase information. In contrast, FD-FPM still works effectively, producing a clear reconstruction. Benefiting from the processing of the optimizer, FD-FPM, with its superior convergence, also facilitates updating the spectrum beyond the synthetic aperture, as illustrated in Fig. 4(b2). Peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) are selected as quantitative criteria to evaluate the reconstructions. Figures 4(d1) and 4(d2) plot the PSNR and SSIM curves as the overlapping rate is varied by adjusting the illumination height. A lower spectral overlapping rate clearly decreases the precision of the reconstruction. When the overlapping rate is larger than 22%, the SSIM values for both methods become essentially stable. Across the range from 11% to 22%, where the SSIM of the FD-FPM reconstruction exceeds 0.9 and changes continuously, FD-FPM always takes the lead and provides higher-quality reconstruction. We also experimentally examined the reconstruction performance of AS-EPRY and FD-FPM on data of a USAF target, with the comparison shown in Figs. 4(e) and 4(f). Here, we set the illumination height to 30 mm, giving a calculated spectral overlapping rate of 22.47%. The amplitude of the AS-EPRY reconstruction is severely corrupted by irregularly distributed stripes, suggesting that the sub-aperture spectra were not efficiently stitched, while the FD-FPM reconstruction again outperforms AS-EPRY, as in the simulations.
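The quoted overlapping rates follow from simple sub-aperture geometry: two circles of radius ${\rm NA}_{\rm obj}$ in NA space, displaced by the illumination-angle step of one LED pitch. The sketch below (our own derivation, using the standard two-circle areal-overlap formula and assuming an LED adjacent to the optical axis) reproduces the 22.47% figure:

```python
import numpy as np

def overlap_rate(na_obj, led_pitch, height):
    """Areal overlap of two adjacent circular sub-apertures of radius na_obj
    (in NA units) whose centers are separated by the NA step of one LED pitch."""
    d = led_pitch / np.hypot(led_pitch, height)   # NA shift between neighbors
    u = d / (2 * na_obj)                          # half-separation over radius
    if u >= 1.0:
        return 0.0                                # circles no longer intersect
    return (2 / np.pi) * (np.arccos(u) - u * np.sqrt(1 - u * u))

# USAF experiment above: NA_obj = 0.1, 4 mm LED pitch, 30 mm illumination height
print(overlap_rate(0.1, 4.0, 30.0))   # ~0.2247, i.e. the 22.47% quoted
```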


Fig. 4. Reconstructions for data with a lower spectral overlapping rate. (a1), (a2) Reconstructed amplitudes for simulated data with a 22.5% spectral overlapping rate using AS-EPRY and FD-FPM. (b1), (b2) Reconstructed spectra corresponding to (a1), (a2). (c1), (c2) PSNR and SSIM values with simulated overlapping rates of 11% and 22%. (d1), (d2) Plots of PSNR and SSIM with the variation of illumination height. (e), (f) Experimental results for the reconstruction of a USAF target using AS-EPRY and FD-FPM, respectively, at a spectral overlapping rate of 22.47%.


C. Experimental Robustness

Existing FPM algorithms rely heavily on precise knowledge of the LED positions to achieve desirable reconstructions, and are not sufficiently robust to LED positional shifts. However, accurately positioning hundreds of LEDs on the panel is unrealistic in terms of hardware design and manufacturing, as multiple degrees of freedom must be considered, including three-dimensional translations and rotations. Related correction algorithms have been intensively discussed and developed [18,19,25,31,46], but they remain bound to the framework of image-domain optimization. Their performance is limited by noise signals and other systematic errors, and their convergence may easily become stuck in local minima.

Because its forward model better matches the raw data, FD-FPM is more robust to LED misalignment than conventional reconstruction algorithms. We performed verification on simulated data, as shown in Fig. 5. The ideal distance between two adjacent LEDs is 4 mm, and we added a random shift to each LED position with a maximum amplitude of 1 mm, as demonstrated in Fig. 5(a). Such randomly shifted LEDs typically occur in customized LED arrays. Figures 5(c) and 5(d) show the reconstructed amplitude and spectrum using AS-EPRY and FD-FPM. The reconstruction of FD-FPM is quite similar to the ground truth in Fig. 5(b), providing a clean background distribution. In contrast, the quality of the AS-EPRY reconstruction is severely degraded by wrinkle artifacts. We also calculated SSIM and PSNR values against the ground truth to quantitatively evaluate the reconstruction performance. As listed in Fig. 5(e), FD-FPM obtains a higher score than AS-EPRY on both criteria. Results of massive simulations on up to 500 datasets with different degrees of LED positional shifting are plotted in Fig. 5(f1) for PSNR and Fig. 5(f2) for SSIM, which likewise suggest that FD-FPM suffers less from LED position deviations. We also compared FD-FPM with three other state-of-the-art methods, adaptive step-size FPM [20], ADMM-FPM [47], and momentum-PIE [48], as noted in Supplement 1, Note 7. Additional simulation studies are also included there regarding noise interference and LED intensity fluctuations, and the superiority of FD-FPM was likewise verified.


Fig. 5. Comparison of experimental robustness between the conventional FPM algorithm and FD-FPM. (a) Simulated LED positional shifting in the LED array. (b)–(d) Simulated ground truth, reconstructed amplitude, and spectrum. (e) PSNR and SSIM values for the two methods. (f1), (f2) PSNR and SSIM values for 500 groups of simulations with different degrees of LED positional shifting. (g) Raw data obtained with the illumination of the first $12 \times 12$ LEDs, where the vignetting effect can be found in the central $4 \times 4$ images. (h) Magnified view of the ROI in the raw data. (i) Spectra of the reconstructions using AS-EPRY and FD-FPM. (j1), (j2) Reconstructed amplitudes of the USAF target and magnified views of the ROI. (k) Quantitative profiles along lines ${l_1}$ and ${l_2}$. Scale bars in (h) and (j2) denote 14 µm.


Given FD-FPM's reduced dependency on precise LED positioning, it becomes feasible to implement FPM using a square LED array with an even number of LEDs. We collected LR images of a USAF target using $12 \times 12$ LEDs, as depicted in Fig. 5(g), where an obvious vignetting effect can be seen in many half-bright and half-dark images. In this case, what sits directly above the slide is not a central LED but the midpoint between two adjacent LEDs, which makes aligning the LED array more difficult. Figures 5(j1) and 5(j2) show the reconstructed amplitudes of AS-EPRY and FD-FPM with the corresponding magnified views of the ROI. Despite the considerable challenge associated with LED alignment, FD-FPM obtains a full-FOV reconstructed image with enhanced resolution compared with the raw data in Fig. 5(h). According to the quantitative plots in Fig. 5(k), the features of group 9, element 3 on the target can be clearly resolved. The result of AS-EPRY, however, is significantly distorted by the vignetting effect as well as the potential misalignment of LED positions.


Fig. 6. Embedded pupil function recovery and digital refocusing for FPM reconstruction. (a1) Stitched reconstruction for a USAF target consisting of 16 image segments. (a2) Zoomed-in image of the region marked by the yellow box in (a1). (a3) Reconstructed spatially varying pupil functions for each segment. (b1) Central brightfield raw image of a defocused USAF target with unknown defocus distance. (b2) Reconstructed amplitudes using AS-EPRY and FD-FPM. (b3), (c) Reconstructed pupil function of FD-FPM and its Zernike smoothed output. (d) First 13 coefficients of the Zernike polynomial listed by fringe index.


D. Pupil Function Recovery

Local aberration recovery. The use of objective lenses in FPM inherently introduces aberrations. Conventional reconstruction methods address this by incorporating pupil-function updates into the iterative phase retrieval process, enabling correction of these aberrations. The fidelity function of FD-FPM is mathematically symmetric in $\textbf{P}$ and $\textbf{U}$, so their roles can be exchanged; we can thus recover the pupil function by computing the derivative with respect to $\textbf{P}$ from Eq. (3) during the optimization. As shown in Fig. 6(a1), we divided the full-FOV raw data of the USAF target into 64 small segments, reconstructed each, and finally stitched the reconstructions of all image segments. Each segment can be assigned a specific aberration-correcting pupil function, assumed constant over that region. The recovered spatially varying pupil function for each image segment is demonstrated in Fig. 6(a3). Figure 6(a2) shows the magnified view of one image segment, and its corresponding pupil recovery result is marked by a blue box in Fig. 6(a3).
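Concretely, exchanging the roles of $\textbf{P}$ and $\textbf{U}$ in the adjoint chain gives the pupil gradient below (again our own restatement, reusing the backdiffracted feature-domain residual ${\textbf{r}_n}$ written out in Section 3.B):

$$\frac{\partial {\cal L}}{\partial {\textbf{P}^*}} = \sum\limits_n {\left({{\textbf{M}_n}\textbf{F}\textbf{U}} \right)^*} \odot \left({\textbf{F}\,{\textbf{r}_n}} \right),$$

so each sub-aperture contributes an aberration estimate weighted by the portion of the sample spectrum it observes.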

Computational refocusing. Sometimes, the high-magnification objectives used in conventional WSI systems cannot capture precisely focused images within their narrow focal range due to the three-dimensional structure or thickness of slides. Layered scanning along the $z$ axis, also known as a “$z$-stack,” has become an increasingly prevalent practice to deal with this: a series of images is captured at multiple focal planes and then digitally combined into a clearly focused composite. The number of scanning layers should be determined by evaluating the features in each image segment. This broadening of focus during image capture is time-consuming, and reduces the overall digitization speed given the subsequent image stitching.

Sample defocus is equivalent to introducing a defocus phase factor (the fourth Zernike term) into the pupil plane. Therefore, we can safely regard defocus as a special type of optical aberration (a defocus aberration), and the depth of focus (DOF) of the imaging system can be extended beyond that of the objective lens. In the initial FPM implementation [11], digital refocusing depended on adding a predefined defocused wavefront to correct the pupil function. When the defocus distance is unknown, exhaustive reconstruction over candidate distances is necessary, followed by identifying the sharpest image either manually or with software. For a tilted sample, this approach achieves sharpness in different regions of the image separately and stitches the focused regions together to complete refocusing. Subsequent improved algorithms never removed the constraint that the defocus distance be known; even some network-based methods require training to learn a prior over $z$-slices [49] or interpolation along the $z$ axis [50].
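For reference, under the standard angular-spectrum model (a textbook result, not specific to this work), a defocus of distance $z$ multiplies the pupil by

$${\textbf{P}_z}({k_x},{k_y}) = \textbf{P}({k_x},{k_y})\exp \left({iz\sqrt {{{\left({2\pi /\lambda}\right)}^2} - k_x^2 - k_y^2}}\right)$$

inside the NA circle. Recovering the pupil phase therefore implicitly recovers $z$ itself, which is why an accurate pupil estimate enables refocusing without a prior defocus distance.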

As shown in Fig. 6(b1), the USAF target is placed off the focal plane at an unknown defocus distance. After FD-FPM reconstruction, both the HR amplitude in Fig. 6(b2) and the pupil function in Fig. 6(b3) can be directly recovered. The Zernike fitting of the pupil function given in Fig. 6(c) exhibits three $2\pi$ phase wraps, implying very large aberrations. The plot of Zernike coefficients indicates that defocus and tilt aberrations exist in the imaging system. The reconstruction of AS-EPRY fails without a prior defocus distance for aberration compensation, as the algorithm falls into local minima under such large aberrations. The ability of FD-FPM to realize prior-free digital refocusing can be attributed to two factors: first, the loss function formulated in the feature domain is more robust to noise, which lets our method take full advantage of the valid information in the images; second, the use of an optimizer helps escape local minima so that the optimal parameters can be found.

E. Current Limitations and Future Works

Time and space complexity. Repeatedly performing fast Fourier transforms (FFTs) on large-scale images is the most time-consuming part of the FD-FPM implementation. For an image with $n \times n$ pixels, the time complexity of the FFT is $O({n^2}\log {n^2})$. The FFT for an image with $n = 2048$ therefore costs roughly 88 times that for an image with $n = 256$, since $(2048/256)^2 \times \log(2048^2)/\log(256^2) = 64 \times 1.375 = 88$. Hence, if the original image with $2048 \times 2048$ pixels is uniformly divided into $8 \times 8$ non-overlapping segments ($256 \times 256$ pixels each) for conventional FPM reconstruction, FD-FPM takes roughly 1.4-fold time ($88/64$) to reconstruct the full FOV without parallel computation. Consequently, when operated on a CPU-based platform, FD-FPM has a disadvantage in time cost but is significantly superior in reconstruction quality.

Considering parallel processing with GPU acceleration, the efficiency gap can be largely bridged. As the spectrum in conventional FPM is updated in a coupled, sequential manner and cannot be separated, its efficiency ceiling depends on the number of image segments. For FD-FPM, the maximum number of parallel threads equals the number of raw images, since the adaptive acceleration strategy theoretically allows a mini-batch involving all the data. In our experiments, the time required for FD-FPM to reconstruct the entire FOV, based on 361 raw images with $2048 \times 2048$ pixels and an upsampling rate of 10, is approximately 8–10 min, while conventional FPM takes about 6–8 min to complete the stitched full-FOV reconstruction. For a single image segment, the efficiency of FD-FPM is likely to surpass that of conventional FPM: for an image segment with $256 \times 256$ pixels, for example, the reconstruction times of FD-FPM and conventional FPM are 1–1.5 min and 1.2–1.4 min, respectively.

The mini-batch processing brings significant improvements in efficiency and convergence, but also occupies ${N_{{\rm batch}}}$-fold more memory than conventional FPM, where ${N_{{\rm batch}}}$ denotes the number of raw images in the mini-batch. The space complexity of FD-FPM raises the hardware memory requirement, limiting its application on low-end computational platforms. Decreasing ${N_{{\rm batch}}}$ reduces the memory footprint of FD-FPM, at the cost of the benefits of batch gradient descent in reconstruction speed, quality, and robustness.

Selection of optimizers. The current implementation of FD-FPM may struggle to choose a proper optimizer for the loss function. The convergence performance of an optimizer depends not only on its own structure and the loss function, but is also closely related to the statistical properties of the input data. Unfortunately, there is no clear guidance on which type of optimizer yields the best reconstruction [51]. We discuss the properties of different optimizers for reference in Supplement 1, Note 8, and readers may make appropriate adjustments with finely tuned hyper-parameters for a given optimizer to obtain satisfying results. In addition, we do not add any penalty terms to the loss function; thus FD-FPM may still become noise-sensitive, especially for low-SNR datasets. Fortunately, the high-brightness LED illumination, the better-matched forward model, the embedded adaptive acceleration strategy, and the processing of the optimizer jointly tackle the noise interference in our work.

Future developments. More modifications and extensions of the framework can be implemented in the future. The existing framework utilizes the first-order gradient to extract edge features from images, addressing the vignetting effect, noise signals, and a series of systematic errors in the feature domain. While first-order edge detection has proved effective, the potential role that the second-order gradient of images plays in overcoming the vignetting effect should not be neglected. However, computing the second-order gradient of images can inadvertently amplify noise. To isolate valid image features while suppressing noise, we believe that training band-limited edge extractors through dictionary learning or the development of neural networks is a promising solution. Such trained extractors would be designed with a bandwidth that precisely matches the width of the pupil function. Please refer to Supplement 1, Note 5 for the performance of FD-FPM with different feature extractors. Future extensions of FD-FPM include, but are not limited to, reflective FPM [52,53], near-field FPM [54], and tomographic FPM [55–57]. Imaging at more macroscopic scales, such as remote sensing [58], might also draw meaningful inspiration from our method.

5. CONCLUSIONS

In this paper, we have reported an efficient FPM computational framework, termed FD-FPM, for stitching-free, full-FOV reconstruction. This framework constructs a forward model based on feature extraction, which accords better with the captured observations, and formulates the loss function of the optimization in the feature domain of images. The feature-domain error, after backdiffraction through the optical system, is processed by an optimizer to update the object function (complex amplitude) and the pupil function. Such a design completely bypasses the challenging vignetting effect in typical FPM systems, so that high-quality reconstruction no longer depends on block-wise reconstruction followed by stitching. Moreover, FD-FPM effectively handles deviations of LED positions and LED intensity fluctuations, reducing the need for precise calibration of systematic errors. Under certain experimental conditions where conventional algorithms struggle to reconstruct successfully, FD-FPM also performs impressively, for example, digital refocusing without a prior defocus distance and reconstruction of data with a lower spectral overlapping rate.

We further developed an FPM-WSI platform based on this framework, realizing full-color, high-throughput imaging with a 4.7 mm diameter FOV and 336 nm achievable resolution. Typical characteristics and advantages of the platform are summarized as follows: (1) high-speed data acquisition (within 4 s for a single slide); (2) automatic batch processing of four slides; (3) extensibility to multiple imaging modes and techniques; (4) a user-friendly workflow with optional colorization schemes. The reported platform is expected to promote the widespread adoption of FPM in digital pathology.

It is believed that improvements in hardware capability will enable adaptation to more complex application scenarios. For example, the reconstruction FOV demonstrated in Section 2 occupies only a small portion of the entire sample; cameras with larger sensor areas will enable observation covering the whole slide and provide more comprehensive references for users' assessment. The acquisition time for each raw image can approach the 2 ms exposure limit using cameras with higher readout speeds, which is particularly suited to the intraoperative examination of pathological slides and facilitates the prompt formulation of operation plans by clinicians. Furthermore, for large-scale pathological imaging and biomedical research, designing a large-capacity storage and transfer system would guarantee efficient operation.

Funding

National Natural Science Foundation of China (12104500); Key Research and Development Projects of Shaanxi Province (2023-YBSF-263).

Acknowledgment

An Pan thanks Jiurun Chen (Tsinghua University, China) for his contributions to the design of the FPM-WSI platform, and assistant engineer Huiqin Gao (Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, China) for her constructive discussions on the redundant information model for FPM.

Disclosures

The authors declare no competing financial interests.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

REFERENCES

1. X. Zhu, Q. Huang, A. DiSpirito, et al., “Real-time whole-brain imaging of hemodynamics and oxygenation at micro-vessel resolution with ultrafast wide-field photoacoustic microscopy,” Light Sci. Appl. 11, 138 (2022). [CrossRef]  

2. J. Lu, B. Chen, M. Levy, et al., “Somatosensory cortical signature of facial nociception and vibrotactile touch–induced analgesia,” Sci. Adv. 8, eabn6530 (2022). [CrossRef]  

3. L. Felger, O. Rodrguez-Núñez, R. Gros, et al., “Robustness of the wide-field imaging Mueller polarimetry for brain tissue differentiation and white matter fiber tract identification in a surgery-like environment: an ex vivo study,” Biomed. Opt. Express 14, 2400–2415 (2023). [CrossRef]  

4. A. Banerjee, B. A. Wang, J. Teutsch, et al., “Analogous cognitive strategies for tactile learning in the rodent and human brain,” Prog. Neurobiol. 222, 102401 (2023). [CrossRef]  

5. N. Farahani, A. V. Parwani, and L. Pantanowitz, “Whole slide imaging in pathology: advantages, limitations, and emerging perspectives,” Pathol. Lab. Med. Int. 7, 23–33 (2015). [CrossRef]  

6. N. Kanwal, F. Pérez-Bueno, A. Schmidt, et al., “The devil is in the details: whole slide image acquisition and processing for artifacts detection, color variation, and data augmentation: a review,” IEEE Access 10, 58821–58844 (2022). [CrossRef]  

7. K. Smith, Y. Li, F. Piccinini, et al., “Cidre: an illumination-correction method for optical microscopy,” Nat. Methods 12, 404–406 (2015). [CrossRef]  

8. S. Wang, X. Liu, Y. Li, et al., “A deep learning-based stripe self-correction method for stitched microscopic images,” Nat. Commun. 14, 5393 (2023). [CrossRef]  

9. A. Moreira, P. Prats-Iraola, M. Younis, et al., “A tutorial on synthetic aperture radar,” IEEE Geosci. Remote Sens. Mag. 1, 6–43 (2013). [CrossRef]  

10. J. Holloway, Y. Wu, M. K. Sharma, et al., “SAVI: synthetic apertures for long-range, subdiffraction-limited visible imaging using Fourier ptychography,” Sci. Adv. 3, e1602564 (2017). [CrossRef]  

11. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7, 739–745 (2013). [CrossRef]  

12. A. Pan, C. Zuo, Y. Xie, et al., “Vignetting effect in Fourier ptychographic microscopy,” Opt. Laser Eng. 120, 40–48 (2019). [CrossRef]  

13. Y. Gao, A. Pan, H. Gao, et al., “Design of Fourier ptychographic illuminator for single full-FOV reconstruction,” Opt. Express 31, 29826–29842 (2023). [CrossRef]  

14. G. Zheng, Fourier Ptychographic Imaging: a MATLAB Tutorial (Morgan & Claypool, 2016).

15. L.-H. Yeh, J. Dong, J. Zhong, et al., “Experimental robustness of Fourier ptychography phase retrieval algorithms,” Opt. Express 23, 33214–33240 (2015). [CrossRef]  

16. G. Zheng, C. Shen, S. Jiang, et al., “Concept, implementations and applications of Fourier ptychography,” Nat. Rev. Phys. 3, 207–223 (2021). [CrossRef]  

17. R. Eckert, Z. F. Phillips, and L. Waller, “Efficient illumination angle self-calibration in Fourier ptychography,” Appl. Opt. 57, 5434–5442 (2018). [CrossRef]  

18. J. Zhang, X. Tao, P. Sun, et al., “A positional misalignment correction method for Fourier ptychographic microscopy based on the quasi-Newton method with a global optimization module,” Opt. Commun. 452, 296–305 (2019). [CrossRef]  

19. W. Huang, S. Pan, Q. Zhou, et al., “Positional misalignment correction for Fourier ptychographic microscopy based on intensity distribution,” Proc. SPIE 11549, 115490D (2020). [CrossRef]  

20. C. Zuo, J. Sun, and Q. Chen, “Adaptive step-size strategy for noise-robust Fourier ptychographic microscopy,” Opt. Express 24, 20724–20744 (2016). [CrossRef]  

21. R. Claveau, P. Manescu, D. Fernandez-Reyes, et al., “Structure-dependent amplification for denoising and background correction in Fourier ptychographic microscopy,” Opt. Express 28, 35438–35453 (2020). [CrossRef]  

22. Z. Bian, S. Dong, and G. Zheng, “Adaptive system correction for robust Fourier ptychographic imaging,” Opt. Express 21, 32400–32410 (2013). [CrossRef]  

23. L. Hou, H. Wang, M. Sticker, et al., “Adaptive background interference removal for Fourier ptychographic microscopy,” Appl. Opt. 57, 1575–1580 (2018). [CrossRef]  

24. L. Tian, X. Li, K. Ramchandran, et al., “Multiplexed coded illumination for Fourier ptychography with an LED array microscope,” Biomed. Opt. Express 5, 2376–2389 (2014). [CrossRef]  

25. A. Zhou, W. Wang, N. Chen, et al., “Fast and robust misalignment correction of Fourier ptychographic microscopy for full field of view reconstruction,” Opt. Express 26, 23661–23674 (2018). [CrossRef]  

26. M. Valentino, V. Bianco, L. Miccio, et al., “Beyond conventional microscopy: observing kidney tissues by means of Fourier ptychography,” Front. Physiol. 14, 206 (2023). [CrossRef]  

27. R. Kimmel, M. Elad, D. Shaked, et al., “A variational framework for retinex,” Int. J. Comput. Vis. 52, 7–23 (2003). [CrossRef]  

28. W. Li, K. Mao, H. Zhang, et al., “Selection of Gabor filters for improved texture feature extraction,” in IEEE International Conference on Image Processing (IEEE, 2010), pp. 361–364.

29. S. Zhang, T. T. Berendschot, and J. Zhou, “ELFPIE: an error-laxity Fourier ptychographic iterative engine,” Signal Process. 210, 109088 (2023). [CrossRef]  

30. X. Ou, G. Zheng, and C. Yang, “Embedded pupil function recovery for Fourier ptychographic microscopy,” Opt. Express 22, 4960–4972 (2014). [CrossRef]  

31. A. Pan, Y. Zhang, T. Zhao, et al., “System calibration method for Fourier ptychographic microscopy,” J. Biomed. Opt. 22, 096005 (2017). [CrossRef]  

32. A. Pan, Y. Zhang, K. Wen, et al., “Subwavelength resolution Fourier ptychography with hemispherical digital condensers,” Opt. Express 26, 23119–23131 (2018). [CrossRef]  

33. M. C. Montalto, R. R. McKay, and R. J. Filkins, “Autofocus methods of whole slide imaging systems and the introduction of a second-generation independent dual sensor scanning method,” J. Pathol. Inform. 2, 44 (2011). [CrossRef]  

34. J. Chen, A. Wang, A. Pan, et al., “Rapid full-color Fourier ptychographic microscopy via spatially filtered color transfer,” Photon. Res. 10, 2410–2421 (2022). [CrossRef]  

35. R. Gribonval, “Should penalized least squares regression be interpreted as maximum a posteriori estimation?” IEEE Trans. Signal Process. 59, 2405–2410 (2011). [CrossRef]  

36. M. Pereyra, “Maximum-a-posteriori estimation with Bayesian confidence regions,” SIAM J. Imaging Sci. 10, 285–302 (2017). [CrossRef]  

37. L. Bian, J. Suo, J. Chung, et al., “Fourier ptychographic reconstruction using Poisson maximum likelihood and truncated Wirtinger gradient,” Sci. Rep. 6, 27384 (2016). [CrossRef]  

38. J. W. Goodman, Introduction to Fourier Optics (Roberts & Company, 2005).

39. J. Kotera, F. Šroubek, and P. Milanfar, “Blind deconvolution using alternating maximum a posteriori estimation with heavy-tailed priors,” in Computer Analysis of Images and Patterns: 15th International Conference, CAIP (Springer, 2013), pp. 59–66.

40. E. J. Candes, M. B. Wakin, and S. P. Boyd, “Enhancing sparsity by reweighted l1 minimization,” J. Fourier Anal. Appl. 14, 877–905 (2008). [CrossRef]  

41. K. Kreutz-Delgado, “The complex gradient operator and the CR-calculus,” arXiv, arXiv:0906.4835 (2009).

42. D. P. Kingma and J. Ba, “Adam: a method for stochastic optimization,” arXiv, arXiv:1412.6980 (2014).

43. M. Zaheer, S. Reddi, D. Sachan, et al., “Adaptive methods for nonconvex optimization,” Adv. Neural Inf. Process. Syst. 31 (2018).

44. J. Sun, Q. Chen, Y. Zhang, et al., “Sampling criteria for Fourier ptychographic microscopy in object space and frequency space,” Opt. Express 24, 15765–15781 (2016). [CrossRef]  

45. H. Gao, A. Pan, Y. Gao, et al., “Redundant information model for Fourier ptychographic microscopy,” Opt. Express 31, 42822–42837 (2023). [CrossRef]  

46. J. Sun, Q. Chen, Y. Zhang, et al., “Efficient positional misalignment correction method for Fourier ptychographic microscopy,” Biomed. Opt. Express 7, 1336–1350 (2016). [CrossRef]  

47. A. Wang, Z. Zhang, S. Wang, et al., “Fourier ptychographic microscopy via alternating direction method of multipliers,” Cells 11, 1512 (2022). [CrossRef]  

48. A. Maiden, D. Johnson, and P. Li, “Further improvements to the ptychographical iterative engine,” Optica 4, 736–745 (2017). [CrossRef]  

49. L. Bouchama, B. Dorizzi, M. Thellier, et al., “Fourier ptychographic microscopy image enhancement with bi-modal deep learning,” Biomed. Opt. Express 14, 3172–3189 (2023). [CrossRef]  

50. H. Zhou, B. Y. Feng, H. Guo, et al., “Fourier ptychographic microscopy image stack reconstruction using implicit neural representations,” Optica 10, 1679–1687 (2023). [CrossRef]  

51. S. Reddi, S. Kale, and S. Kumar, “On the convergence of Adam and beyond,” in International Conference on Learning Representations (2018).

52. K. Guo, S. Dong, and G. Zheng, “Fourier ptychography for brightfield, phase, darkfield, reflective, multi-slice, and fluorescence imaging,” IEEE J. Sel. Top. Quantum Electron. 22, 77–88 (2015). [CrossRef]  

53. H. Lee, B. H. Chon, and H. K. Ahn, “Reflective Fourier ptychographic microscopy using a parabolic mirror,” Opt. Express 27, 34382–34391 (2019). [CrossRef]  

54. H. Zhang, S. Jiang, J. Liao, et al., “Near-field Fourier ptychography: super-resolution phase retrieval via speckle illumination,” Opt. Express 27, 7498–7512 (2019). [CrossRef]  

55. P. Li, D. J. Batey, T. B. Edo, et al., “Separation of three-dimensional scattering effects in tilt-series Fourier ptychography,” Ultramicroscopy 158, 1–7 (2015). [CrossRef]  

56. R. Horstmeyer, J. Chung, X. Ou, et al., “Diffraction tomography with Fourier ptychography,” Optica 3, 827–835 (2016). [CrossRef]  

57. C. Zuo, J. Sun, J. Li, et al., “Wide-field high-resolution 3D microscopy with Fourier ptychographic diffraction tomography,” Opt. Laser Eng. 128, 106003 (2020). [CrossRef]  

58. Z. Tian, M. Zhao, D. Yang, et al., “Optical remote imaging via Fourier ptychography,” Photon. Res. 11, 2072–2083 (2023). [CrossRef]  

Supplementary Material (4)

Supplement 1: Supplemental document
Visualization 1
Visualization 2
Visualization 3




Figures (6)

Fig. 1. Full-color FPM reconstructions of a pathology slide. (a) Human colorectal carcinoma section. (b1)–(b3) Raw data of three color channels affected by the vignetting effect. (c1) Direct full-FOV reconstruction using FD-FPM. (c2) Non-blocked and stitched full-FOV reconstruction using conventional FPM (AS-EPRY). (d1), (e1) Zoomed-in images of two ROIs in (c1). (d2), (e2) and (d3), (e3) Corresponding images captured by a color image sensor using $\times 20$ and $\times 4$ objective lenses, respectively, for comparison. (f1) Sub-region with severe vignetting artifacts in the non-blocked AS-EPRY reconstruction. (g1) Sub-region with color difference and stitching artifacts in the stitched AS-EPRY reconstruction. (f2), (g2) Corresponding sub-regions of the FD-FPM reconstruction.
Fig. 2. FD-FPM procedures and experimental setup. (a) Overall architecture of the FPM-WSI platform, generally consisting of a microscopic imaging system, an automatic control system, and a host computer; (b1) $19 \times 19$ programmable LED array for sample illumination; (b2) packaged appearance of the LED array with the central LED lit; (b3) $z$-axis driver holding the objective lens for autofocusing; (b4) $x {\text -} y$ axis displacement stage for mechanical movement. (c) Flowchart of FD-FPM involving six steps. Step 1: generation of predicted images based on current estimations. Step 2: feature extraction for the predicted images and corresponding observations. Step 3: calculation of the feature-domain error between model predictions and observations. Step 4: error backdiffraction to yield the complex gradient. Step 5: management of an optimizer. Step 6: update of system parameters. (d), (e) Statistical analysis of pixel intensity distributions for images in (c) in the image domain and feature domain.
Fig. 3. Full-FOV reconstruction of a USAF resolution target under the illumination of $25 \times 25$ LEDs. (a) Comparison of reconstructions using AS-EPRY and FD-FPM. (b) Magnified view of the group 6–11 elements in the central brightfield raw image, marked by the yellow box in (a). (c) Fourier spectra of the AS-EPRY and FD-FPM reconstructions. (d1), (d2) Reconstructed amplitudes of AS-EPRY and FD-FPM corresponding to (b). (e1), (e2) Magnified images of the region marked by the orange box in (d2). (e) Intensity profiles along the dashed lines in (e1), (e2). (f1), (f2) Intensity profiles along lines ${l_1}$ and ${l_2}$, respectively.
Fig. 4. Reconstructions of data with a lower spectrum overlapping rate. (a1), (a2) Reconstructed amplitudes for simulated data with a 22.5% spectrum overlapping rate using AS-EPRY and FD-FPM. (b1), (b2) Reconstructed spectra corresponding to (a1), (a2). (c1), (c2) PSNR and SSIM values with simulated overlapping rates of 11% and 22%. (d1), (d2) Plots of PSNR and SSIM with varying illumination height. (e), (f) Experimental reconstructions of the USAF target using AS-EPRY and FD-FPM, respectively, with a 22.47% spectrum overlapping rate.
Fig. 5. Comparison of the experimental robustness of the conventional FPM algorithm and FD-FPM. (a) Simulated LED positional shifting in the LED array. (b)–(d) Simulated ground truth, reconstructed amplitude, and spectrum. (e) PSNR and SSIM values for the two methods. (f1), (f2) PSNR and SSIM values for 500 groups of simulations with different degrees of LED positional shifting. (g) Raw data obtained under the illumination of the first $12 \times 12$ LEDs, where the vignetting effect can be found in the central $4 \times 4$ images. (h) Magnified view of the ROI in the raw data. (i) Spectra of the AS-EPRY and FD-FPM reconstructions. (j1), (j2) Reconstructed amplitude of the USAF target and magnified view of the ROI. (k) Quantitative profiles along lines ${l_1}$ and ${l_2}$. Scale bars in (h) and (j2) denote 14 µm.
Fig. 6. Embedded pupil function recovery and digital refocusing for FPM reconstruction. (a1) Stitched reconstruction of a USAF target consisting of 16 image segments. (a2) Zoomed-in image of the region marked by the yellow box in (a1). (a3) Reconstructed spatially varying pupil functions for each segment. (b1) Central brightfield raw image of a defocused USAF target with unknown defocus distance. (b2) Reconstructed amplitudes using AS-EPRY and FD-FPM. (b3), (c) Reconstructed pupil function of FD-FPM and its Zernike-smoothed output. (d) First 13 coefficients of the Zernike polynomial listed by fringe index.

Tables (1)

Table 1. Comparison of Conventional FPM and FD-FPM

Equations (3)


$$I_n = \left| \mathbf{F}\, P\, M_n\, \mathbf{F} U \right|^2 + \epsilon,$$

$$K \ast I_n^{\gamma} = K \ast \left| \mathbf{F}\, P\, M_n\, \mathbf{F} U \right|^{2\gamma} + \epsilon_K,$$

$$\mathcal{L}_{\mathrm{FD\text{-}FPM}}(U, P) = \sum_{n=1}^{m} \left\| K \ast I_n^{\gamma} - K \ast \left| \mathbf{F}\, P\, M_n\, \mathbf{F} U \right|^{2\gamma} \right\|_1 .$$
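For readers who prefer code to notation, below is a minimal Python sketch of the forward model and feature-domain loss in the three equations above. It assumes a Laplacian kernel as the feature extractor $K$ and an illustrative crop_n helper that selects the pupil-sized sub-spectrum for the $n$-th LED; the names and structure are ours, not the authors' implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

# Illustrative feature kernel K: a zero-mean Laplacian that responds to
# edges while suppressing the slowly varying vignetting background.
K = np.array([[0.0,  1.0, 0.0],
              [1.0, -4.0, 1.0],
              [0.0,  1.0, 0.0]])

def predicted_intensity(obj_spectrum, pupil, crop_n, eps=1e-9):
    """First equation: low-res intensity predicted for the n-th LED.
    crop_n (hypothetical helper) selects the pupil-sized sub-spectrum
    shifted according to that LED's illumination angle (M_n F U)."""
    sub = crop_n(obj_spectrum) * pupil            # apply pupil P
    field = np.fft.ifft2(np.fft.ifftshift(sub))   # back to image domain
    return np.abs(field) ** 2 + eps               # |.|^2 + epsilon

def feature_loss(measured, predicted, gamma=0.5):
    """Second and third equations: l1 distance between gamma-compressed
    intensities after convolution with the feature kernel K."""
    f_meas = fftconvolve(measured ** gamma, K, mode="same")
    f_pred = fftconvolve(predicted ** gamma, K, mode="same")
    return np.abs(f_meas - f_pred).sum()

def total_loss(measured_stack, obj_spectrum, pupil, crops, gamma=0.5):
    """Sum over all m LEDs, as in the loss L_FD-FPM(U, P)."""
    return sum(
        feature_loss(I_n, predicted_intensity(obj_spectrum, pupil, c), gamma)
        for I_n, c in zip(measured_stack, crops)
    )
```

In practice this loss would be minimized over the object spectrum $U$ and pupil $P$ using the complex (Wirtinger) gradient and an Adam-type optimizer, as outlined in the six steps of Fig. 2.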