Alternating projection combined with fast gradient projection (FGP-AP) method for intensity-only measurement optical diffraction tomography in LED array microscopy

Open Access

Abstract

Optical diffraction tomography (ODT) is a powerful label-free measurement tool that can quantitatively image the three-dimensional (3D) refractive index (RI) distribution of samples. However, the inherent "missing cone problem," limited illumination angles, and dependence on intensity-only measurements in a simplified imaging setup can all lead to insufficient information mapping in the Fourier domain, affecting 3D reconstruction results. In this paper, we propose the alternating projection combined with the fast gradient projection (FGP-AP) method to compensate for the above problem, which effectively reconstructs the 3D RI distribution of samples using intensity-only images captured from LED array microscopy. The FGP-AP method employs the alternating projection (AP) algorithm for gradient descent and the fast gradient projection (FGP) algorithm for regularization constraints. This approach is equivalent to incorporating prior knowledge of sample non-negativity and smoothness into the 3D reconstruction process. Simulations demonstrate that the FGP-AP method improves reconstruction quality compared to the original AP method, particularly in the presence of noise. Experimental results, obtained from mouse kidney cells and label-free blood cells, further affirm the superior 3D imaging efficacy of the FGP-AP method.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Traditional imaging systems lose significant information when acquiring two-dimensional (2D) intensity images of objects. For instance, microscopic imaging of biological samples captures intensity information only from the focal plane, lacking depth details. Laser scanning confocal microscopy (LSCM) stands as a well-established and commercialized three-dimensional (3D) imaging technique. It mitigates non-focal-plane stray light by placing two conjugated pinholes in the illumination and detection light paths, and then records fluorescence images from different layers through mechanical scanning in the depth direction to acquire comprehensive 3D structural information [1,2]. To minimize exposure to high-intensity lasers in LSCM, light-sheet microscopy employs a thin light sheet to illuminate the sample from the side. It collects 2D fluorescence images from the front and utilizes mechanical scanning to reconstruct the sample’s 3D structure [3]. Super-resolution fluorescence microscopy, recognized with the 2014 Nobel Prize in Chemistry, enables the observation of nanoscale subcellular organelles beyond the diffraction limit [4-6]. However, all the above 3D imaging methods are based on labeling technology, which has certain limitations. First, the fluorescence labeling process is complex and time-consuming, potentially affecting normal cellular metabolic activities. Second, the excitation light can damage the sample, and photobleaching and phototoxicity prevent long-term observation. Last, some important components such as small molecules and lipids may be difficult or impossible to fluorescently label.

Optical diffraction tomography (ODT) is a powerful label-free measurement tool that can be used for 3D imaging of transparent objects, with significant implications for diverse fields such as biology [7] and nanotechnologies [8]. ODT typically captures the complex amplitudes of scattered fields from samples illuminated at different angles through interferometric techniques, and reconstructs the sample’s 3D refractive index (RI) distribution by solving the inverse scattering problem [9]. In 1969, Wolf linearized the wave equation using the Born approximation, deriving the Fourier diffraction theorem that maps an object to its scattered field in the Fourier domain [10]. In 1981, Devaney introduced the Rytov approximation to eliminate the "thin" sample assumption of the Born approximation, though the sample is still required to be weakly scattering [11]. Nonlinear methods, such as the beam propagation method (BPM) [12], the contrast source inversion method [13], and the Lippmann-Schwinger equation method [14,15], offer increased accuracy but at the price of high computational cost. Multi-angle illumination in ODT can be achieved through sample rotation or illumination rotation, each influencing the reconstruction via the Fourier diffraction theorem differently. While sample rotation can map a large and symmetric range of frequency-domain information, it often requires intricate sample manipulation: to cover the entire low-frequency region, the sample must be fully rotated about two orthogonal axes. Illumination rotation avoids sample interference, but the measured angles are often limited and there is an inherent "missing cone problem". Additionally, complex interference devices with high environmental requirements and coherent laser noise can compromise imaging quality and 3D reconstruction results. Therefore, achieving the same performance as ODT with a simpler intensity-only imaging device has attracted the attention of researchers.

In 2013, Zheng et al. proposed Fourier ptychographic microscopy (FPM), a technique that replaces the illumination part of a commercial microscope with a programmable LED array. By utilizing intensity images from multi-angle LED illumination and employing the alternating projection (AP) algorithm, the approach achieves a large-field, high-resolution complex amplitude image [16]. FPM stitches images in the 2D frequency domain, yielding 2D imaging results. Subsequent studies have explored 3D imaging through the integration of digital refocusing [17] or multi-slice models [18,19]. However, increasing the number of layers in a multi-slice model not only extends computational time but also introduces errors. Since the LED array microscope precisely provides the multi-angle illumination required for ODT, it holds potential for 3D RI imaging. In 2016, Horstmeyer et al. utilized the Fourier diffraction theorem to design an AP algorithm, filling in sample information in the 3D Fourier domain with multi-angle illumination intensity images, thus achieving 3D RI imaging [20]. However, the LED array microscope, which changes the illumination angle, encounters the aforementioned "missing cone problem," leading to the exclusion of some low-frequency information. Subsequently, Zhou et al. used deep image prior as an untrained deep 3D convolutional neural network for 3D reconstruction to address the missing cone problem [21]. In 2020, Zuo et al. established a forward imaging model including the Born and Rytov approximations, and added non-negativity constraints to the AP algorithm to reduce the influence of "missing cones" [22]. This group also employed $z$-scanning for defocus contrast and enhanced optical sectioning capabilities, integrating the theory of the transport of intensity equation [23]. Beyond the "missing cone problem", limited illumination angles also make the Fourier-domain information mapping incomplete, affecting the quality of 3D reconstruction results. While various methods have addressed the "missing cone problem" and limited illumination angles in ODT algorithm research, including regularization [24-28] and machine learning [29-31], few studies have applied these methods to LED array microscopy, which relies on intensity-only measurements.

In this paper, we propose the alternating projection combined with fast gradient projection (FGP-AP) method to improve the 3D imaging quality of LED array microscopy, addressing the problem of incomplete Fourier-domain information mapping during intensity-only measurement reconstruction. Drawing inspiration from optimization theory, the FGP-AP method performs the AP algorithm for gradient descent and the FGP algorithm for regularization constraints. Specifically, after each round of AP updates, the FGP-AP method transforms the Fourier-domain mapping result into the spatial domain and applies the FGP algorithm once. Here, we extend the 2D FGP algorithm [32], rooted in total variation (TV) regularized image processing, to 3D. The 3D FGP algorithm applies a regularization term combining non-negativity and TV constraints in the spatial domain, and the processed result is then transformed back into the Fourier domain for the next iteration. The sequence of inverse Fourier transform, spatial-domain operations, and Fourier transform involved in the 3D FGP algorithm can be seen as using prior knowledge about sample non-negativity and smoothness to fill in the unmapped regions of the Fourier domain, thereby improving the quality of 3D reconstruction. Simulation results demonstrate that the FGP-AP method enhances the reconstruction quality compared to the original AP method, particularly in the presence of noise. Additionally, the inclusion of the FGP algorithm does not significantly increase the processing time. Finally, mouse kidney cell images and USAF target images from public datasets, together with label-free blood cell images captured via a self-built LED array microscope, are used for 3D reconstruction, demonstrating the superior 3D imaging performance of the FGP-AP method.

2. Methods

2.1 Forward scattering model and Fourier diffraction theorem

The forward scattering model aims to compute the scattered field of a biological sample with a known 3D RI distribution $n\left( {\mathbf{r}} \right)$, where ${\mathbf{r}} = \left( {x,y,z} \right)$ denotes spatial-domain coordinates. The corresponding inverse scattering problem involves reconstructing this RI distribution from the measured scattered field. The forward scattering measurement process is illustrated in Fig. 1(a1). In this process, the biological sample is assumed to be immersed in a homogeneous background medium with refractive index $n_b$. It is illuminated by the incident field $u_i$, resulting in a scattered field $u_s$. The Lippmann-Schwinger (LS) integral equation provides an accurate description of this scattering process [33]:

$$u\left( {\mathbf {r}} \right) = {u_i}\left( {\mathbf {r}} \right) + \int_\Omega {g\left( {{\mathbf {r}} - {\mathbf {r'}}} \right)} f\left( {{\mathbf {r'}}} \right)u\left( {{\mathbf {r'}}} \right){\rm{d}}{\mathbf {r'}}$$

Fig. 1. Schematic diagram of Fourier diffraction theorem. (a1) Fourier diffraction theorem without an aperture limit. (a2) $y-z$ cross-section of the Fourier domain coverage region obtained from the mapping relation in (a1). (a3) 3D display of the mapped coverage area shown in (a2). (b1) Fourier diffraction theorem with an aperture limit. (b2) $y-z$ cross-section of the Fourier domain coverage region obtained from the mapping relation in (b1). (b3) 3D display of the mapped coverage area shown in (b2).


In Eq. (1), the biological sample is considered a scatterer, characterized by the complex scattering potential $f\left( {\mathbf{r}} \right) = k_0^2\left[ {n^2\left( {\mathbf{r}} \right) - n_b^2} \right]$ with compact support within a finite volume $\Omega$. Here, $k_0 = 2\pi/\lambda$ represents the wave number in vacuum. The total field $u\left( {\mathbf{r}} \right)$ is the sum of the incident field $u_i\left( {\mathbf{r}} \right)$ and the scattered field $u_s\left( {\mathbf{r}} \right)$. The function $g\left( {\mathbf{r}} \right)$ refers to Green’s function, which satisfies the nonhomogeneous Helmholtz equation:

$$\left[ \nabla^2 + k^2 \right] g\left( \mathbf{r} \right) = -\delta\left( \mathbf{r} \right)$$
where $k = n_b k_0$ represents the wavenumber in the background medium. The function $\delta\left( \mathbf{r} \right)$ denotes the Dirac delta function, symbolizing a point source. The Green’s function $g\left( \mathbf{r} \right)$ characterizes the field generated by this point source and is consequently referred to as the point source response function. In the 3D scenario, a solution for Green’s function is expressed as $g\left( \mathbf{r} \right) = \frac{1}{4\pi}\frac{\exp\left( ik\left\| \mathbf{r} \right\| \right)}{\left\| \mathbf{r} \right\|}$, illustrating a spherical wave radiating outward from a point source. The integral term on the right-hand side of Eq. (1) represents the scattered field. Physically, this implies that $u_s\left( \mathbf{r} \right)$ results from the convolution of the so-called induced source $f\left( \mathbf{r} \right)u\left( \mathbf{r} \right)$ with the point source response function. The LS integral equation establishes a linear relationship between the incident field and the scattered field, as well as a nonlinear relationship between the scattering potential and the scattered field. Given the incident field and scattering potential, the scattered field can be accurately computed using the discretized LS equation [15]. Nevertheless, achieving an exact solution necessitates multiple iterations, incurring high computational time and storage costs. In many applications, a linear approximation of the LS equation is therefore favored, resulting in a linear representation of the scattering model under the widely adopted first-order Born approximation, commonly known as the Fourier diffraction theorem [10]:
$$\tilde{u}_s\left( k_x, k_y; z = z_D \right) = \frac{2\pi i}{k_z}\exp\left( i k_z z_D \right)\tilde{f}\left( \mathbf{k} - \mathbf{k}_i \right)$$
where $\tilde{u}_s\left( k_x, k_y; z = z_D \right) = \Im_{\textrm{2D}}\left\{ u_s\left( x, y; z = z_D \right) \right\}$ denotes the 2D Fourier transform of the scattered field $u_s\left( x, y; z = z_D \right)$ at a distance $z_D$ from the object. When measurements are conducted on the imaging plane, $z_D = 0$, simplifying the exponential term on the right-hand side of Eq. (3) to 1. $\tilde{f}\left( \mathbf{k} \right) = \Im_{\textrm{3D}}\left\{ f\left( \mathbf{r} \right) \right\}$ represents the 3D Fourier transform of the scattering potential. Here, $\mathbf{k} = \left( k_x, k_y, k_z \right)$ represents the Fourier-domain coordinates, satisfying $\left| \mathbf{k} \right| = k = \sqrt{k_x^2 + k_y^2 + k_z^2}$. $\mathbf{k}_i$ denotes the incident wave vector and is expressed as follows:
$${{\mathbf {k}}_i} = \left( {{k_{ix}},{k_{iy}},{k_{iz}}} \right) = k\left( {\sin {\theta _{ix}},\sin {\theta _{iy}},\sqrt {1 - {{\sin }^2}{\theta _{ix}} - {{\sin }^2}{\theta _{iy}}} } \right)$$

The variables $\theta_{ix}$ and $\theta_{iy}$ denote the complementary angles between the incident wave direction and the $x$-axis and $y$-axis, respectively. As $\theta_{ix}$ and $\theta_{iy}$ change, the vector $\mathbf{k}_i$ moves over a spherical surface of radius $k$, owing to the relationship given in Eq. (4). Furthermore, owing to the interrelation between Eq. (4) and the vector $\mathbf{k}$, the scattered field and the scattering potential exhibit a specific spherical mapping relationship in the Fourier domain. This sphere, called the Ewald sphere, has a radius of $k$ and a center located at $-\mathbf{k}_i$. As illustrated in Fig. 1(a1), the Fourier diffraction theorem indicates that a single scattering measurement provides information on only one spherical shell of the sample in the Fourier domain. Achieving a 3D reconstruction of the scattering potential $f\left( \mathbf{r} \right)$ therefore requires illuminating the sample from different angles and conducting multiple scattering measurements. Stitching the measured scattered-field information in the Fourier domain through this spherical mapping yields the sample’s Fourier-domain coverage, as demonstrated in Fig. 1(a2) and 1(a3). Figure 1(a2) displays the 2D stitching result on a cross-section and Fig. 1(a3) presents the 3D stitching result. Figure 1(a3) clearly shows that the shape after 3D stitching resembles an "apple" that cannot reach some of the low-frequency region, a phenomenon known as the "missing cone problem". This issue is inherent to schemes that vary the illumination angle. An alternative measurement method involves sample rotation. While effective in covering the low-frequency region, this method demands rotating the sample through all angles around two orthogonal axes, resulting in a time-consuming and challenging control process.
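As a concrete illustration of Eq. (4), the following minimal sketch (in NumPy; the function name is our own, not from the original work) returns the incident wave vector for a given pair of illumination angles:

```python
import numpy as np

def incident_wave_vector(theta_ix, theta_iy, k):
    """Incident wave vector of Eq. (4) for illumination angles
    (theta_ix, theta_iy); k is the wavenumber in the background medium.
    Assumes sin^2(theta_ix) + sin^2(theta_iy) <= 1 (a propagating wave)."""
    sx, sy = np.sin(theta_ix), np.sin(theta_iy)
    return k * np.array([sx, sy, np.sqrt(1.0 - sx**2 - sy**2)])
```

The returned vector lies on a sphere of radius $k$; the corresponding Ewald sphere is centered at its negative, $-\mathbf{k}_i$.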

Figure 1(a1)-(a3) assumes a system without an aperture and with an infinitely large detection plane, mapping each measured complex scattered field to a full hemisphere. Real measurement systems, as depicted in Fig. 1(b1), are limited by the optical aperture and the camera sensor size. In real systems, a low-pass filter $P\left( k_x, k_y \right)$ truncates the Fourier-domain information of the complex scattered field, causing each scattering measurement to map only part of the sphere. Consequently, the Fourier-domain coverage after stitching is smaller than in the ideal case, as seen by comparing Fig. 1(a2) - (a3) with Fig. 1(b2) - (b3). Furthermore, the actual measurement process often imposes limitations on the illumination angle, so a significant amount of frequency information within the envelope (indicated by the dashed line) is not captured, as shown in Fig. 1(b2). Insufficient frequency information, caused by the "missing cone problem" and limited illumination angles, distorts the 3D reconstructed image.

Next, the imaging process of LED array microscopy is introduced within the aforementioned theoretical framework. LED array microscopy replaces the illumination part of a commercial microscope with a programmable LED array. While the sample is illuminated by a single LED, the detector records an intensity-only image; by sequentially turning on each LED, measurements under multi-angle illumination are achieved. Assume the LED array contains $n_{\textrm{LED}} = n_x \times n_y$ uniformly distributed LEDs, with adjacent LEDs spaced at $d$ and the LED array placed at a distance $h$ from the sample. The 2D index $\left( i_x, i_y \right)$ denotes the position of each LED; its value counts the offset from the central LED, which is designated $\left( 0,0 \right)$. For the LED situated at $\left( i_x, i_y \right)$, the associated incident angles are ${\theta_{ix}} = \tan^{-1}\left( i_x d / h \right)$ and ${\theta_{iy}} = \tan^{-1}\left( i_y d / h \right)$, and the incident wave vector is then obtained using Eq. (4). The scattered-field information acquired by the detector originates from the back focal plane of the microscope objective, where the frequency-domain coordinates $\left( k_x, k_y \right)$ are the Fourier conjugates of the spatial-domain coordinates $\left( x, y \right)$. If the incident field is considered a constant background plane wave, the frequency-domain expression of the scattered field at the back focal plane, under LED illumination at $\left( i_x, i_y \right)$, can be determined using Eq. (3):

$$\tilde{u}_s^{\left( i_x, i_y \right)}\left( k_x, k_y \right) = \frac{2\pi i}{k_z}\tilde{f}\left( k_x - k_{ix}, k_y - k_{iy}, k_z - k_{iz} \right)$$

Equation (5) establishes the spherical mapping relationship connecting the 3D scattering potential to a single 2D scattering field measurement in the Fourier domain. Nevertheless, the aperture limitations of the objective lens necessitate the incorporation of a pupil function, denoted as $P\left ( {{k_x},{k_y}} \right )$, introducing low-pass filtering with a cutoff frequency of $k\cdot {\textrm {N}}{{\textrm {A}}_o}$, where ${\textrm {N}}{{\textrm {A}}_o}$ represents the numerical aperture of the objective lens. Ignoring the magnification term on the right-hand side of Eq. (5), the resultant simplified expression for the intensity-only image captured by the detector is as follows:

$${g^{\left( {{i_x},{i_y}} \right)}}\left( {x,y} \right) = {\left| {\Im _{2D}^{ - 1}\left[ {\tilde f\left( {{k_x} - {k_{ix}},{k_y} - {k_{iy}},{k_z} - {k_{iz}}} \right) \cdot P\left( {{k_x},{k_y}} \right)} \right]} \right|^2}$$
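In code, Eq. (6) can be sketched as follows (all names are illustrative), reusing incident_wave_vector from the sketch above; extract_ewald_shell is a hypothetical helper (one possible nearest-neighbor realization is sketched in Section 2.3) that returns the 2D slice $\tilde f\left( k_x - k_{ix}, k_y - k_{iy}, k_z - k_{iz} \right)$ on the detector frequency grid:

```python
import numpy as np

def intensity_image(f_hat, i_x, i_y, d, h, k, pupil, extract_ewald_shell):
    """Simulated intensity-only image for the LED at (i_x, i_y), per Eq. (6).

    f_hat : centered 3D Fourier spectrum of the scattering potential
    pupil : 2D pupil function P(k_x, k_y) with cutoff k * NA_o
    extract_ewald_shell : assumed helper returning the Ewald-sphere slice
        of f_hat shifted by the incident wave vector
    """
    theta_ix, theta_iy = np.arctan(i_x * d / h), np.arctan(i_y * d / h)
    k_i = incident_wave_vector(theta_ix, theta_iy, k)   # Eq. (4)
    shell = extract_ewald_shell(f_hat, k_i)             # Ewald-sphere slice
    field = np.fft.ifft2(np.fft.ifftshift(shell * pupil))
    return np.abs(field) ** 2                           # intensity-only
```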

In addition to the "missing cone problem" and limited measurement angles, the intensity-only measurements resulting from the simplified system setup also make it infeasible to directly stitch the sample information in the Fourier domain. The following section approaches the inverse scattering problem from an optimization perspective, incorporating prior knowledge of the samples into the inversion process to mitigate the impact of the aforementioned challenges.

2.2 Optimization model for the inverse scattering problem

In this section, the inverse scattering problem is considered from an optimization perspective. We consider the more general scenario of intensity-only measurements; inversion from complex scattered fields, as in holographic measurements, closely resembles this case, differing only slightly in how the gradient is computed. To start, a general mathematical framework is used to represent the forward scattering measurement process:

$${{\mathbf {y}}_j} = {\left| {{{\mathbf {A}}_j}{\mathbf {f}}} \right|^2} + {{\mathbf {\eta }}_j}$$
where ${\left| \cdot \right|^2}$ denotes the component-wise squared magnitude, signifying intensity-only measurement. The vector ${\mathbf{y}}_j \in \mathbb{R}^N$ represents the scattering measurement result, where the subscript $j$ indicates the $j$th measurement and $N$ is the number of discrete measurement points. Meanwhile, ${\mathbf{\eta}}_j \in \mathbb{R}^N$ stands for noise. The complex scattering potential, discretized in 3D space, is denoted ${\mathbf{f}} \in \mathbb{C}^M$, where $M$ signifies the number of discrete points. The scattering operator ${\mathbf{A}}_j \in \mathbb{C}^{N \times M}$ depends on the scattering model; for example, it can be obtained from the BPM or the discretized LS equation. In the case of the linearized LS equation with a finite aperture, ${\mathbf{A}}_j = \Im_{\textrm{2D}}^{-1} P {\mathbf{S}}_j \Im_{\textrm{3D}}\left\{ \cdot \right\}$, indicating the selection of data on the Ewald sphere within the scattering potential’s Fourier spectrum; this selected data is then modulated by the pupil function $P$, followed by an inverse 2D Fourier transform. Here, ${\mathbf{S}}_j$ represents the spherical mapping relation corresponding to the $j$th scattering measurement.
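For the linearized case, the composition $\Im_{\textrm{2D}}^{-1} P {\mathbf{S}}_j \Im_{\textrm{3D}}$ translates directly into a short sketch; here S_j stands for an assumed function performing the Ewald-sphere selection (e.g., built from the nearest-neighbor mapping of Section 2.3):

```python
import numpy as np

def apply_A_j(f, S_j, pupil):
    """Linearized scattering operator A_j = IFFT2 . P . S_j . FFT3 (a sketch)."""
    f_hat = np.fft.fftshift(np.fft.fftn(f))   # centered 3D spectrum
    shell = S_j(f_hat)                        # Ewald-sphere selection
    return np.fft.ifft2(np.fft.ifftshift(shell * pupil))

# Intensity measurement model of Eq. (7): y_j = |A_j f|^2 + eta_j
# y_j = np.abs(apply_A_j(f, S_j, pupil)) ** 2 + eta_j
```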

The mathematical model of the corresponding inverse scattering problem is formulated as an optimization problem:

$${\mathbf {\hat f}} \in \left\{ {\arg \min \left[ {{{\cal D}}\left( {\mathbf {f}} \right) + \tau {{\cal R}}\left( {\mathbf {f}} \right)} \right]} \right\}$$
where ${{\cal D}}\left( {\mathbf{f}} \right)$ denotes the data fidelity term, ${{\cal R}}\left( {\mathbf{f}} \right)$ represents the regularization term, and $\tau$ balances the two. The standard data fidelity term describes the difference between the actual measurements and the values calculated by the scattering model over $J$ scattering experiments:
$${{\cal D}}\left( {\mathbf{f}} \right) = \sum_{j = 1}^J {{{\cal D}}_j\left( {\mathbf{f}} \right)} = \sum_{j = 1}^J \frac{1}{2}\left\| {\left| {{\mathbf{A}}_j}{\mathbf{f}} \right|^2} - {{\mathbf{y}}_j} \right\|_2^2$$

And the regularization term is formulated as:

$${\cal R}\left( {\mathbf {f}} \right) = {I_{ \ge 0}}\left( {\mathbf {f}} \right) + {\left\| {\nabla {\mathbf {f}}} \right\|_{2,1}} = {I_{ \ge 0}}\left( {\mathbf {f}} \right) + \sum_{m = 1}^M {\sqrt {\sum_{d = 1}^D {\left( {{\partial _d}{\mathbf {f}}} \right)_m^2} } }$$
where ${I_{\ge 0}}\left( {\mathbf{f}} \right) = \left\{ 0,\ {\textrm{if}}\ {\mathbf{f}}_m \ge 0\ \forall m;\ +\infty,\ {\textrm{otherwise}} \right\}$ represents the non-negativity constraint and ${\left\| \nabla {\mathbf{f}} \right\|_{2,1}}$ is the TV constraint. ${\partial_d}$ denotes the gradient operator along the $d$th direction, and $D$ is the dimension. The regularization term in Eq. (10) encodes the prior knowledge that the sample is non-negative and smooth. Note that the optimization problem of Eq. (8) comprises two distinct components: a smooth part $\cal D$ and a non-smooth part $\cal R$. It can be solved using the proximal gradient method, which performs gradient descent on the smooth part and applies the proximity operator to the non-smooth part [14].
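In pseudocode terms, one iteration of this splitting is only two steps; in the sketch below, grad_D and prox_R are assumed handles for the gradient of the data term and the proximity operator of Eq. (10) (realized by the FGP algorithm of Appendix A), and gamma, tau are user-chosen step-size and regularization parameters:

```python
# One proximal-gradient iteration for Eq. (8) (illustrative sketch)
f = f - gamma * grad_D(f)     # gradient descent on the smooth data term D
f = prox_R(f, gamma * tau)    # proximity step on the non-smooth regularizer R
```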

2.3 Alternating projection combined with fast gradient projection (FGP-AP) method

In this section, the AP and FGP algorithms are combined to reconstruct the 3D RI distribution from intensity-only images obtained through an LED array microscope. The reconstruction is performed under the constraint of a regularization term, and we refer to the combined approach as the FGP-AP method. Specifically, the AP algorithm corresponds to a gradient descent process using intensity-only images, while the FGP algorithm applies the proximity operator of the regularization term. Introducing the FGP algorithm into alternating projection is akin to imposing the previously mentioned non-negativity and TV constraints on the 3D reconstruction process. The algorithmic steps of the FGP-AP method are outlined as follows.

Step 1: Initialize the Fourier spectrum of the scattering potential, denoted as $\tilde{f}_j\left( \mathbf{k} \right)$ with $j = 0$, using a 3D array. The size of this array in the $x$-$y$ direction is determined by the dimensions of the measured images and is typically several times larger to accommodate the resolution improvement. The number of layers in the $z$ direction is determined by the objective numerical aperture ${\textrm{NA}}_o$ and the maximum illumination numerical aperture ${\textrm{NA}}_i$ [20]. The maximum resolvable wave-vector range along the $z$ direction is $k_z^{\max} = k\left( 2 - \sqrt{1 - {\textrm{NA}}_o^2} - \sqrt{1 - {\textrm{NA}}_i^2} \right)$. Let $z_{\max}$ denote the imaging range in the $z$ direction, typically exceeding the sample thickness. The frequency interval in the $z$ direction is $\Delta k_z = 2\pi / z_{\max}$, and the ratio $k_z^{\max} / \Delta k_z$ gives the array size in the $z$ direction, which is also the number of slices. The 3D array can be cold-initialized with constant values, e.g., all zeros.
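As a worked example of this sizing, using the parameters later adopted in Section 3 (520 nm illumination, ${\textrm{NA}}_o = 0.3$, ${\textrm{NA}}_i = 0.368$, $z_{\max} = 100\ \mu m$) and assuming a background index of 1 for simplicity:

```python
import numpy as np

wavelength, n_b = 0.520, 1.0   # um; background index assumed 1 here
NA_o, NA_i = 0.3, 0.368
z_max = 100.0                  # um, axial imaging range

k = 2 * np.pi * n_b / wavelength                                # wavenumber
kz_max = k * (2 - np.sqrt(1 - NA_o**2) - np.sqrt(1 - NA_i**2))  # axial range
delta_kz = 2 * np.pi / z_max                                    # axial step
n_slices = int(kz_max / delta_kz)   # -> 22 slices, matching Section 3
```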

Step 2: Calculate the illumination wave vector ${\mathbf{k}}_i^{\left( i_x, i_y \right)}$ for the LED located at $\left( i_x, i_y \right)$ using Eq. (4). Utilize $-{\mathbf{k}}_i^{\left( i_x, i_y \right)}$ as the sphere center and $k$ as the radius to select values on the corresponding Ewald spherical shell in $\tilde{f}_j\left( \mathbf{k} \right)$. Apply low-pass filtering through the pupil function $P\left( k_x, k_y \right)$, and generate the corresponding scattered-field estimate ${\psi^{\left( i_x, i_y \right)}}\left( x, y \right)$ via the inverse 2D Fourier transform, expressed as:

$${\psi ^{\left( {{i_x},{i_y}} \right)}}\left( {x,y} \right) = \Im _{2D}^{ - 1}\left\{ {{{\tilde f}_j}\left( {{k_x} - {k_{ix}},{k_y} - {k_{iy}},{k_z} - {k_{iz}}} \right) \cdot P\left( {{k_x},{k_y}} \right)} \right\}$$

When the 3D discrete array is mapped to the 2D array through the Ewald sphere, the mapping coordinates of the two are matched by the nearest neighbor method.
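One possible nearest-neighbor implementation of this mapping (our sketch; an isotropic Fourier-domain voxel spacing dk is assumed) is:

```python
import numpy as np

def ewald_indices(kx, ky, k_i, k, dk):
    """Nearest-neighbor voxel offsets for the Ewald-shell mapping.

    kx, ky : 2D grids of detector-plane spatial frequencies
    k_i    : incident wave vector; k : wavenumber; dk : voxel spacing
    Returns integer offsets (relative to the spectrum center) and a mask
    selecting propagating frequencies (kx^2 + ky^2 <= k^2).
    """
    mask = kx**2 + ky**2 <= k**2
    kz = np.sqrt(np.maximum(k**2 - kx**2 - ky**2, 0.0))  # point on the sphere
    ix = np.rint((kx - k_i[0]) / dk).astype(int)         # f_hat is sampled
    iy = np.rint((ky - k_i[1]) / dk).astype(int)         # at k - k_i
    iz = np.rint((kz - k_i[2]) / dk).astype(int)
    return ix, iy, iz, mask
```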

Step 3: Perform gradient descent on the data fidelity term using the AP method. That is, the phase of ${\psi^{\left( i_x, i_y \right)}}\left( x, y \right)$ obtained from Eq. (11) is kept unchanged, while its amplitude is constrained to match the image ${g^{\left( i_x, i_y \right)}}\left( x, y \right)$ measured by the LED array microscope. The updated scattered-field estimate ${\phi^{\left( i_x, i_y \right)}}\left( x, y \right)$ is expressed as:

$${\phi ^{\left( {{i_x},{i_y}} \right)}}\left( {x,y} \right) = \sqrt {{g^{\left( {{i_x},{i_y}} \right)}}\left( {x,y} \right)} \frac{{{\psi ^{\left( {{i_x},{i_y}} \right)}}\left( {x,y} \right)}}{{\left| {{\psi ^{\left( {{i_x},{i_y}} \right)}}\left( {x,y} \right)} \right|}}$$
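Eq. (12) amounts to a single vectorized operation; in the sketch below, the small eps is a numerical guard against division by zero that we add for robustness, not part of Eq. (12):

```python
import numpy as np

def amplitude_constraint(psi, g, eps=1e-12):
    """Keep the phase of the field estimate psi and replace its amplitude
    with the square root of the measured intensity g (Eq. (12))."""
    return np.sqrt(g) * psi / (np.abs(psi) + eps)
```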

Step 4: The scattered-field estimate ${\phi^{\left( i_x, i_y \right)}}\left( x, y \right)$ in the spatial domain is converted to its Fourier-domain expression ${\tilde\phi^{\left( i_x, i_y \right)}}\left( k_x, k_y \right)$ using the 2D Fourier transform. The values of ${\tilde\phi^{\left( i_x, i_y \right)}}\left( k_x, k_y \right)$ then replace the corresponding values on the Ewald spherical shell of $\tilde{f}_j\left( \mathbf{k} \right)$, i.e., they refill the data extracted in Step 2.

Step 5: Repeat Steps 2-4 for all ${n_{\textrm{LED}}}$ measurements to complete one iteration. At the end of each iteration, the inverse 3D Fourier transform is applied to the updated $\tilde{f}_j\left( \mathbf{k} \right)$ to obtain its spatial-domain expression $f_j\left( \mathbf{x} \right)$. The 3D FGP algorithm is subsequently executed on $f_j\left( \mathbf{x} \right)$, and the result is denoted $f_{j+1}\left( \mathbf{x} \right)$. A detailed description of the 3D FGP algorithm is given in Appendix A. This is equivalent to adding TV and non-negativity regularization terms to the iteration, consistent with the prior knowledge that the sample is non-negative and smooth.

Step 6: Apply a 3D Fourier transform to $f_{j+1}\left( \mathbf{x} \right)$, yielding its Fourier spectrum $\tilde{f}_{j+1}\left( \mathbf{k} \right)$. The scattering potential $f\left( \mathbf{r} \right)$ is then obtained by iterating Steps 2-5 until a set number of iterations is reached or the convergence condition is satisfied. Finally, the 3D RI distribution $n\left( \mathbf{r} \right)$ is calculated from the relationship between $f\left( \mathbf{r} \right)$ and $n\left( \mathbf{r} \right)$.
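Putting Steps 2-6 together, one outer iteration of the FGP-AP method has roughly the following shape. This is a structural sketch only: extract_ewald_shell, insert_ewald_shell, and fgp_3d are assumed helpers standing for the Ewald-sphere read/write of Steps 2 and 4 and the Appendix A algorithm, and regularizing only the real part is a simplification appropriate for non-absorbing samples:

```python
import numpy as np

for it in range(n_iter):
    for (i_x, i_y) in led_positions:            # Steps 2-4 over all LEDs
        k_i = incident_wave_vector(np.arctan(i_x * d / h),
                                   np.arctan(i_y * d / h), k)
        shell = extract_ewald_shell(f_hat, k_i)              # Step 2
        psi = np.fft.ifft2(np.fft.ifftshift(shell * pupil))
        phi = amplitude_constraint(psi, images[(i_x, i_y)])  # Step 3, Eq. (12)
        insert_ewald_shell(f_hat, k_i,                       # Step 4
                           np.fft.fftshift(np.fft.fft2(phi)))
    f_spatial = np.fft.ifftn(f_hat)             # Step 5: to the space domain
    f_spatial = fgp_3d(f_spatial.real, tau)     # 3D FGP regularization
    f_hat = np.fft.fftn(f_spatial)              # Step 6: back to Fourier
```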

3. Simulation

To clarify the influence of the aforementioned issues, including the missing cone problem, limited illumination angles, and dependence on intensity-only measurements, on the accuracy of 3D reconstruction, and to assess the effectiveness of the proposed FGP-AP method, we conducted simulations involving the generation of intensity-only images and subsequent 3D reconstruction analysis. The forward imaging model of the LED array microscopy, detailed in Section 2.1, is used to generate a set of intensity-only images for multi-angle scattering measurements of the simulated sample, as shown in Fig. 2. These images are then used for the 3D reconstruction analysis.

Fig. 2. Generation of intensity-only simulated images in LED array microscopy. Three standard images are used to generate a simulated sample, and a set of intensity-only images illuminated at different angles is generated according to the forward imaging model in Section 2.1.


The simulation first defines the system parameters of the LED array microscope, consistent with the subsequent experimental setup. The light source is a programmable LED array with a wavelength of 520 $nm$, consisting of $15 \times 15$ LEDs for multi-angle illumination. The distance $d$ between adjacent LED elements is 2 $mm$, and the distance $h$ between the LED array and the sample is set to 50 $mm$. The objective lens has a magnification of $10\times$ and a numerical aperture of 0.3. The detector’s pixel size is 6.5 $\mu m$. Additionally, the axial imaging range $z_{\textrm{max}}$ is set to 100 $\mu m$. As mentioned above, the number of layers in the $z$-direction of the simulated sample is determined by $z_{\textrm{max}}$, ${\textrm{NA}}_o$, and the maximum illumination numerical aperture ${\textrm{NA}}_i$. Three standard images (house, cameraman, and aerial view) are used as the top, middle, and bottom layers of the simulated sample, respectively, and the remaining layers are generated through interpolation. The resulting simulated sample is depicted on the left in Fig. 2. Each LED then illuminates the simulated sample, generating a set of intensity-only images, as shown on the right in Fig. 2.

The LED array microscope conducts multiple scattering measurements by varying the illumination angle. The inherent "missing cone problem" results in the loss of low-frequency information in the Fourier domain during image generation and reconstruction, thereby reducing the accuracy of 3D reconstruction. In addition, each measurement covers only part of the Ewald spherical shell in the Fourier domain, and a finite number of measurements also leaves the information mapping in the 3D Fourier domain sparse. To explore whether an increase in the amount of measurement data improves the quality of 3D reconstruction, we simulate the generation of intensity-only images with different numbers of illuminating LEDs, then perform 3D reconstruction using the original AP method and analyze the reconstructed results.

The number of generated images is changed in two ways. First, the maximum illumination numerical aperture $\mathrm{NA}_i$ is maintained while the number of LEDs and the distance $h$ from the LED array to the sample are varied together. Second, $h$ is kept constant while the number of LEDs is changed, with $\mathrm{NA}_i$ adjusted accordingly. In the first case, using a reference configuration of $15 \times 15$ LEDs with $h$ set at 50 $mm$, which keeps the corresponding $\mathrm{NA}_i$ at 0.368, different sets of intensity-only images are generated by varying the number of LEDs and $h$. Subsequently, 3D reconstruction is conducted using the original AP method, and the results for three specific layers are analyzed through the structural similarity (SSIM) index. The results are presented in columns 3 to 5 of Table 1. SSIM evaluates the similarity between two images, considering factors such as luminance, contrast, and image structure. The calculation formula for SSIM is expressed as follows:

$$\text{SSIM}\left( {X,Y} \right) = \frac{{\left( {2{\mu _X}{\mu _Y} + {C_1}} \right)\left( {2{\sigma _{XY}} + {C_2}} \right)}}{{\left( {\mu _X^2 + \mu _Y^2 + {C_1}} \right)\left( {\sigma _X^2 + \sigma _Y^2 + {C_2}} \right)}}$$
where $\mu_X$ and $\mu_Y$ denote the means of images $X$ and $Y$, respectively; $\sigma_X^2$ and $\sigma_Y^2$ represent their variances; $\sigma_{XY}$ indicates the covariance between $X$ and $Y$; and $C_1$, $C_2$ are constants. The SSIM value, ranging from 0 to 1, is higher when two images exhibit more similar structural information.
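For reference, both SSIM and the RMSE metric used later in this section can be computed with standard tools; the sketch below relies on scikit-image's structural_similarity implementation:

```python
import numpy as np
from skimage.metrics import structural_similarity

def layer_metrics(recon, truth):
    """SSIM (Eq. (13)) and RMSE between a reconstructed layer and the
    corresponding standard image."""
    s = structural_similarity(truth, recon,
                              data_range=truth.max() - truth.min())
    rmse = np.sqrt(np.mean((recon - truth) ** 2))
    return s, rmse
```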

Table 1. Comparison of different LED numbers while keeping ${\textrm{NA}}_i$ constant.

These columns show that when the amount of data is small, the SSIM values for the three specific layers are all low. Increasing the amount of data raises the SSIM values to some extent, but further increases do not yield higher SSIM values. Computation time, data coverage, and the Fourier-domain mapping results in the $x$-$z$ plane are also detailed in Table 1. The data coverage is computed by enclosing all Fourier-domain information within the smallest cube and calculating the ratio of mapped data points to the total points in the cube. Since ${\textrm{NA}}_i$ remains constant, maintaining 22 sample layers, the cube's boundaries remain unaltered, so increasing the number of LEDs introduces more data into the cube and thereby raises the Fourier-domain data coverage. However, column 7 reveals that as the amount of data grows beyond a certain point, the growth in data coverage decelerates. The last column in Table 1, showing the $x$-$z$ section of the Fourier-domain mapping result, suggests that this deceleration is attributable to the missing cone area in the Fourier domain. Beyond that point, increasing the number of LEDs fails to raise the data coverage or the reconstruction quality, while still increasing the reconstruction computation time.

Consider the other case, in which $h$ is kept constant at 50 $mm$ and the number of LEDs is varied, causing a corresponding adjustment in ${\textrm{NA}}_i$ and the number of sample layers. The original AP method is employed for 3D reconstruction, and the SSIM values for the three specific layers are computed and shown in columns 4 to 6 of Table 2. Notably, the trend in SSIM values with varying LED numbers mirrors the pattern observed in Table 1. For a small number of LEDs, the SSIM values remain low and increase as the number of LEDs rises; however, continued escalation in the number of LEDs results in a subsequent decrease in SSIM values. This incremental LED addition strategy adds illumination at larger angles while maintaining the positions of the inner LEDs. With ${\textrm{NA}}_o$ fixed at 0.3, the ${\textrm{NA}}_i$ for the $7 \times 7$ and $11 \times 11$ configurations falls below 0.3, indicating that the bright-field region is not fully covered. As the number of LEDs increases, the number of sample layers also grows, expanding the range of the Fourier-domain mapping, specifically the "apple core" area. Consequently, despite the increased data quantity, unfilled gaps persist, as evidenced by the data coverage calculations. Meanwhile, as in Table 1, the reconstruction times in Table 2 are roughly proportional to the amount of data.

Table 2. Comparison of different LED numbers while keeping the height constant.

The above simulations demonstrate that increasing the illumination angles enlarges the amount of data, thereby enhancing the information mapping in the Fourier domain and improving the quality of 3D reconstruction within a certain range. Beyond this range, however, the computation time increases while the reconstruction quality no longer improves.

When increasing the amount of data cannot reduce the impact of the missing cone problem and the limited illumination angles on the quality of 3D reconstruction, prior knowledge of the samples is added to the reconstruction algorithm to test whether the proposed FGP-AP method can improve reconstruction quality. To validate the efficacy of the FGP-AP method, 225 images are generated with $15 \times 15$ LEDs and $h$ of 50 $mm$. Then, the original AP method and the FGP-AP method are employed for comparison. In addition, considering the possible measurement noise in an actual system, additive white Gaussian noise with a standard deviation of 0.01 is added to the 225 images to compare the reconstruction results under noisy conditions. The three specific layers of the reconstruction results are depicted in Fig. 3, where Fig. 3(a1) - (a3) are the three standard images adopted by the simulated sample. Figure 3(b1) - (b3) and Fig. 3(c1) - (c3) showcase the reconstruction results of the original AP method and the FGP-AP method using noise-free images, respectively, while Fig. 3(d1) - (d3) and Fig. 3(e1) - (e3) are the reconstruction results of the two methods for noisy images. The SSIM and the root mean square error (RMSE) between the standard images and the reconstructed results are provided below them. RMSE quantifies the difference between two images; a smaller value implies greater similarity.

Fig. 3. Comparison of reconstruction results between the original AP method and the FGP-AP method. The assessment involves the utilization of 225 simulated intensity-only images generated with $15 \times 15$ LEDs and $h$ of 50 $mm$ for the reconstruction process. (a1) - (a3) Three standard images used for different layers of the simulated sample. (b1) - (b3) Reconstruction results of the original AP method without noise. (c1) - (c3) Reconstruction results of the FGP-AP method without noise. (d1) - (d3) Reconstruction results of the original AP method with noise. (e1) - (e3) Reconstruction results of the FGP-AP method with noise.


Comparing the RMSE values shows that the middle layer always has the lowest RMSE, indicating that its reconstruction is consistently better than that of the other layers. However, the conclusion drawn from RMSE differs slightly from direct observation: Fig. 3(e1) is visibly noisier than Fig. 3(c1), yet its RMSE (0.134) is lower than that of Fig. 3(c1) (0.146). Comparing the reconstruction results of the original AP method and the FGP-AP method for noise-free images reveals obvious horizontal lines in the original AP method's results. The FGP-AP method removes these artifacts, leading to increased SSIM and decreased RMSE, indicating an improvement in reconstruction quality. At the same time, the SSIM values of the FGP-AP method's reconstructions can be compared with those obtained by increasing the amount of data: the FGP-AP values (0.800, 0.818, and 0.825) are close to or even exceed the maximum SSIM values in Table 1 (0.776, 0.830, and 0.825) and Table 2 (0.759, 0.820, and 0.803). This observation suggests that the proposed FGP-AP method achieves a reconstruction improvement similar to or better than that achievable by increasing the amount of data, likely because increasing the amount of data still leaves missing cone information unfilled in the Fourier domain. The additional FGP step in the FGP-AP method performs an inverse Fourier transform on the Fourier-domain mapping result during each iteration, applies spatial-domain operations based on the prior knowledge of sample non-negativity and smoothness, and then returns to the Fourier domain for the next iteration. This combined process of inverse Fourier transform, spatial-domain operations, and Fourier transform can be interpreted as compensating for the gaps in the missing cone area of the Fourier domain, thereby mitigating the impact of the missing cone to some extent. In addition, a comparison of the reconstruction times of the two methods shows that the original AP method requires 12.4 seconds while the FGP-AP method requires 14.3 seconds. The computation time of the FGP-AP algorithm thus does not increase significantly; indeed, the extra time incurred by increasing the amount of data exceeds that of the additional FGP step.

For noisy images, the original AP method produces obvious noisy points in the reconstructed results. Because the FGP-AP method incorporates the sample smoothness prior, these noisy points are removed, and the reconstruction is visibly better than that of the original AP method. The SSIM values also increase substantially, indicating that the FGP-AP method is robust to noise in the system.

Fig. 4. Comparison of SSIM values for the reconstruction results obtained from different simulated samples. (a) - (f) Exchange the order of standard images to generate 6 different simulated samples. The SSIM values of reconstructed results by the original AP method and the FGP-AP method with or without noise are compared at each layer.


To assess the effectiveness of the proposed method under varying sample structures, the three standard images corresponding to the top layer, the middle layer and the bottom layer of the simulation sample are changed in order and re-interpolated to generate six simulated samples with different structures. Following this, the above steps are applied to generate intensity-only images, from which 3D reconstruction is performed. Similarly, the reconstruction results of the original AP method and the FGP-AP method are analyzed in the case of images with and without noise, and the SSIM values of each layer are calculated and plotted in Fig. 4. The sequence number below each image represents the permutation order of the three standard images in the simulation sample, ① is house, ② is camera man, and ③ is aerial view image.

The results in Fig. 4 reveal that, across the reconstructions of simulated samples with varied structures, the FGP-AP method enhances the SSIM values, thereby improving reconstruction quality. In particular, for noisy images the improvement offered by the FGP-AP method over the original AP method is more pronounced. Additionally, within each sample, the improvement for the layer corresponding to image ③ is not as substantial as for the other layers. This is likely because image ③, the aerial view, contains many sharp edges, whereas the FGP-AP method is based on the prior of sample smoothness; hence, although the reconstruction of this layer also improves, the improvement is less pronounced than for the other layers. This indicates that the FGP-AP method is best suited to samples with smooth structures; since most biological samples satisfy this condition, the FGP-AP method is well suited to the reconstruction of biological samples.

4. Experiment

To evaluate the effectiveness of the proposed FGP-AP method, this section conducts verification on experimental data obtained in two ways: one uses the public Fourier Ptychography dataset [34], and the other uses a self-built LED array microscope to capture multi-angle illumination intensity images of a specified sample. For the public dataset, the imaging system parameters are as follows: the number of LEDs is $15 \times 15$; the distance between adjacent LEDs is 4 $mm$; the distance between the LED array and the sample is 90.88 $mm$; the NA and magnification of the objective lens are 0.1 and $2\times$, respectively; and the original image pixel size is 1.845 $\mu m$. The multi-angle green-light illumination images of mouse kidney cells are selected, with an illumination wavelength of 532 $nm$. Using the above methods for 3D reconstruction, the results are depicted in Fig. 5, and the complete 3D results are shown in Visualization 1.

Fig. 5. 3D reconstruction results from public dataset images of mouse kidney cells. (a1) 3D RI imaging rendering results by original AP method. (a2) - (a5) Reconstruction results by original AP method at layer 1, 8, 11 and 12, respectively. (a6) The mapping results of sample information in the Fourier domain ($x-z$ cross section) by original AP method. (b1) 3D RI imaging rendering results by FGP-AP method. (b2) - (b5) Reconstruction results by FGP-AP method at layer 1, 8, 11 and 12, respectively. (b6) The mapping results of sample information in the Fourier domain ($x-z$ cross section) by FGP-AP method.


During the reconstruction process, the axial imaging range $z_{\text{max}}$ is set to 100 $\mu m$, which determines the number of sample layers in the $z$-direction as 16 based on $z_{\text{max}}$, ${\text{NA}}_o$, and ${\text{NA}}_i$. The AP and FGP-AP methods are employed for the reconstruction, with Fig. 5(a1) and Fig. 5(b1) displaying the 3D RI rendering results. Figure 5(a2) - (a5) show the RI values of layers 1, 8, 11, and 12 reconstructed by the AP method, and Fig. 5(b2) - (b5) present the corresponding layers reconstructed by the FGP-AP method. A noticeable improvement in contrast and clarity is observed in the FGP-AP results, making cells more distinguishable, particularly evident when comparing Fig. 5(b3) - (b5) with Fig. 5(a3) - (a5). In comparison, the RI images reconstructed by the AP method are relatively blurry, with lower contrast between the sample and the background. In addition, for layer 1 (Fig. 5(a2) and 5(b2)), the AP result contains no obvious sample, yet its RI values are still higher than those of the background medium; in contrast, the FGP-AP result correctly indicates that there is no sample in this layer, with RI values equal to that of the background medium. Generally, since the sample is located within the $z$-direction imaging range, the middle layers should display an obvious sample image, while the outer layers without sample should have RI values equal to that of the background medium. The reconstruction results of the FGP-AP method clearly show this difference between the middle and outer layers, while the difference in the AP results is not obvious. Furthermore, Fig. 5(a6) and 5(b6) show the Fourier-domain mapping results in the $x$-$z$ section. Figure 5(a6) reveals that numerous points in the Fourier domain remain unmapped when the AP method is employed, potentially contributing to the lower quality of the reconstructed image. In contrast, Fig. 5(b6) illustrates that the additional FGP step in the FGP-AP method utilizes the spatial-domain prior of sample smoothness to fill the unmapped points in the Fourier domain, thereby enhancing the quality of the reconstructed image.

To demonstrate the lateral resolution achieved by the proposed method on raw images, the USAF target images from the public dataset above are used for 3D reconstruction. The system parameters remain unchanged except that the LED wavelength is 630 $nm$. Figure 6(a) is the raw image of the USAF target, and Fig. 6(b) is the amplitude result reconstructed by the FPM method. Figure 6(c1) and Fig. 6(d1) show the 3D rendering results obtained by the AP method and the FGP-AP method, respectively, with a total of 14 layers in the $z$-direction. To prevent the outer layers from occluding the middle layers during rendering, an appropriate data range (shown by the colorbar) is selected for display; values outside this range are rendered colorless and transparent. Figure 6(c2) - (c4) show the results at layers 1, 4, and 8 obtained by the AP method, and Fig. 6(d2) - (d4) show the corresponding FGP-AP results. Figure 6(c4) and Fig. 6(d4) show that both the AP and FGP-AP methods reach the lateral resolution of the FPM method (Fig. 6(b)) in the middle layer, with both resolving the group 9, element 2 line pairs. Comparing Fig. 6(c2) - (c3) with Fig. 6(d2) - (d3), the target is still visible in the outer layers with the AP method, whereas the FGP-AP method shows no target there. Since the USAF target should appear only in the middle layers and not in the outer layers, the 3D imaging performance of the FGP-AP method is better.

Fig. 6. 3D reconstruction results from public dataset images of USAF target. (a) Raw image of the USAF target. (b) Amplitude results of FPM method. (c1) 3D imaging rendering results by original AP method. (c2) - (c4) Reconstruction results by original AP method at layer 1, 4, and 8, respectively. (d1) 3D imaging rendering results by FGP-AP method. (d2) - (d4) Reconstruction results by FGP-AP method at layer 1, 4, and 8, respectively.


Moreover, label-free blood cells are imaged using a self-built LED array microscope, and 3D reconstruction is performed with the above methods. The experimental system is a modification of a commercial inverted microscope in which the light source is replaced with a programmable LED array controlled by an Arduino (Arduino Mega 2560). Each LED element is illuminated sequentially, and a scientific CMOS (sCMOS) camera (Dhyana 400BSI V2, Tucsen Photonics Co., Ltd, Fujian, China) with a pixel size of 6.5 $\mu m$ is synchronously triggered to record sample images. The system parameters match those described in Section 3: the magnification of the objective lens is $10\times$ and its NA is 0.3; the distance between adjacent LED elements is 2 $mm$; and the distance between the LED array and the sample is set to 50 $mm$. Green LEDs, with a central wavelength of 520 $nm$ confirmed by a spectrometer, are employed for illumination. Systematic errors caused by the LED array can seriously affect 3D reconstruction results, so before running the 3D reconstruction algorithm, our previous work [35] is used to correct the multi-parameter errors in the system. The axial imaging range $z_{\text{max}}$ during reconstruction is also set to 100 $\mu m$, and the number of sample layers in the $z$-direction is determined to be 24 by $z_{\text{max}}$, $\text{NA}_o$, and $\text{NA}_i$. The 3D reconstruction results of the AP and FGP-AP methods are shown in Fig. 7.

Fig. 7. 3D reconstruction results for label-free blood cell images captured by a self-built LED array microscope. (a1) 3D RI imaging rendering results by original AP method. (a2) - (a5) Reconstruction results by original AP method at layer 1, 8, 12 and 13, respectively. (b1) 3D RI imaging rendering results by FGP-AP method. (b2) - (b5) Reconstruction results by FGP-AP method at layer 1, 8, 12 and 13, respectively.


Figure 7(a1) displays the 3D RI rendering results of the AP method; as before, an appropriate data range is selected as the threshold when rendering the 3D results. Figure 7(a2) - (a5) show layers 1, 8, 12, and 13 of the sample reconstructed by the AP method, while Fig. 7(b1) - (b5) exhibit the corresponding results of the FGP-AP method. The complete 3D results are also shown in Visualization 1. Figure 7 shows that the behavior of the two methods across layers is similar to that in Fig. 5. The AP reconstruction shows RI variations in every layer (Fig. 7(a2) - (a5)), whereas the FGP-AP reconstruction shows RI variations only in the middle layers (Fig. 7(b4) and 7(b5)); in the outer layers, the RI does not change significantly (Fig. 7(b2) and 7(b3)). In addition, the contrast between sample and background in the middle layers is higher for the FGP-AP method than for the AP method, so intuitively the 3D imaging performance of the FGP-AP method is better.

5. Conclusion

In this paper, the FGP-AP method is proposed for 3D RI reconstruction from intensity-only images collected by LED array microscopy. The AP algorithm plays the role of gradient descent and enables 3D reconstruction from intensity-only measurements, while the FGP algorithm adds prior knowledge of non-negativity and smoothness to the 3D reconstruction process to compensate for the incomplete Fourier-domain information mapping caused by the inherent "missing cone problem" in ODT and the limited illumination angles. In the simulation section, by gradually increasing the number of LEDs used with the original AP method, it is found that increasing the illumination angles can improve the reconstruction quality when the amount of data is small. However, once the measurement data reach a certain level, the inherent "missing cone problem" makes it impossible to map some low-frequency information, and increasing the illumination angles no longer improves the reconstruction quality. The FGP-AP method utilizes prior knowledge of the samples to fill in regions that cannot be mapped in the Fourier domain, improving the quality of 3D reconstruction. 3D reconstruction under noisy conditions is also simulated, demonstrating the robustness of the FGP-AP method. For the experimental part, we utilized mouse kidney cell images and USAF target images from public datasets, as well as label-free blood cell images obtained through a self-built LED array microscope, for 3D reconstruction. The FGP-AP method again achieved better 3D imaging results.

The simulation part of this paper aims to illustrate the influence of the incomplete Fourier-domain information caused by the "missing cone" and limited illumination angles on the reconstruction results. Therefore, the multi-angle illumination intensity images are generated with the forward imaging model in Section 2.1; more accurate scattering models could be adopted to simulate more realistic images. In addition, the theoretical foundation for 3D reconstruction in this paper is the linearized inverse scattering model. While the linearized model offers fast computation, it poses limitations when applied to thicker samples. To address this, the proposed FGP-AP method can be extended to more accurate nonlinear inverse scattering models.

A. The 3D FGP algorithm

In this appendix, we describe in detail the 3D FGP algorithm used in this paper. The algorithm is an extension of the 2D FGP algorithm [32], rooted in regularized image processing. The notation is defined as follows (a minimal NumPy sketch of these operators appears after the list):

  • $\cal P$ is the set of matrix triples $\left ( {{\mathbf {p}},{\mathbf {q}},{\mathbf {r}}} \right )$, where ${\mathbf {p}},{\mathbf {q}},{\mathbf {r}} \in \mathbb {R} {^{m \times n \times l}}$.
  • The linear operator ${\cal L}: \mathbb {R} {^{m \times n \times l}} \times \mathbb {R} {^{m \times n \times l}} \times \mathbb {R} {^{m \times n \times l}} \to \mathbb {R} {^{m \times n \times l}}$ is defined by:
    $$ {\cal L}{\left( {{\mathbf {p}},{\mathbf {q}},{\mathbf {r}}} \right)_{i,j,k}} = {p_{i,j,k}} + {q_{i,j,k}} + {r_{i,j,k}} - {p_{i - 1,j,k}} - {q_{i,j - 1,k}} - {r_{i,j,k - 1}} $$
    where $i = 1, \ldots ,m$; $j = 1, \ldots ,n$; $k = 1, \ldots ,l$, with the periodicity assumptions ${p_{0,j,k}} = {p_{m,j,k}}$, ${q_{i,0,k}} = {q_{i,n,k}}$, and ${r_{i,j,0}} = {r_{i,j,l}}$.
  • The operator ${\cal L}^T: \mathbb {R} {^{m \times n \times l}} \to \mathbb {R} {^{m \times n \times l}} \times \mathbb {R} {^{m \times n \times l}} \times \mathbb {R} {^{m \times n \times l}}$ is adjoint to $\cal L$:
    $$ {\cal L}^T\left( {\mathbf {x}} \right) = \left( {{\mathbf {p}},{\mathbf {q}},{\mathbf {r}}} \right) $$
    where ${\mathbf {p}},{\mathbf {q}},{\mathbf {r}} \in \mathbb {R} {^{m \times n \times l}}$ are the matrices defined by:
    $$\begin{aligned} p_{i,j,k} &= x_{i,j,k} - x_{i+1,j,k}, \quad & i &= 0, \ldots ,m-1;\; j = 1, \ldots ,n;\; k = 1, \ldots ,l\\ q_{i,j,k} &= x_{i,j,k} - x_{i,j+1,k}, \quad & i &= 1, \ldots ,m;\; j = 0, \ldots ,n-1;\; k = 1, \ldots ,l\\ r_{i,j,k} &= x_{i,j,k} - x_{i,j,k+1}, \quad & i &= 1, \ldots ,m;\; j = 1, \ldots ,n;\; k = 0, \ldots ,l-1 \end{aligned}$$
  • ${P_C}$ is the orthogonal projection operator on the set $C$. When $C = \left [ {lb,ub} \right ]$, then ${P_C}$ is given by:
    $$ {P_C}{\left( {\mathbf {x}} \right)_{i,j,k}} = \begin{cases} lb, & {x_{i,j,k}} < lb\\ {x_{i,j,k}}, & lb \le {x_{i,j,k}} \le ub\\ ub, & ub < {x_{i,j,k}} \end{cases} $$
  • The projection $\left ( {{\mathbf {p'}},{\mathbf {q'}},{\mathbf {r'}}} \right ) = {P_{\cal P}}\left ( {{\mathbf {p}},{\mathbf {q}},{\mathbf {r}}} \right )$ on the set $\cal P$ is given by:
    $$\begin{aligned} {{p'}_{i,j,k}} &= \frac{{{p_{i,j,k}}}}{{\max \left\{ {1,\sqrt {p_{i,j,k}^2 + q_{i,j,k}^2 + r_{i,j,k}^2} } \right\}}}\\ {{q'}_{i,j,k}} &= \frac{{{q_{i,j,k}}}}{{\max \left\{ {1,\sqrt {p_{i,j,k}^2 + q_{i,j,k}^2 + r_{i,j,k}^2} } \right\}}}\\ {{r'}_{i,j,k}} &= \frac{{{r_{i,j,k}}}}{{\max \left\{ {1,\sqrt {p_{i,j,k}^2 + q_{i,j,k}^2 + r_{i,j,k}^2} } \right\}}} \end{aligned}$$
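
The following is a minimal NumPy sketch of these four operators (an illustration under the periodicity assumptions above, not the authors' code); `np.roll` implements the periodic boundary conditions.

```python
import numpy as np

def L(p, q, r):
    """Linear operator cal-L: (p, q, r) -> x, with periodic boundaries."""
    return (p - np.roll(p, 1, axis=0)
            + q - np.roll(q, 1, axis=1)
            + r - np.roll(r, 1, axis=2))

def LT(x):
    """Adjoint operator cal-L^T: x -> (p, q, r) via forward differences."""
    p = x - np.roll(x, -1, axis=0)
    q = x - np.roll(x, -1, axis=1)
    r = x - np.roll(x, -1, axis=2)
    return p, q, r

def P_C(x, lb=0.0, ub=np.inf):
    """Orthogonal projection onto the box C = [lb, ub]; lb = 0, ub = +inf
    gives the non-negativity constraint used in the paper."""
    return np.clip(x, lb, ub)

def P_P(p, q, r):
    """Projection onto the set P: rescale so that the per-voxel Euclidean
    norm of (p, q, r) does not exceed 1."""
    norm = np.maximum(1.0, np.sqrt(p**2 + q**2 + r**2))
    return p / norm, q / norm, r / norm
```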

Subsequently, the implementation of the 3D FGP algorithm is shown in Algorithm 1.


Algorithm 1. The 3D FGP algorithm

Here, the linear operator $\cal L$, its adjoint ${\cal L}^T$, and the projection operators ${P_{\cal P}}$ and ${P_C}$ together amount to imposing a total variation (TV) constraint on the scattering potential $\textbf {f}$; choosing the set $C = \left [ 0,+\infty \right )$ additionally imposes the non-negativity constraint. The parameter $\tau$ controls the weight of the regularization term and is set to 0.05 in this paper.
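
Since Algorithm 1 is presented as a table, a minimal sketch of the iteration is given below for concreteness. It follows the accelerated structure of the 2D FGP algorithm in [32], reuses the operator sketch above, and assumes a dual step size of $1/(12\tau)$ (the 3D analogue of the $1/(8\lambda)$ step in 2D, since $\left \| {\cal L} \right \|^2 \le 12$ in three dimensions); it illustrates the pattern rather than transcribing the authors' exact Algorithm 1.

```python
import numpy as np

def fgp_3d(b, tau=0.05, n_iter=20, lb=0.0, ub=np.inf):
    """TV-regularized denoising: min_x 0.5*||x - b||^2 + tau*TV(x),
    subject to lb <= x <= ub. Uses L, LT, P_C, P_P sketched above."""
    p, q, r = (np.zeros_like(b) for _ in range(3))  # dual variables
    rp, rq, rr = p, q, r                            # accelerated (momentum) point
    t = 1.0
    for _ in range(n_iter):
        # Projected dual gradient step.
        x = P_C(b - tau * L(rp, rq, rr), lb, ub)
        gp, gq, gr = LT(x)
        p_new, q_new, r_new = P_P(rp + gp / (12.0 * tau),
                                  rq + gq / (12.0 * tau),
                                  rr + gr / (12.0 * tau))
        # Nesterov momentum update on the dual variables.
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t**2)) / 2.0
        rp = p_new + (t - 1.0) / t_new * (p_new - p)
        rq = q_new + (t - 1.0) / t_new * (q_new - q)
        rr = r_new + (t - 1.0) / t_new * (r_new - r)
        p, q, r, t = p_new, q_new, r_new, t_new
    # Recover the primal solution from the final dual variables.
    return P_C(b - tau * L(p, q, r), lb, ub)
```

In the FGP-AP method, a call such as `fgp_3d(f_ap, tau=0.05)` (where `f_ap` is a hypothetical name for the current AP estimate of the scattering potential) would play the regularization role after the AP gradient-descent step, with `lb = 0` enforcing non-negativity.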

Funding

National Natural Science Foundation of China (62375214).

Acknowledgments

This study was funded by the National Natural Science Foundation of China (Grant No. 62375214). The authors would like to thank the reviewers and the associate editor for their comments that contributed to meaningful improvements in this paper.

Disclosures

The authors declare no conflicts of interest.

Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

References

1. A. D. Elliott, “Confocal microscopy: principles and modern practices,” Curr. Protoc. Cytom. 92(1), e68 (2020). [CrossRef]  

2. Z. Yang, L. Zhang, N. Lv, et al., “Accurate 3d morphological computational model reconstruction of suspended cells imaged through stratified media by the precise depth-varying point spread function method,” Opt. Express 30(15), 27539–27559 (2022). [CrossRef]  

3. O. E. Olarte, J. Andilla, E. J. Gualda, et al., “Light-sheet microscopy: a tutorial,” Adv. Opt. Photonics 10(1), 111–179 (2018). [CrossRef]  

4. S. W. Hell and J. Wichmann, “Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy,” Opt. Lett. 19(11), 780–782 (1994). [CrossRef]  

5. E. Betzig, G. H. Patterson, R. Sougrat, et al., “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313(5793), 1642–1645 (2006). [CrossRef]  

6. M. J. Rust, M. Bates, and X. Zhuang, “Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM),” Nat. Methods 3(10), 793–796 (2006). [CrossRef]  

7. P. Y. Liu, L. K. Chin, W. Ser, et al., “Cell refractive index for cell biology and disease diagnosis: past, present and future,” Lab Chip 16(4), 634–644 (2016). [CrossRef]  

8. T. Zhang, C. Godavarthi, P. C. Chaumet, et al., “Far-field diffraction microscopy at λ/10 resolution,” Optica 3(6), 609–612 (2016). [CrossRef]  

9. Z. Yang, L. Zhang, N. Lü, et al., “Progress of three-dimensional, label-free quantitative imaging of refractive index in biological samples,” Chin. J. Laser 49, 0507201 (2022). [CrossRef]  

10. E. Wolf, “Three-dimensional structure determination of semi-transparent objects from holographic data,” Opt. Commun. 1(4), 153–156 (1969). [CrossRef]  

11. A. Devaney, “Inverse-scattering theory within the rytov approximation,” Opt. Lett. 6(8), 374–376 (1981). [CrossRef]  

12. J. Lim, A. B. Ayoub, E. E. Antoine, et al., “High-fidelity optical diffraction tomography of multiple scattering samples,” Light: Sci. Appl. 8(1), 82 (2019). [CrossRef]  

13. A. Abubakar and P. M. van den Berg, “The contrast source inversion method for location and shape reconstructions,” Inverse Problems 18(2), 495–510 (2002). [CrossRef]  

14. E. Soubies, T.-A. Pham, and M. Unser, “Efficient inversion of multiple-scattering model for optical diffraction tomography,” Opt. Express 25(18), 21786–21800 (2017). [CrossRef]  

15. T.-A. Pham, E. Soubies, A. Ayoub, et al., “Three-dimensional optical diffraction tomography with Lippmann-Schwinger model,” IEEE Trans. Comput. Imaging 6, 727–738 (2020). [CrossRef]  

16. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013). [CrossRef]  

17. R. Claveau, P. Manescu, M. Elmi, et al., “Digital refocusing and extended depth of field reconstruction in fourier ptychographic microscopy,” Biomed. Opt. Express 11(1), 215–226 (2020). [CrossRef]  

18. L. Tian and L. Waller, “3d intensity and phase imaging from light field measurements in an led array microscope,” Optica 2(2), 104–111 (2015). [CrossRef]  

19. M. Chen, D. Ren, H.-Y. Liu, et al., “Multi-layer born multiple-scattering model for 3d phase microscopy,” Optica 7(5), 394–403 (2020). [CrossRef]  

20. R. Horstmeyer, J. Chung, X. Ou, et al., “Diffraction tomography with Fourier ptychography,” Optica 3(8), 827–835 (2016). [CrossRef]  

21. K. C. Zhou and R. Horstmeyer, “Diffraction tomography with a deep image prior,” Opt. Express 28(9), 12872–12896 (2020). [CrossRef]  

22. C. Zuo, J. Sun, J. Li, et al., “Wide-field high-resolution 3d microscopy with Fourier ptychographic diffraction tomography,” Opt. Lasers Eng. 128, 106003 (2020). [CrossRef]  

23. J. Li, N. Zhou, J. Sun, et al., “Transport of intensity diffraction tomography with non-interferometric synthetic aperture for three-dimensional label-free microscopy,” Light: Sci. Appl. 11(1), 154 (2022). [CrossRef]  

24. J. Lim, K. Lee, K. H. Jin, et al., “Comparative study of iterative reconstruction algorithms for missing cone problems in optical diffraction tomography,” Opt. Express 23(13), 16933–16948 (2015). [CrossRef]  

25. U. S. Kamilov, I. N. Papadopoulos, M. H. Shoreh, et al., “Optical tomographic image reconstruction based on beam propagation and sparse regularization,” IEEE Trans. Comput. Imaging 2(1), 59–70 (2016). [CrossRef]  

26. J. M. Long, J. Y. Chun, and T. K. Gaylord, “ADMM approach for efficient iterative tomographic deconvolution reconstruction of 3d quantitative phase images,” Appl. Opt. 60(27), 8485–8492 (2021). [CrossRef]  

27. Y. Sung and R. R. Dasari, “Deterministic regularization of three-dimensional optical diffraction tomography,” J. Opt. Soc. Am. A 28(8), 1554–1561 (2011). [CrossRef]  

28. W. Krauze, “Optical diffraction tomography with finite object support for the minimization of missing cone artifacts,” Biomed. Opt. Express 11(4), 1919–1926 (2020). [CrossRef]  

29. D. Dong and K. Shi, “Solving the missing cone problem by deep learning,” Adv. Photonics 2(02), 1 (2020). [CrossRef]  

30. D. H. Ryu, D. Ryu, Y. S. Baek, et al., “Deepregularizer: Rapid resolution enhancement of tomographic imaging using deep learning,” IEEE Trans. Med. Imaging 40(5), 1508–1518 (2021). [CrossRef]  

31. S. Aslan, Z. Liu, V. Nikitin, et al., “Joint ptycho-tomography with deep generative priors,” Mach. Learn.: Sci. Technol. 2(4), 045017 (2021). [CrossRef]  

32. A. Beck and M. Teboulle, “Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems,” IEEE Trans. on Image Process. 18(11), 2419–2434 (2009). [CrossRef]  

33. A. J. Devaney, Mathematical Foundations of Imaging, Tomography and Wavefield Inversion (Cambridge University Press, 2012).

34. G. Zheng, C. Shen, S. Jiang, et al., “Concept, implementations and applications of Fourier ptychography,” Nat. Rev. Phys. 3(3), 207–223 (2021). [CrossRef]  

35. Z. Yang, L. Zhang, T. Liu, et al., “Led array microscopy system correction method with comprehensive error parameters optimized by phase smoothing criterion,” Biomed. Opt. Express 14(9), 4696–4712 (2023). [CrossRef]  

Supplementary Material (1)

Visualization 1: 3D reconstruction results of mouse kidney cells and label-free blood cells.



