Optica Publishing Group

High-resolution 3D Fourier ptychographic reconstruction using a hemispherical illumination source with multiplexed-coded strategy

Open Access

Abstract

Fourier ptychography is a promising and flexible imaging technique that achieves 2D quantitative reconstruction at a resolution beyond the limit of the imaging system. Moreover, with a different imaging model, the same platform can be used for 3D refractive index reconstruction. To maximize the illumination NA while mitigating the intensity attenuation caused by the flat LED board used in traditional FP platforms, we apply a hemispherical lighting structure and design a new LED arrangement according to 3D Fourier diffraction theory. We thereby obtain an illumination NA of 0.98 using 187 LEDs and achieve imaging half-pitch resolutions of ∼174 nm and ∼524 nm in the lateral and axial directions, respectively, using a 40×/0.6NA objective lens. Furthermore, to reduce the number of captured images required and enable real-time data collection, we apply a multiplexed-coded illumination strategy and compare several coded patterns through simulation and experiment. Through this comparison, we identify a radial-coded illumination pattern that yields results comparable to sequential scanning while increasing the acquisition speed to above 1 Hz. This work therefore opens the possibility of real-time 3D observation of in vitro live samples with this technique.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

Corrections

18 April 2022: A typographical correction was made to the affiliations.

1. Introduction

High-resolution 3D reconstruction of thick samples has always been critical for biomedical research, and it is challenging to achieve with a standard microscope. Several methods based on fluorescent imaging, such as multiphoton microscopy and confocal microscopy, can produce fine 3D results. However, they typically require mechanical movement to scan the sample, which can be time-consuming. The effective signal also decays as the imaging depth of the ballistic photons increases, and is further degraded by sample-induced aberrations [1]. Moreover, fluorescence-based imaging techniques require exogenous biological labels and impose specific requirements on the type of sample, and the photobleaching and phototoxicity of the fluorescent agents further prevent long-term observation of live cells. Meanwhile, these methods cannot generate quantitative distributions related to the refractive index (RI) of the sample.

Currently, techniques based on optical diffraction tomography (ODT) have become a promising route to label-free quantitative RI reconstruction in biomedical imaging [2,3]. In addition, a noninterferometric phase reconstruction method called Fourier ptychographic microscopy (FPM) has been proposed recently [4], which can obtain the estimated phase distribution with high resolution and a large field of view (FOV) by iteratively minimizing the difference between the measured intensity and the estimated intensity. Instead of mechanical scanning, FPM typically uses a programmable LED array to provide illumination from different angles; therefore, higher-frequency information that normally surpasses the system bandwidth can pass through the system and be detected by the camera.

Although traditional FPM technology was proposed for 2D complex field reconstruction [4], the same strategy can be used for 3D reconstruction, which removes the need for interferometric measurement and thus reduces the complexity of the system. At present, there are two forward imaging models for solving 3D reconstruction problems from intensity-only measurements. One treats the thick sample as a 3D diffracting object, uses the Fourier diffraction theorem to project the information in the 3D spectrum domain (i.e., Ewald shell) onto the 2D imaging plane [5–8], and applies iterative methods similar to the 2D phase retrieval method [4,9] to update the sample's scattering potential [5,6]. The other is the multislice propagation model, which regards the thick sample as a superposition of multiple thin slices. During the propagation of the incident light from the first slice to the last, the emitted light of each slice, which contains the scattered complex field, is regarded as the incident light of the next slice [10–13].

In this work, we first analyze the 3D spectrum transmission process under angularly varying illumination based on the Fourier diffraction theorem and design a suitable hemispherical light source for 3D reconstruction (187 LEDs), which could increase the illumination NA up to 0.98, resulting in theoretical half-pitch resolutions of ∼147 nm and ∼464 nm for lateral and axial respectively, at a 40×/0.6NA objective lens with a wavelength of 465 nm. In terms of the reconstruction algorithm, we use the multi-slice Rytov tomography model and embed the aberration recovery process, which can reconstruct both the high-resolution 3D scattering potential of the sample and the aberration distribution of the imaging system. Therefore, an aberration-free 3D result could be obtained. Furthermore, to improve the acquisition speed of the system, we employ the multiplexed coded illumination strategy, and by combining with aberration recovery, we can reduce the acquisition time to less than 1s while ensuring the reconstruction quality.

This paper is structured as follows: in Section 2, we present a new hemispherical illumination source and describe multi-slice tomography with the Rytov approximation mathematically; moreover, we introduce our reconstruction framework with a multiplexed coded strategy and aberration recovery in detail. In Section 3, through simulation, we quantitatively demonstrate the feasibility of our method in aberration recovery and compare the reconstruction results under several coded illumination strategies. In Section 4, we first use the USAF target to quantitatively verify the lateral resolution enhancement ability of the system. Then, we show the experimental analysis and investigation results of microbeads and bio-samples (COS-7 cell) under different illumination conditions. In Section 5, the work ends with discussions and conclusions.

2. Theory

2.1 LED arrangement design for 3D reconstruction based on the Fourier diffraction theorem

In many recent works, flat LED arrays are widely used for both 2D and 3D reconstructions due to their low cost and easy installation [4–6]; however, this structure raises several issues. For a planar arrangement, the intensity received by the sample declines severely as the illumination angle ($\theta $) increases, as shown in the following formula:

$${I_\theta } = {I_0}{\cos ^4}(\theta )$$

Moreover, for the FPM technique, the reconstruction half-pitch resolution is related to the illumination NA ($N{A_{ill}}$):

$$\delta x,\delta y = \frac{\lambda }{{2(N{A_{obj}} + N{A_{ill}})}}, \qquad \delta z = \frac{\lambda }{{2 - \sqrt {1 - NA_{obj}^2} - \sqrt {1 - NA_{ill}^2} }}$$
where $\lambda $ is the wavelength of the incident light in vacuum, $N{A_{obj}}$ is the NA of the objective lens used, and $N{A_{ill}} = n\sin (\theta )$, where n is the refractive index of the illumination path. Therefore, the dark-field images corresponding to edge LEDs, with $N{A_{ill}}$ larger than $N{A_{obj}}$, have a low signal-to-noise ratio (SNR), which reduces the reconstruction quality and further restricts the usable $N{A_{ill}}$ of the system [14]. To the best of our knowledge, only Zuo et al. [5] have so far used intensity-only images from an FPM system containing dark-field information to realize 3D reconstruction; they achieved a synthetic NA beyond 1.0 (∼1.3) with reconstruction resolutions of 390 nm and 899 nm in the lateral and axial directions, respectively. However, owing to the planar structure used, 3001 LEDs are needed to reach an $N{A_{ill}}$ of ∼0.9, resulting in a single-frame data acquisition time of 91 s, which is unsuitable for dynamic observation of in vitro live samples. Another way to improve the illumination NA is to use an oil-immersion condenser [16], but this approach brings two problems: first, the condenser reduces the illumination field of view; second, because oil immersion is required, the system cannot observe living samples cultured in a petri dish.
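As a quick numerical check of Eq. (2), the theoretical half-pitch resolutions quoted later in this paper (λ = 465 nm, 40×/0.6NA objective, $N{A_{ill}}$ = 0.98) can be reproduced in a few lines; the function name here is ours:

```python
import math

def fpm_resolution(wavelength_nm, na_obj, na_ill):
    """Half-pitch resolution limits of an FPM system, per Eq. (2)."""
    # Lateral: the synthetic aperture is the sum of objective and illumination NA.
    dxy = wavelength_nm / (2.0 * (na_obj + na_ill))
    # Axial: set by the axial extent of the synthetic 3D passband.
    dz = wavelength_nm / (2.0 - math.sqrt(1.0 - na_obj**2)
                               - math.sqrt(1.0 - na_ill**2))
    return dxy, dz

# Parameters from this paper: 465 nm LEDs, 40x/0.6NA objective, 0.98 illumination NA.
dxy, dz = fpm_resolution(465.0, 0.6, 0.98)
# dxy ≈ 147.2 nm lateral, dz ≈ 464.5 nm axial
```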

Therefore, similar to [14], in this paper, we design and utilize a hemispherical LED source. Moreover, we also design the arrangement of the LEDs based on Fourier diffraction tomography and use only 187 LEDs to achieve 3D reconstruction with the synthetic NA close to 1.58. For the arrangement of LEDs, it is helpful to begin our discussion by introducing the information transmission process described by traditional diffraction tomography. Unlike the forward imaging process of FPM for 2D reconstruction in which the sample is characterized by its 2D amplitudes and phase, a thick 3D sample is described by its 3D scattering potential $V({x,\; y,\; z} )$, which is related to its complex refractive index (RI) $n({x,\; y,\; z})$:

$$V(x,y,z) = k_0^2[{n^2}(x,y,z) - n_m^2]$$
where ${k_0} = \frac{{2\pi }}{\lambda }$, $({x,\; y,\; z} )$ represents the 3D coordinates and ${n_m}$ represents the RI of the surrounding media. When the incident light passes through the sample, based on two different weakly scattering approximations (Born and Rytov), the complex field of the outgoing light can be expressed as the superposition of the incident light ${U_{in}}({x,y,z} )$ and the first-order scattered light ${U_s}({x,y,z} )$ [5,15]:
$$U(x,y,z) \approx \left\{ \begin{array}{ll} {U_{in}}(x,y,z) + {U_s}(x,y,z) &Born \,approximation\\ {U_{in}}(x,y,z)\exp \left( {\frac{{{U_s}(x,y,z)}}{{{U_{in}}(x,y,z)}}} \right) &Rytov\,approximation \end{array} \right.$$
where the scattering term ${U_s}({x,y,z} )$ contains the RI information of the sample and can be described as follows [5,6]:
$${U_s}(\boldsymbol{r}) = \frac{\textrm{1}}{{\textrm{4}\pi }}\int {G(|\boldsymbol{r} - \boldsymbol{r}^{\prime}|)} V(\boldsymbol{r}^{\prime}){U_{in}}(\boldsymbol{r}^{\prime})d\boldsymbol{r}^{\prime} = [{V(\boldsymbol{r}){U_{in}}(\boldsymbol{r})} ]\ast G(\boldsymbol{r})$$

In which, $\boldsymbol{r} = ({x,y,z} )$ is shorthand notation for 3D spatial coordinates and $G({|{\boldsymbol{r} - \boldsymbol{r^{\prime}}} |} )$ is Green's function, which connects the scattered field with the scattering potential $V(\boldsymbol{r} )$:

$$G(\boldsymbol{r}) = \frac{{\exp (i{k_0}{n_m}\boldsymbol{r})}}{\boldsymbol{r}}$$
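To make Eq. (3) concrete, the scattering potential can be built directly from a refractive-index volume; the bead size, voxel pitch, and RI values below are illustrative, not taken from the paper:

```python
import numpy as np

# Scattering potential of Eq. (3): V = k0^2 * (n^2 - n_m^2).
wavelength = 0.465          # um, the LED central wavelength used in this paper
n_m = 1.33                  # assumed RI of the surrounding media (e.g., water)
k0 = 2 * np.pi / wavelength

# A 64^3 volume with 0.1-um voxels containing a 1-um-radius bead of RI 1.59.
z, y, x = np.mgrid[-32:32, -32:32, -32:32] * 0.1
n = np.where(x**2 + y**2 + z**2 < 1.0**2, 1.59, n_m)

V = k0**2 * (n**2 - n_m**2)  # zero wherever the RI equals the background media
```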

In a traditional FPM system, the sample is sequentially illuminated with an LED array, as shown in Fig. 1(a), which is placed at a long distance from the sample, and typically, the incident light is quasi-monochromatic (central wavelength $\lambda $) and spatially coherent. Therefore, the ${U_{in}}(r )$ term in Eq. (4) for the $lth$ LED can be expressed as a plane wave in the surrounding media modulated by the incident wave vector:

$$U_{in}^l(\boldsymbol{r}) = {A_l}\exp (i{\boldsymbol{k}_l}\cdot \boldsymbol{r})$$
where ${A_l}$ is the amplitude of the incident light, which is set to 1 in this paper for simplicity, and ${\boldsymbol{k}_l}$ is the 3D wave vector of the plane wave [6]:
$${\boldsymbol{k}_l} = (\boldsymbol{k}_x^l,\boldsymbol{k}_y^l,\boldsymbol{k}_z^l) = {k_0}\cdot \left( {\sin \theta_x^l,\sin \theta_y^l,\sqrt {1 - ({{{\sin }^2}\theta_x^l + {{\sin }^2}\theta_y^l} )} } \right)$$
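Equation (8) can be verified numerically: for any incident angle, the tip of ${\boldsymbol{k}_l}$ stays on the Ewald sphere of radius ${k_0}$. A minimal sketch (the angles chosen are arbitrary):

```python
import numpy as np

def incident_wavevector(theta_x, theta_y, wavelength):
    """3D wave vector of Eq. (8) for an LED at angles (theta_x, theta_y) in radians."""
    k0 = 2 * np.pi / wavelength
    sx, sy = np.sin(theta_x), np.sin(theta_y)
    kz = k0 * np.sqrt(1.0 - (sx**2 + sy**2))
    return np.array([k0 * sx, k0 * sy, kz])

k_l = incident_wavevector(np.deg2rad(30), np.deg2rad(20), 0.465)
# |k_l| equals k0 for every angle: the tip moves along the Ewald sphere.
```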

Fig. 1. (a) The structure of the Fourier ptychographic microscopy system with a thick sample illuminated from a noncenter LED. (b) The 3D pupil function derived from the Fourier diffraction theorem. (c) The projection of the 3D pupil function on the Y-Z plane in the frequency domain according to different LEDs. (d) Distribution of $N{A_{ill}}$ corresponding to each LED in the lateral spectrum domain.

$({\theta_x^l,\; \theta_y^l} )$ represents the incident angles for the $lth$ LED. It can be seen that as $({\theta_x^l,\; \theta_y^l} )$ changes, the tip of ${\boldsymbol{k}_l}$ always moves along a spherical surface with a radius of ${k_0}$, which is called the Ewald sphere. Then, combining Eqs. (5)–(7), we can obtain the relationship between the scattered complex field and the RI function in the frequency domain:

$$\tilde{U}_s^l({\boldsymbol{k}_{2D}};z = {z_0}) = \frac{{i\pi \cdot \exp (i{\boldsymbol{k}_z}{z_0})}}{{{\boldsymbol{k}_z}}}\tilde{V}({\boldsymbol{k}_x} - \boldsymbol{k}_x^l,{\boldsymbol{k}_y} - \boldsymbol{k}_y^l,{\boldsymbol{k}_z} - \boldsymbol{k}_z^l)$$
where ${k_{2D}} = ({{k_x},{k_y}} )$ represents the 2D coordinates in the Fourier domain and ${z_0}$ is the shift in the z direction, which is set to 0 in this paper since we assume that the z coordinate of the measurement plane (in-focus plane) is 0. Furthermore, considering the truncation effect of the objective lens, under the illumination of the $lth$ LED, the spectrum information of the scattered light of the thick sample that can pass through the system can be expressed as:
$$\tilde{U}_s^l({\boldsymbol{k}_{2D}};z = {z_0}) = \frac{{i\pi }}{{{\boldsymbol{k}_z}}}\tilde{V}({{\boldsymbol{k}_x} - \boldsymbol{k}_x^l,{\boldsymbol{k}_y} - \boldsymbol{k}_y^l,{\boldsymbol{k}_z} - \boldsymbol{k}_z^l} )P({\boldsymbol{k}_{2D}})\delta \left( {{\boldsymbol{k}_z} - \sqrt {k_\textrm{0}^\textrm{2} - \boldsymbol{k}_{2D}^2} } \right)$$

The pupil function defined by the objective lens modulates the spectral range of the scattered light, and only a part of the Ewald sphere could be detected by the camera, as shown in Fig. 1(b). Typically, the pupil function is determined as the coherent transfer function (CTF) [5]:

$$P\left(\boldsymbol{k}_{2 D}\right)=\left\{\begin{array}{cc} 1, & \text { if }\left|\boldsymbol{k}_{2 D}\right|^{2} \leq\left(\frac{2 \pi}{\lambda} N A_{o b j}\right)^{2} \\ 0, & \text { otherwise } \end{array}\right.$$
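The binary CTF of Eq. (11) is straightforward to build on a discrete Fourier grid; the grid size and pixel pitch below are illustrative:

```python
import numpy as np

def ctf(nx, dx, wavelength, na_obj):
    """Binary coherent transfer function of Eq. (11) on an nx-by-nx grid
    with object-space pixel size dx (same length units as wavelength)."""
    k = 2 * np.pi * np.fft.fftfreq(nx, d=dx)          # angular spatial frequencies
    kx, ky = np.meshgrid(k, k, indexing="ij")
    # Pass frequencies inside the objective's cutoff, block everything else.
    return (kx**2 + ky**2 <= (2 * np.pi / wavelength * na_obj)**2).astype(float)

P = ctf(nx=256, dx=0.1, wavelength=0.465, na_obj=0.6)
```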

The delta function is used to project the 2D pupil function onto the 3D Ewald spherical surface; therefore, the range of the Ewald ‘shell’ is limited by the NA of the objective lens used ($N{A_{obj}}$). As shown in Fig. 1(c), when the illumination angle of the incident light changes, the Ewald ‘shell’ also shifts accordingly, therefore, for the FPM system, we could obtain several images that contain certain regions of the 3D spectrum of the sample when sequentially turning on the LED array, which will allow us to reconstruct the scattering potential $V({x,\; y,\; z} )$ using gradient decent methods or iterative algorithms similar to 2D FPM reconstruction [5,6].

However, regardless of the algorithm used, successful reconstruction of the sample's 3D scattering potential requires sufficient spectral overlap in all three dimensions, especially along the z dimension. For bright-field images alone, this condition is intrinsically satisfied, as shown by the red shells in Fig. 1(c). However, for dark-field images, the illumination angle of each LED must be designed specifically. As shown in Fig. 1(c), when two adjacent LEDs are sequentially turned on, the corresponding Ewald shells are shifted by $\delta {k_z}$ along the z direction in the 3D frequency domain. Moreover, the thickness of the shell is defined by $N{A_{obj}}$ (${\Delta}{k_z} = {k_0}\left( {1 - \sqrt {1 - NA_{obj}^2} } \right)$), and to ensure that there is no missing sampling along the z direction, $\delta {k_z}$ needs to be smaller than ${\Delta}{k_z}$.

Since an objective lens with $N{A_{obj}} = 0.6$ (40×) is used in our system, which leads to ${\Delta}{k_z} = 0.2$, we set $\delta {k_z}$ to 0.1 to meet the requirement. The design specification of our light source is presented in Table 1, and the theoretically designed maximum illumination NA is 0.998. The source has a total of 187 LEDs in 11 rings with a constant step of 0.1 along the ${k_z}$-axis. To avoid the lateral spectrum gap reported in [14] and to satisfy the sampling criteria [17], we increase the number of LEDs in the outer rings and add 6 LEDs as a second ring with an $N{A_{ill}}$ of 0.25. From the distribution of $N{A_{ill}}$ corresponding to each LED in the lateral spectrum domain shown in Fig. 1(d), it can be seen that the $N{A_{ill}}$ values of the outer rings are actually very close because the sampling requirement along the ${k_z}$-axis is satisfied, which also suggests that we can apply the multiplexed coded imaging strategy, turning on several LEDs together, to further speed up data acquisition. This part of the work is introduced in Section 2(C).
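The ring-design rule can be checked numerically: for $N{A_{obj}} = 0.6$ the shell thickness is ${\Delta}{k_z} = 0.2$ (in units of ${k_0}$), and a constant ${k_z}$ step of 0.1 then fixes the illumination NA of each ring. The ring count below is illustrative and omits the extra bright-field ring of Table 1:

```python
import numpy as np

# Shell thickness along kz set by the objective, in units of k0 (see text).
na_obj = 0.6
dkz_shell = 1.0 - np.sqrt(1.0 - na_obj**2)
assert np.isclose(dkz_shell, 0.2)

# A constant kz step of 0.1 (half the shell thickness) determines the
# illumination NA of each ring: kz = 1 - 0.1*n  ->  NA = sqrt(1 - kz^2).
dkz = 0.1
kz = 1.0 - dkz * np.arange(1, 10)
na_rings = np.sqrt(1.0 - kz**2)
print(np.round(na_rings, 3))
```

Note that the second computed ring lands exactly at $N{A_{ill}} = N{A_{obj}} = 0.6$, the bright-field/dark-field boundary, and the NA spacing of the outer rings shrinks toward the edge, consistent with Fig. 1(d).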

Table 1. Design specification of our hemispherical light source

2.2 Multi-slice tomography with Rytov approximation

As mentioned above, two models can be used to solve 3D reconstruction problems using intensity-only measurements. Both frameworks can recover fine 3D complex refractive indices when visualizing unlabeled samples. However, several constraints of the first framework limit its further application in biological research [5–8]; the most important is that when the overall phase shift or absorption of the sample is large, the first-order approximations (Born or Rytov) may no longer be valid. On the other hand, a multislice-based model can avoid this problem to a certain extent by considering one thin slice at a time. Therefore, in this paper, we utilize the multislice propagation model and, similar to [11], we consider the diffraction effect of each slice by combining it with the Fourier diffraction theorem, using the Rytov approximation to further relax the restriction on the phase shift.

As above, we use the 3D scattering potential $V({x,\; y,\; z} )$ to describe the sample; however, because each slice is considered individually, $V({x,\; y,\; z} )$ is divided into several equally spaced slices separated by a distance ${\Delta}z$. The $nth$ slice of thickness ${\Delta}z$ is thus approximated by a 2D scattering potential $V({x,\; y,\; n{\Delta}z} )$, and the corresponding scattered field $U_s^n({x,y,\; n{\Delta}z} )$ is expressed as:

$$U_s^n(x,y,n{\Delta} z) = \frac{\textrm{1}}{{\textrm{4}\pi }}\int_{(n - 1){\Delta} z}^{n{\Delta} z} {\int\!\!\!\int {G(x - x^{\prime},y - y^{\prime},n{\Delta} z - z^{\prime})U_{in}^n(x^{\prime},y^{\prime},z^{\prime})V(x^{\prime},y^{\prime},z^{\prime})dx^{\prime}dy^{\prime}dz^{\prime}} }$$
where $U_{in}^n(\boldsymbol{r} )$ is the incident light of the $nth$ slice, which is the output light of the $({n - 1} )th$ slice. Replacing $({n - 1} ){\Delta}z + z^{\prime}$ with $z^{\prime}$ and performing a 2D Fourier transform over $({x,y} )$ on both sides of the formula:
$${\begin{aligned} U_s^n({\boldsymbol{k}_{2D}};z = n{\Delta} z) &= \frac{1}{{4\pi }}\int_0^{{\Delta} z} {\frac{{i\exp (i{\boldsymbol{k}_z}({\Delta} z - z^{\prime}))}}{{{\boldsymbol{k}_z}}}\cdot {{ {[{\tilde{U}_{in}^n({\boldsymbol{k}_{2D}};z) \otimes \tilde{V}({\boldsymbol{k}_{2D}};z)} ]} |}_{z = (n - 1){\Delta} z + z^{\prime}}}d{\boldsymbol{k}_{2D}}dz^{\prime}} \\ &= \frac{\textrm{1}}{{\textrm{4}\pi }}\int_0^{{\Delta} z} {\int\!\!\!\int {\frac{{i\exp (i{\boldsymbol{k}_z}({\Delta} z - z^{\prime}))}}{{{\boldsymbol{k}_z}}}\cdot \tilde{U}_{in}^n(\boldsymbol{k}{^{\prime}_{2D}};(n - 1){\Delta} z + z^{\prime})\tilde{V}({\boldsymbol{k}_{2D}} - \boldsymbol{k}{^{\prime}_{2D}};(n - 1){\Delta} z + z^{\prime})d\boldsymbol{k}{^{\prime}_{2D}}dz} ^{\prime}} \end{aligned}}$$
where ${k_z} = \sqrt {{{(2\pi {n_m}/\lambda )}^2} - {{|{{k_{2D}}} |}^2}} $ and the 2D convolution is taken over $({{k_x},{k_y}} )$. $\tilde{U}_{in}^n({{k_{2D}};({n - 1} ){\Delta}z + z^{\prime}} )$ denotes the 2D spectrum of the incident field after propagating a distance $z^{\prime}$ from $({n - 1} ){\Delta}z$, which can be obtained using angular spectrum theory:
$$\begin{aligned} \tilde{U}_{in}^n(\boldsymbol{k}{^{\prime}_{2D}};(n - 1){\Delta} z + z^{\prime}) &= \tilde{U}_{in}^n(\boldsymbol{k}{^{\prime}_{2D}};(n - 1){\Delta} z)\cdot \exp (i\boldsymbol{k}{^{\prime}_z}z^{\prime})\\ &= \tilde{U}_{in}^n(\boldsymbol{k}{^{\prime}_{2D}};(n - 1){\Delta} z)\cdot \exp (i\sqrt {{{\left( {\frac{{2\pi {n_m}}}{\lambda }} \right)}^2} - |\boldsymbol{k}{^{\prime}_{2D}}{|^2}} \cdot z^{\prime}) \end{aligned}$$
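The angular-spectrum step of Eq. (14) is a single spectral multiplication; a minimal sketch, in which evanescent components are discarded and the grid parameters are ours:

```python
import numpy as np

def angular_spectrum_propagate(u, dz, dx, wavelength, n_m=1.33):
    """Propagate a 2D complex field u by a distance dz, per Eq. (14):
    multiply its spectrum by exp(i*kz*dz), cutting evanescent components."""
    nx, ny = u.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky, indexing="ij")
    kz_sq = (2 * np.pi * n_m / wavelength)**2 - KX**2 - KY**2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))
    H = np.exp(1j * kz * dz) * (kz_sq > 0)    # evanescent waves discarded
    return np.fft.ifft2(np.fft.fft2(u) * H)
```

As a sanity check, a normally incident plane wave only accumulates the phase $\exp (i{k_0}{n_m}\,dz)$ under this kernel.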

Meanwhile, the scattering potential varies little axially within each thin slice, which also conforms to the assumption of the Rytov approximation [5,8]. Therefore, $\tilde{V}({{k_{2D}} - {{k^{\prime}}_{2D}};({n - 1} ){\Delta}z + z^{\prime}} )\approx {\tilde{V}^n}({{k_{2D}} - {{k^{\prime}}_{2D}}} )$, which denotes the 2D spectrum of the scattering potential at the $nth$ slice; substituting Eq. (14) into Eq. (13) then gives:

$$\begin{aligned}\tilde{U}_s^n({\boldsymbol{k}_{2D}};z = n\Delta z) &= \frac{i}{{\textrm{4}\pi }}\int_0^{\Delta z} {\frac{{\exp (i{\boldsymbol{k}_z}(\Delta z - z^{\prime}))\exp (i{\boldsymbol{k}^{\prime}_z}z^{\prime})}}{{{\boldsymbol{k}_z}}}dz^{\prime}} \cdot \\ &\int\!\!\!\int {\tilde{U}_{in}^n(\boldsymbol{k}^{\prime}_{2D};(n - 1)\Delta z){{\tilde{V}}^n}({\boldsymbol{k}_{2D}} - \boldsymbol{k}^{\prime}_{2D})d\boldsymbol{k}^{\prime}_{2D}}\end{aligned}$$

Evaluating the integral over $z^{\prime}$ above and applying a Taylor approximation to the result:

$$\begin{aligned} \tilde{U}_s^n({\boldsymbol{k}_{2D}};z = n\Delta z) &= \frac{{i\exp (i{\boldsymbol{k}_z}\Delta z)}}{{\textrm{4}\pi {\boldsymbol{k}_z}}}\int\!\!\!\int {\tilde{U}_{in}^n(\boldsymbol{k}^{\prime}_{2D};(n - 1)\Delta z){{\tilde{V}}^n}({\boldsymbol{k}_{2D}} - \boldsymbol{k}^{\prime}_{2D})\cdot \Delta zd\boldsymbol{k}^{\prime}_{2D}} \\ &= \frac{{i\exp (i{\boldsymbol{k}_z}\Delta z)}}{{\textrm{4}\pi {\boldsymbol{k}_z}}}\cdot {F_{2D}}\left\{ {U_{in}^n(x,y,(n - 1)\Delta z){V^n}(x,y)\cdot \Delta z} \right\} \end{aligned}$$
where ${F_{2D}}\{\cdots \}$ denotes the 2D Fourier transform over $({x,y} )$. Therefore, following the Rytov approximation (Eq. (4)), the output field of the $nth$ slice can be expressed as:
$$U_{out}^n(x,y,n{\Delta} z) = U_{in}^n(x,y,n{\Delta} z)\cdot \exp (\frac{{F_{2D}^{ - 1}\{{\tilde{U}_s^n({\boldsymbol{k}_{2D}};z = n{\Delta} z)} \}}}{{U_{in}^n(x,y,n{\Delta} z)}})$$

Similar to Eq. (14), $U_{in}^n({x,y,n{\Delta}z} )$ can be obtained from $U_{in}^n({x,y,({n - 1} ){\Delta}z} )$ through angular spectrum theory:

$$\begin{aligned} U_{in}^n(x,y,n\Delta z) &= F_{2D}^{ - 1}\left\{ {\tilde{U}_{in}^n({\boldsymbol{k}_{2D}};(n - 1)\Delta z)\cdot \exp (i{\boldsymbol{k}_z}\Delta z)} \right\}\\ &= F_{2D}^{ - 1}\left\{ {{F_{2D}}\left\{ {U_{in}^n(x,y,(n - 1)\Delta z)} \right\}\cdot \exp (i2\pi \sqrt {{{\left( {\frac{{{n_m}}}{\lambda }} \right)}^2} - |{\boldsymbol{k}_{2D}}{|^2}} \cdot \Delta z)} \right\} \end{aligned}$$

Specifically, the output field of the $nth$ slice is equal to the incident field of the $({n + 1} )th$ slice ($U_{out}^n({x,y,n{\Delta}z} )= U_{in}^{n + 1}({x,y,n{\Delta}z} )$). Similar to the Fourier diffraction theorem, after the incident light passes through all the slices, the output field is further affected by the spectrum truncation of the objective lens; however, the pupil function is no longer a 3D shell but a 2D low-pass filter, usually the CTF of the system (Eq. (11)). Therefore, assuming the total number of slices is N, the complex field finally imaged on the camera can be expressed as:

$$U(x,y,z) = F_{2D}^{ - 1}\left\{ {P({\boldsymbol{k}_{2D}})\exp ( - i{\boldsymbol{k}_z}(\frac{N}{2}{\Delta} z))\cdot {F_{2D}}\{{U_{out}^N(x,y,N{\Delta} z)} \}} \right\}$$
where $\textrm{exp}\left( { - i{k_z}\left( {\frac{N}{2}{\Delta}z} \right)} \right)$ is the angular spectrum propagation kernel that digitally refocuses the field from the last slice to the in-focus plane, set as the middle layer of the reconstructed result.
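The forward model of Eqs. (12)–(19) can then be sketched end to end: scatter at each slice under the Rytov approximation, propagate between slices, and finally apply the pupil and the digital refocus. This is a simplified single-illumination sketch with illustrative grid parameters, not the authors' implementation:

```python
import numpy as np

def multislice_rytov_forward(V_slices, u0, dz, dx, wavelength, n_m, P):
    """Multislice Rytov forward model, per Eqs. (12)-(19).
    V_slices: (N, ny, nx) scattering potential; u0: incident field at the
    entrance plane; P: 2D pupil (CTF) on the same fftfreq grid."""
    k = 2 * np.pi * np.fft.fftfreq(u0.shape[0], d=dx)
    KX, KY = np.meshgrid(k, k, indexing="ij")
    kz_sq = (2 * np.pi * n_m / wavelength)**2 - KX**2 - KY**2
    kz = np.sqrt(np.maximum(kz_sq, 1e-12))
    mask = (kz_sq > 0)
    prop = np.exp(1j * kz * dz) * mask               # slice-to-slice kernel, Eq. (18)

    u = u0
    for n in range(V_slices.shape[0]):
        u_in = np.fft.ifft2(np.fft.fft2(u) * prop)   # propagate to slice n, Eq. (18)
        # First-order scattered field of this slice, Eq. (16):
        Us = np.fft.ifft2(mask * 1j * np.exp(1j * kz * dz) / (4 * np.pi * kz)
                          * np.fft.fft2(u * V_slices[n] * dz))
        u = u_in * np.exp(Us / u_in)                 # Rytov output, Eq. (17)

    # Pupil truncation and digital refocus to the middle slice, Eq. (19):
    N = V_slices.shape[0]
    refocus = np.exp(-1j * kz * (N / 2) * dz) * mask
    return np.fft.ifft2(P * refocus * np.fft.fft2(u))
```

With a zero scattering potential, the model reduces to pure propagation over half the stack depth after the refocus, which gives a simple consistency check.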

2.3 Reconstruction framework for multiplexed coded illumination with aberration recovery

Based on the forward imaging model above, we can obtain the estimated complex field for each illumination angle; the reconstruction framework can therefore be described as iteratively minimizing a cost function that measures the difference between the estimated images and the captured intensity-only images. In this paper, we choose the amplitude-based ${l_2}$-norm cost function to improve the robustness of the algorithm against noise [18]:

$$\mathop {\min }\limits_{V(\boldsymbol{r})} L({V(\boldsymbol{r})} )= \sum\limits_l^{{N_{LED}}} {\sum\limits_{\boldsymbol{r}} {{{{\bigg \|}{\sqrt {{I^l}(\boldsymbol{r})} - |{{U^l}(\boldsymbol{r})} |} {\bigg \|}}^2}} }$$
where ${N_{LED}}$ is the total number of LEDs used (187 in this paper), and ${U^l}(\boldsymbol{r} )$ denotes the estimated field at the image plane corresponding to the $lth$ LED based on Eq. (19). Furthermore, as mentioned in Section 2(A), to improve the data acquisition speed and satisfy the requirement of dynamic collection of in vitro live samples, we apply the multiplexed coded strategy, which turns on several LEDs simultaneously. For multiplexed imaging, each LED must be considered mutually incoherent with all others while being spatially coherent with a unique wave vector. Therefore, the intensity of the $pth$ image ${I^p}(\boldsymbol{r} )$ is the sum of the intensities from each coded LED. The cost function is rewritten as follows:
$$\mathop {\min }\limits_{V(\boldsymbol{r})} {L_{multi}}({V(\boldsymbol{r})} )= \sum\limits_p^{{N_{image}}} {\sum\limits_{\boldsymbol{r}} {{{{\bigg \|}{\sqrt {{I^p}(\boldsymbol{r})} - \sqrt {\sum\limits_{l \in {{\cal L}_p}} {{{||{{U^l}(\boldsymbol{r})} ||}^2}} } } {\bigg \|}}^2}} }$$

In which ${N_{image}}$ is the number of captured low-resolution images and ${{\cal L}_p}$ denotes the set of LEDs turned on simultaneously for the $pth$ image.
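Because the LEDs are mutually incoherent, the multiplexed cost of Eq. (21) sums intensities, not fields, within each coded pattern; a minimal sketch (the data layout is ours):

```python
import numpy as np

def multiplexed_cost(I_measured, U_estimated, led_lists):
    """Amplitude-based l2 cost of Eq. (21).
    I_measured: list of captured intensity images, one per coded pattern.
    U_estimated: dict mapping LED index -> estimated complex field (Eq. (19)).
    led_lists: list of LED-index lists, one per pattern (the L_p sets)."""
    cost = 0.0
    for I_p, leds in zip(I_measured, led_lists):
        # Mutually incoherent LEDs: their intensities add at the camera.
        I_est = sum(np.abs(U_estimated[l])**2 for l in leds)
        cost += np.sum((np.sqrt(I_p) - np.sqrt(I_est))**2)
    return cost
```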

Then, we utilize the gradient descent algorithm to iteratively update the scattering potential. In the multislice model mentioned above, the information between slices is coupled, and from the update process below, it can be seen that reconstructing the $nth$ slice requires the reconstruction result of the $({n + 1} )th$ slice; therefore, any inaccuracy in the pupil function used in the reconstruction due to system aberrations [19,20] propagates through the reconstructed result slice by slice. For this reason, we embed the pupil recovery process into the reconstruction; the specific steps are shown below:

  • (1) At the beginning, an iteration counter $t = 0$ is set. We also initialize an N-slice estimate of the unknown scattering potential ${V_e}(\boldsymbol{r} )$. In this paper, we use a constant value of 0, which means that the initially assumed RI of the sample is the same as that of the surrounding media. Meanwhile, the CTF of the system is used as the initialization of the pupil function ${P_e}({{\boldsymbol{k}_{2D}}} )$.
  • (2) For better convergence, we first select the coded illumination patterns in the bright field ($N{A_{ill}} < N{A_{obj}}$) according to the $pth$ image [21], which gives us the coded LED list ${{\cal L}_p}$ ($p = 1,\; 2, \ldots ,\; {N_{image}}$). Therefore, for each LED in the list, we can obtain the incident field $U_{in}^l(\boldsymbol{r} )= \textrm{exp}({i{\boldsymbol{k}_l}\boldsymbol{r}} )$, where $l$ indexes the LEDs in the list ($l \in {{\cal L}_p}$). Then, the estimated ${V_e}(\boldsymbol{r} )$ and the multislice tomography model mentioned above are used to obtain the corresponding estimated field ${U^l}(\boldsymbol{r} )$ ($l \in {{\cal L}_p}$).

    In particular, for the requirement in the following steps, we need to store $U_{in}^{l,\; n}({x,y,({n - 1} ){\Delta}z} )$ and $U_{in}^{l,\;n}({x,y,n{\Delta}z} )$ and the exponential factor shown in Eq. (17) at this illumination angle ($\frac{{F_{2D}^{ - 1}\{{\tilde{U}_s^{l,n}({{\boldsymbol{k}_{2D}};z = n{\Delta}z} )} \}}}{{U_{in}^{l,n}({x,y,\;n{\Delta}z} )}}$) for each LED in the list ($l \in {L_p}$) and slice reconstructed ($n = 0,\;1,\;2,\; \ldots ,\;N$). For simplicity, the exponential factor is represented as $\varphi _s^{l,n}({x,y,n{\Delta}z} )$ in the following text.

  • (3) Then, calculate the cost function for this pattern, $L_{multi}^p = \sum\nolimits_{\boldsymbol{r}} {\left\| {\sqrt {{I^p}(\boldsymbol{r})} - \sqrt {\sum\nolimits_{l \in {{\cal L}_p}} {{{||{{U^l}(\boldsymbol{r})} ||}^2}} } } \right\|^2}$, and obtain the gradient value with respect to each slice of ${V_e}(\boldsymbol{r} )$ using the chain rule. First, we consider the gradient of the last slice ($V_e^N({x,y} )$):
    $${\begin{aligned} \frac{{\partial L_{multi}^p}}{{\partial V_e^N(x,y)}} &= \sum\limits_{l \in {{\cal L}_p}} {\frac{{\partial L_{multi}^p}}{{\partial U_{out}^{l,N}(x,y,N{\Delta} z)}}\frac{{\partial U_{out}^{l,N}(x,y,N{\Delta} z)}}{{\partial {V^N}(x,y)}}} \\ &=\sum\limits_{l \in {{\cal L}_p}} {\frac{{\partial L_{multi}^p}}{{\partial U_{out}^{l,N}(x,y,N{\Delta} z)}}\cdot U_{in}^{l,N,\ast }(x,y,N{\Delta} z){{[{\exp (\varphi_s^{l,N}(x,y,N{\Delta} z))} ]}^\ast }\cdot \frac{{\partial \varphi _s^{l,N}(x,y,N{\Delta} z)}}{{\partial V_e^N(x,y)}}} \\ &=\sum\limits_{l \in {{\cal L}_p}} {U_{in}^{l,N,\ast }(x,y,(N - 1){\Delta} z)\cdot {\Delta} z\cdot F_{2D}^{ - 1}\left\{ {\frac{{ - i\exp ( - i{\boldsymbol{k}_z}{\Delta} z)}}{{4\pi {\boldsymbol{k}_z}}}{F_{2D}}\left\{ {{{[{\exp (\varphi_s^{l,N}(x,y,N{\Delta} z))} ]}^\ast }\cdot \frac{{\partial L_{multi}^p}}{{\partial U_{out}^{l,N}(x,y,N{\Delta} z)}}} \right\}} \right\}} \end{aligned}}$$
    where ${\{\cdots \}^\ast }$ denotes the complex conjugate operation and $\frac{{\partial L_{multi}^p}}{{\partial U_{out}^{l,N}({x,y,N{\Delta}z} )}}$ can be expressed as:
    $$\frac{{\partial L_{multi}^p}}{{\partial U_{out}^{l,N}(x,y,N{\Delta} z)}} = F_{2D}^{ - 1}\left\{ {\exp (i{\boldsymbol{k}_z}\frac{N}{2}{\Delta} z){P^\ast }({\boldsymbol{k}_{2D}}){F_{2D}}\left\{ {2\left[ {\sqrt {\sum\limits_{l \in {{\cal L}_p}} {{{||{{U^l}(\boldsymbol{r})} ||}^2}} } - \sqrt {{I^p}(\boldsymbol{r})} } \right]\cdot \frac{{{U^l}(\boldsymbol{r})}}{{\sqrt {\sum\limits_{l \in {{\cal L}_p}} {{{||{{U^l}(\boldsymbol{r})} ||}^2}} } }}} \right\}} \right\}$$

    Therefore, substituting Eq. (23) into Eq. (22), we obtain the gradient of the $Nth$ slice, which is the sum of the gradients from each LED in the current illumination pattern.

  • (4) Then, consider the gradient of the cost function with respect to the $({N - 1} )th$ slice ($V_e^{N - 1}({x,y} )$):
    $$\begin{aligned} \frac{{\partial L_{multi}^p}}{{\partial V_e^{N - 1}(x,y)}} &= \sum\limits_{l \in {{\cal L}_p}} {\frac{{\partial L_{multi}^p}}{{\partial U_{out}^{l,N - 1}(x,y,(N - 1){\Delta} z)}}\frac{{\partial U_{out}^{l,N - 1}(x,y,(N - 1){\Delta} z)}}{{\partial V_e^{N - 1}(x,y)}}} \\ &= \sum\limits_{l \in {{\cal L}_p}} {\frac{{\partial L_{multi}^p}}{{\partial U_{out}^{l,N}(x,y,N{\Delta} z)}}\frac{{\partial U_{out}^{l,N}(x,y,N{\Delta} z)}}{{\partial U_{out}^{l,N - 1}(x,y,(N - 1){\Delta} z)}}\frac{{\partial U_{out}^{l,N - 1}(x,y,(N - 1){\Delta} z)}}{{\partial V_e^{N - 1}(x,y)}}} \end{aligned}$$

    It can be seen that the form of Eq. (24) is similar to Eq. (22) but with one more term, $\frac{{\partial U_{out}^{l,N}({x,y,N{\Delta}z} )}}{{\partial U_{out}^{l,N - 1}({x,y,({N - 1} ){\Delta}z} )}}$, which can be rephrased as $\frac{{\partial U_{out}^{l,N}({x,y,N{\Delta}z} )}}{{\partial U_{in}^{l,N}({x,y,({N - 1} ){\Delta}z} )}}$ according to the discussion above. Therefore, $\frac{{\partial L_{multi}^p}}{{\partial U_{out}^{l,N - 1}({x,y,({N - 1} ){\Delta}z} )}}$ could be expressed as:

    $${\begin{aligned} &\frac{{\partial L_{multi}^p}}{{\partial U_{out}^{l,N - 1}(x,y,(N - 1){\Delta} z)}} = \frac{{V_e^N(x,y)}}{{U_{in}^{l,N,\ast }(x,y,(N - 1){\Delta} z)}}\cdot \frac{{\partial L_{multi}^p}}{{\partial V_e^N(x,y)}}\\ &+ F_{2D}^{ - 1}\left\{ {\exp ( - i{\boldsymbol{k}_z}{\Delta} z){F_{2D}}\left\{ {{{[{\exp (\varphi_s^{l,N}(x,y,N{\Delta} z))} ]}^\ast }\cdot \left[ {1 - \frac{{{{[{\varphi_s^{l,N}(x,y,N{\Delta} z)\cdot U_{in}^{l,N}(x,y,N{\Delta} z)} ]}^\ast }}}{{U_{in}^{l,N}(x,y,N{\Delta} z)}}} \right]\cdot \frac{{\partial L_{multi}^p}}{{\partial U_{out}^{l,N}(x,y,N{\Delta} z)}}} \right\}} \right\} \end{aligned}}$$

    The scattering potential of the $Nth$ slice is used above. Since all slices of potential $V_e^n({x,y} )$ ($n = 0,\; 1,\; 2,\; \ldots ,\; N$) are updated together in this algorithm, $V_e^N({x,y} )$ is the initial value in step (2). Meanwhile, the $\frac{{\partial U_{out}^{j,N - 1}({x,y,({N - 1} ){\Delta}z} )}}{{\partial V_e^{N - 1}({x,y} )}}$ in Eq. (24) can be calculated using the same method as in Eq. (20).

  • (5) According to the strategy above, for the $pth$ image, calculate the gradient of the cost function with respect to each slice of the scattering potential backwards: $\frac{{\partial L_{multi}^p}}{{\partial V_e^n({x,y} )}}$ ($n = N,\; ({N - 1} ),\; \ldots ,\; 2,\; 1$). Then, use the gradient descent method to update each slice:
    $$V_{new}^n(x,y) = V_e^n(x,y) - \alpha \cdot \frac{{\partial L_{multi}^p}}{{\partial V_e^n(x,y)}},\textrm{ }n = N,(N - 1),\ldots ,2,1$$
    where $\alpha $ is the constant update step-size.
  • (6) To eliminate the influence of aberrations on the reconstruction, we also update the pupil function, which enters the model only during the imaging process of Eq. (19). Applying the second-order Newton’s method to the cost function (Eq. (21)) yields:
    $${P_{new}}({\boldsymbol{k}_{2D}}) = {P_e}({\boldsymbol{k}_{2D}}) - \beta \cdot \frac{{\sum\limits_{l \in {{\cal L}_p}} {|{{W^{l,\ast }}({\boldsymbol{k}_{2D}})} |\cdot {W^{l,\ast }}({\boldsymbol{k}_{2D}}){F_{2D}}\left\{ {\left[ {\sqrt {\sum\limits_{l \in {{\cal L}_p}} {{{||{{U^l}(\boldsymbol{r})} ||}^\textrm{2}}} } - \sqrt {{I^l}(\boldsymbol{r})} } \right]\cdot \frac{{{U^l}(\boldsymbol{r})}}{{\sqrt {\sum\limits_{l \in {{\cal L}_p}} {{{||{{U^l}(\boldsymbol{r})} ||}^\textrm{2}}} } }}} \right\}} }}{{{{|{{W^l}({\boldsymbol{k}_{2D}})} |}_{\max }}\left( {\sum\limits_{l \in {{\cal L}_p}} {{{||{{W^l}({\boldsymbol{k}_{2D}})} ||}^2}} + \delta } \right)}}$$

    Here, ${W^l}({{k_{2D}}} )$ represents $\textrm{exp}\left( { - i{k_z}\frac{N}{2}{\Delta}z} \right){F_{2D}}\{{U_{out}^{l,N}({x,y,N{\Delta}z} )} \}$ in Eq. (19) for the current $lth$ LED, $\beta $ is the update step size, which is set to 1 in this paper, and $\delta $ is a regularization term that ensures iteration stability. Moreover, after the pupil function is updated, we constrain it by replacing the updated amplitude with that of the CTF while retaining the updated phase:

    $${P_{new}}({\boldsymbol{k}_{2D}}) = CTF\cdot \exp (i\cdot \textrm{angle(}{P_{new}}\textrm{(}{\boldsymbol{k}_{2D}}\textrm{))})$$
    This constraint makes the algorithm focus on the influence of aberrations, which is expressed by the phase of the pupil function [19,20], and also improves the stability of the iteration.

  • (7) After updating both the scattering potential and the pupil function, we replace ${V_e}(r )$ and ${P_e}({{k_{2D}}} )$ with the updated ones and repeat steps (2)-(6) for each image and its corresponding coded LED list (bright-field and dark-field) ($p = 1,\; 2,\; \ldots ,\; {N_{image}}$). After one round over all the captured images, we impose the nonnegativity constraint on the reconstructed scattering potential by setting values less than 0 to 0 and use the constrained scattering potential as the initialization for the next iteration round.
  • (8) Set the iteration index $t = t + 1$ and repeat steps (1)-(7) until the iteration converges or the preset maximum iteration times are reached.
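For concreteness, the slice-wise forward model that these gradients are backpropagated through can be sketched in a few lines of numpy. This is a minimal reading of the multislice propagation and slice-update steps with the simulation parameters assumed, not the paper's actual PyTorch implementation; evanescent spatial frequencies are simply clipped for brevity.

```python
import numpy as np

def multislice_forward(V, wavelength=0.465, dz=0.3, dx=0.1, n_m=1.33):
    """Propagate a normally incident plane wave through a sliced scattering
    potential V (shape: Nz x Ny x Nx). Lengths are in micrometers; the
    defaults mirror the simulation settings described in the text."""
    Nz, Ny, Nx = V.shape
    k0 = 2 * np.pi / wavelength
    kx = 2 * np.pi * np.fft.fftfreq(Nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(Ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    kz = np.sqrt(np.maximum((k0 * n_m) ** 2 - KX**2 - KY**2, 0.0))
    kz_safe = np.where(kz == 0, 1.0, kz)      # avoid division by zero
    prop = np.exp(1j * kz * dz)               # angular-spectrum propagator
    U = np.ones((Ny, Nx), dtype=complex)      # normal-incidence plane wave
    for n in range(Nz):
        # first-order scattered field of slice n (spectrum-domain product)
        Us = np.fft.ifft2(
            1j * prop / (4 * np.pi * kz_safe) * np.fft.fft2(U * V[n] * dz))
        # free-space propagation across one slice spacing
        U = np.fft.ifft2(np.fft.fft2(U) * prop)
        # multiplicative (Rytov-like) slice update
        U = U * np.exp(Us / U)
    return U
```

With a zero scattering potential the loop reduces to plain angular-spectrum propagation, so the field amplitude stays uniform, which is a convenient sanity check for the implementation.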

3. Simulation

To quantitatively demonstrate the feasibility of the aberration recovery process mentioned above, in this section, we first illustrate the influence of system aberrations on this multislice model-based reconstruction algorithm. A simulated cell structure is designed as the sample, as shown in Fig. 2(a1) and 2(a2), which is composed of the cytoplasm ($n = 1.34$), a nucleus ($n = 1.36$) and several small organelles ($n = 1.36$). The simulated cell is surrounded by a medium of $n = 1.33$. The overall size of the sample is $80 \times 80 \times 50$ voxels, with $100nm$ sampling in the X-Y plane and $300nm$ along the Z axis. For the system, we utilize the hemispherical source mentioned above, the illumination wavelength is 465 nm, and the objective NA ($N{A_{obj}}$) is 0.6. To simulate the actual installation, the bottom surface of the light source is placed 13 mm above the sample, leading to an actual maximum $N{A_{ill}}$ of $0.98$. Therefore, according to the resolution equations in Eq. (2), when we use all 187 LEDs, we can obtain theoretical reconstruction half-pitch resolutions of ${\sim} 147nm$ and ${\sim} 464nm$ for the lateral and axial directions, respectively. Moreover, due to the gap between the bottom surface of the light source and the sample, the 4th ring of LEDs changes from dark-field to bright-field, which leads to 37 bright-field LEDs and 150 dark-field LEDs.
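The quoted resolutions follow directly from Eq. (2) with these parameters; a quick numerical check:

```python
import math

wavelength = 465e-9          # LED wavelength in meters
NA_obj, NA_ill = 0.6, 0.98   # objective NA and maximum illumination NA

# Eq. (2): theoretical half-pitch resolutions (lateral and axial)
d_lat = wavelength / (2 * (NA_obj + NA_ill))
d_ax = wavelength / (2 - math.sqrt(1 - NA_obj**2) - math.sqrt(1 - NA_ill**2))
print(f"lateral ~{d_lat * 1e9:.1f} nm, axial ~{d_ax * 1e9:.1f} nm")
```

This reproduces the ~147 nm lateral and ~464 nm axial figures stated above.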


Fig. 2. The simulated 3D cell sample. (a1) Lateral and axial cross-sectional planes of the simulated sample. (a2) The 3D view of the sample.


As mentioned in Section 2(C), to accelerate the data acquisition process and achieve dynamic collection of in vitro live samples, the strategy of multiplexed coded illumination can be applied. Since the brightness of dark-field images is orders of magnitude lower than that of bright-field images, to prevent the SNR of the dark-field information from being further reduced by capturing bright-field information together with it, we code their LED patterns separately. In addition to the commonly used multiplexed random-coded strategy [21,22], the $N{A_{ill}}$ values of the outer rings of the hemispherical light source are close to one another, as shown in Fig. 1(d), which inspires us to use a radial-coded strategy; we also apply a circular-coded strategy for comparison. The details of the coded strategies in this paper are as follows:

  • (1) The first two patterns (P1, P2) are random-coded with different schemes: (P1) for bright-field LEDs, randomly multiplexed 2 LEDs except for the first two rings; randomly multiplexed 3 LEDs for dark-field LEDs, therefore a total of 72 LR images need to be captured; (P2) for bright-field LEDs, randomly multiplexed 3 LEDs except for the first ring; randomly multiplexed 2 LEDs for dark-field LEDs, which leads to 88 LR images.
  • (2) The third pattern (P3) uses the radial-coded strategy, which simultaneously turns on 3 radially adjacent LEDs from the $5th$ to the $7th$ ring, and 4 radially adjacent LEDs from the last 4 rings. For the bright-field LEDs (the first 4 rings), we use single-LED illumination by sequential scanning, therefore, we need to capture 79 LR images.
  • (3) The fourth pattern (P4) is circular-coded, which simultaneously turns on the 2 adjacent LEDs along the circumferential direction except for the first ring, which leads to a total of 94 LR images.

Specifically, to better demonstrate the impact of different multiplexed coded strategies on reconstruction, we assume that there is no aberration in the system. Moreover, we compare the reconstructed results of those 4 multiplexed strategies with that of the single-LED sequential scanning, which is set as the baseline for this comparison, as shown in Fig. 3(a).


Fig. 3. Comparison of the reconstructed results using different multiplexed coded strategies. (a) The baseline that is reconstructed using single-LED sequential scanning. (b) The reconstructed results using 4 different multiplexed coded strategies (P1-P4). (c) The contrast curve of the reconstructed ‘organelles’ on the X-Z plane (after normalizing the 3D result into 0∼1). (d) Comparison of the iteration losses of these 4 strategies during convergence.


For each multiplexed strategy, we calculate the error map between the reconstructed X-Y plane and the baseline. Although all 4 multiplexed strategies obtain intuitively fine results, the radial-coded strategy holds the smallest error. Meanwhile, the contrast curves of the reconstructed ‘organelles’ on the X-Z plane (Fig. 3(c)) show that the radial-coded strategy obtains higher contrast than the other 3 schemes, very close to the baseline. Furthermore, we measure the convergence of each algorithm by calculating the iteration loss (Fig. 3(d)):

$$loss = {\log _{10}}\left( {\sum\limits_p^{{N_{image}}} {\sum\limits_{\boldsymbol{r}} {{{{\bigg \|}{\sqrt {{I^p}(\boldsymbol{r})} - \sqrt {\sum\limits_{l \in {{\cal L}_p}} {{{||{{U^l}(\boldsymbol{r})} ||}^2}} } } {\bigg \|}}^2}} } } \right)$$

As we can see, the reconstruction using the radial-coded strategy converges faster than the other 3 strategies and has the smallest loss value after convergence.
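The loss above is cheap to evaluate; a minimal numpy sketch (the list-of-lists layout for the simulated fields is illustrative, not the paper's data structure):

```python
import numpy as np

def iteration_loss(I_meas, U_sim):
    """Iteration loss used to monitor convergence: log10 of the summed
    squared amplitude misfit between each measured image I_meas[p] and the
    incoherent sum of the simulated fields U_sim[p] of its multiplexed LEDs."""
    total = 0.0
    for I_p, U_list in zip(I_meas, U_sim):
        est = np.sqrt(sum(np.abs(U) ** 2 for U in U_list))  # incoherent sum
        total += np.sum((np.sqrt(I_p) - est) ** 2)
    return np.log10(total)
```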

4. Experiments

We manufactured the hemispherical LED source mentioned in Section 2(A) using 3D printing technology (79 mm in radius) and then utilized a standard microscope (DMi8, Leica, Germany) with the light source replaced to verify our reconstruction method (Fig. 4(a)). The actual system parameters are similar to those in the simulation: a $40 \times{/}0.6NA$ objective lens is used, and an additional $0.7 \times $ magnification is applied at the back focal plane; therefore, the system magnification is $28 \times $. The wavelength of each LED is 465 nm (XPEROYL1-0000-00B01, Cree, 465 nm, 10 nm bandwidth). All images are captured by a 16-bit sCMOS camera (400BSI, Tucsen, 2048 × 2044 pixels, 6.5 µm pixel pitch).
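For reference, these numbers fix the raw sampling at the sample plane (the FP reconstruction recovers detail well beyond this camera-limited sampling):

```python
mag = 40 * 0.7                       # objective x 0.7x relay magnification
pixel_cam_um = 6.5                   # sCMOS pixel pitch in micrometers
pixel_sample_nm = pixel_cam_um / mag * 1000
print(f"{mag:g}x system magnification, ~{pixel_sample_nm:.0f} nm per camera pixel")
```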


Fig. 4. The system hardware and the reconstructed result of USAF. (a1)-(a2) Photograph of the platform and the hemispherical light source. (b1)-(b2) The raw image captured under normal illumination and the local enlarged area. (c1)-(c2) The reconstructed amplitude and the phase of the local region in (b2).


A. Lateral resolution verification using USAF target

To quantify the resolution enhancement of our system, we first apply it to a USAF target; the raw image captured under normal illumination (Fig. 4(b)) displays a maximum half-pitch resolution of ∼388 nm (Group 10, Element 3). We use the traditional AP method [4,9] with an adaptive step-size strategy [23] to perform 2D reconstruction. As in the simulation, the hemispherical light source is placed 13 mm above the sample. Therefore, the theoretical half-pitch lateral resolution is ∼147 nm, which is between Group 11, Element 5 (154 nm) and Group 11, Element 6 (137 nm). Figures 4(c1) and 4(c2) show that we can resolve Group 11, Element 5 with fine contrast in both amplitude and phase.
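The group/element half-pitch values quoted here follow from the standard USAF-1951 target formula, where the resolution in line pairs per mm is $2^{\text{group}+(\text{element}-1)/6}$:

```python
def usaf_half_pitch_nm(group, element):
    """Half-pitch (half of one line-pair width) of a USAF-1951 element, in nm."""
    lp_per_mm = 2 ** (group + (element - 1) / 6)
    return 1e6 / lp_per_mm / 2  # 1e6 nm per mm, half of one line pair

print(round(usaf_half_pitch_nm(10, 3)))  # 388 nm (raw-image limit)
print(round(usaf_half_pitch_nm(11, 5)))  # 154 nm (resolved after reconstruction)
print(round(usaf_half_pitch_nm(11, 6)))  # 137 nm
```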

B. 3D imaging of beads with several multiplex-coded illumination strategies

Next, we experimentally verify our algorithm in 3D reconstruction by imaging a sample of 2 µm beads (Huge biotechnology) immersed in index-matching oil (Cargille, n = 1.44, λ = 589.3 nm). The system setup is the same as before. Figure 5(d) shows the bright-field and dark-field images of this sample; it can be seen that the dark-field image has much lower brightness than the bright-field image. The algorithm is implemented in Python with the PyTorch framework, which allows the iterations to run on the GPU instead of the CPU. Reconstructions are performed on a desktop computer with an Intel i7 CPU and an NVIDIA TITAN GPU.


Fig. 5. Comparison of reconstructed beads with several multiplexed-coded illumination strategies. (a1)-(a2) The baseline reconstructed using single-LED sequential scanning and the result reconstructed without dark-field images. (b) The refractive index along the black dotted lines for each situation. (c) Orthogonal slices of the reconstructed 3D refractive index for different multiplexed-coded illumination strategies. (d) The raw images of bright-field and dark-field.


As before, we apply the 4 different multiplexed-coded illumination strategies and compare the reconstructed results with the single-LED sequential scanning strategy, labeled ‘baseline’. Moreover, to demonstrate the effect of dark-field images on improving the axial resolution, we add the reconstruction result using only the bright-field images. The comparison is shown in Fig. 5.

It can be seen from the comparison between Fig. 5(a1) and (a2) that, due to the participation of the dark-field images, the axial resolution has been significantly improved. Moreover, in the experiment, there are many defects, such as noise, that cannot be well represented in the simulation. Therefore, different multiplexed-coded illumination strategies have much greater impacts on the reconstructed results than those in the simulation. The results using the P1, P2 and P4 strategies have serious background fluctuations, especially the P2 pattern; moreover, they all suffer from the problem of missing content in the axial slice compared to the baseline. However, the reconstructed result obtained by radial-coded illumination (P3) is more similar to the baseline in both lateral and axial slices.

C. 3D imaging of bio-sample

In this part, we apply our system to a biological sample to achieve 3D imaging of the refractive index. The system parameters are the same as before; we utilize fixed COS-7 cells and compare the reconstructed results under multiplexed-coded illumination. First, we perform the 3D RI reconstruction using sequential scanning, which serves as the baseline in the following comparison. The algorithm is iterated 50 times and costs nearly 480 s; the result is shown in Fig. 6. The reconstruction contains $674 \times 674 \times 80$ voxels with a voxel size of $87 \times 87 \times 131\; n{m^3}$, and the maximum thickness of the sample is about $7.86\mu m$.


Fig. 6. The reconstructed 3D refractive index of a COS-7 cell using the sequential scanning strategy. (a) The enlarged area within the green box in (c) at three different depths. (b) The bright-field and dark-field images. (c) Orthonormal views of the reconstructed result. (d) The X-Z plane slice indicated by the red line in (c). (e) The line profile indicated by the yellow line in (a). (f) The line profile indicated by the yellow line in (d).


Due to the acquisition of 3D information, we can observe the sample at different depths. As shown in Fig. 6(a), in the deeper layer ($z ={-} 2.751\mu m$), a network structure in the cell that is not visible in the shallow layer is revealed. Moreover, from the line profiles in the X-Y plane (Fig. 6(e)) and X-Z plane (Fig. 6(f)), we can see that the lateral and axial half-pitch resolutions of the 3D reconstruction are approximately 174 nm and 524 nm, which are close to our theoretical values.

Furthermore, given the poor performance of the two random-coded strategies (P1 and P2), we apply only the P3 (radial-coded) and P4 (circular-coded) multiplexed strategies and compare the results with our baseline in Fig. 7, in which the reconstructed area is the orange box indicated in Fig. 6(c). From the comparison, it can be seen that the result using the radial-coded strategy shows more similarity to the baseline than the circular-coded one (Fig. 7(b2)) and achieves the same half-pitch resolution as the baseline (Fig. 7(b1)). Moreover, we compare the reconstructed aberrations for these three illumination strategies in Fig. 8, which shows that the multiplexed radial-coded illumination strategy better reconstructs the system aberrations, much closer to the baseline.


Fig. 7. Comparison of reconstructed results within the orange box shown in Fig. 6(c) using different illumination strategies. (a) Orthonormal views of the reconstructed results and the enlarged areas. (b1-b3) Comparison of the line profiles according to different lines indicated in X-Y, X-Z and Y-Z slices respectively.



Fig. 8. Comparison of the reconstructed aberrations using different illumination strategies.


In addition, it should be noted that due to the external triggering mode, each image is captured with ∼10 ms of integration time. Therefore, the total acquisition times of the 4 multiplexed-coded illumination strategies (P1)-(P4) for one cycle are shown in Table 2.


Table 2. The total acquisition time of several multiplexed-coded illumination strategies

Therefore, by using the multiplexed-coded illumination strategy, we can increase the capture speed to above 1 Hz while guaranteeing the reconstruction quality to a certain extent, which makes real-time 3D observation of in vitro live cells possible with this technique.
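The entries of Table 2 follow directly from the per-pattern image counts given in Section 3 and the ∼10 ms exposure:

```python
t_exp = 0.010  # ~10 ms integration time per frame (external trigger)
n_images = {"sequential": 187, "P1": 72, "P2": 88, "P3": 79, "P4": 94}
for name, n in n_images.items():
    t = n * t_exp
    print(f"{name}: {t:.2f} s per cycle ({1 / t:.2f} Hz)")
```

For example, the radial-coded pattern P3 needs 79 images, i.e., 0.79 s per cycle, which is the ~1.27 Hz frame rate quoted below; sequential scanning of all 187 LEDs takes ~1.9 s.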

5. Discussion and conclusion

In summary, to ensure the integrity of the reconstructed 3D spectrum in both lateral and axial directions while increasing the acquisition speed by reducing the number of LEDs required, we design a hemispherical illumination source based on the physical model of 3D tomography theory and apply it to the FPM imaging platform. Benefiting from the hemispherical structure, we obtain high-angle programmable plane-wave illumination of $0.98NA$ with only 187 LEDs; therefore, we achieve a final effective imaging NA of $1.58$ with a $40 \times{/}0.6NA$ objective lens and obtain half-pitch resolutions of ${\sim} 174nm$ and ${\sim} 524nm$ for the lateral and axial directions at a wavelength of $465nm$ across a FOV of $550\mu {m^2}$. A higher resolution could easily be obtained by utilizing an objective lens with a higher NA without any other changes.

When sequentially illuminating all 187 LEDs, the total acquisition time is ${\sim} 1.9s$, which still does not meet the observation requirements of live samples; therefore, in this paper, 4 empirical multiplexed-coded illumination strategies are provided to further reduce the number of captured images. By using these strategies, we improve the efficiency of data collection and achieve subsecond imaging. Meanwhile, through simulations and experiments, we analyze the influence of different multiplexed-coded patterns on the reconstruction results and propose the radial-coded pattern, which gives the best results with an acquisition frame rate of 1.27 Hz. In general, the reasonable design of the light source combined with the multiplexed-coded illumination strategy could further expand the application range of this technology, providing the possibility of real-time 3D observation of in vitro live biological samples.

In addition, although our hemispherical light source has been carefully manufactured and achieves near-theoretical resolution in 2D reconstruction, there are still some minor miscalibrations that can affect the quality of 3D reconstruction. This phenomenon also indicates that 3D reconstruction places higher accuracy requirements on the illumination wave vectors than 2D reconstruction. Therefore, in the following work, we could combine 3D reconstruction with miscalibration correction methods similar to those of 2D FPM [24,25]. Moreover, the experimental comparison shows that different multiplexed-coded strategies have a considerable impact on the result; therefore, instead of empirical illumination patterns, we could apply a data-driven approach [26] using deep learning and find the best multiplexed-coded illumination pattern through training, which will be our future work.

Funding

National Key Research and Development Program of China (2017YFA0505300); National Natural Science Foundation of China (11604327, 11974345, 21721003, 21727816, 31901073, 61775212, 61875036, 61975202, U2030101).

Acknowledgments

We acknowledge the CAS Interdisciplinary Innovation Team for supporting this work.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. D. Wilding and R. Fiolka, “Two-photon light-sheet microscope with adaptive optics in the illumination and detection path,” in Imaging and Applied Optics 2018 (3D, AO, AIO, COSI, DH, IS, LACSEA, LSC, MATH, pcAOP), OSA Technical Digest (Optica Publishing Group, 2018), paper JTu5B.5.

2. D. Dong, X. Huang, L. Li, H. Mao, and L. Chen, “Super-resolution fluorescence-assisted diffraction computational tomography reveals the three-dimensional landscape of the cellular organelle interactome,” Light: Sci. Appl. 9(1), 11 (2020). [CrossRef]  

3. B. Simon, M. Debailleul, M. Houkal, C. Ecoffet, J. Bailleul, J. Lambert, A. Spangenberg, H. Liu, O. Soppera, and O. Haeberlé, “Tomographic diffractive microscopy with isotropic resolution,” Optica 4(4), 460–463 (2017). [CrossRef]  

4. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013). [CrossRef]  

5. C. Zuo, J. Sun, J. Li, A. Asundi, and Q. Chen, “Wide-field high-resolution 3D microscopy with Fourier ptychographic diffraction tomography,” Opt. Lasers Eng. 128(5), 106003 (2020). [CrossRef]  

6. R. Horstmeyer, J. Chung, X. Ou, G. Zheng, and C. Yang, “Diffraction tomography with Fourier ptychography,” Optica 3(8), 827–835 (2016). [CrossRef]  

7. M. H. Maleki, A. J. Devaney, and A. Schatzberg, “Tomographic reconstruction from optical scattered intensities,” J. Opt. Soc. Am. A 9(8), 1356–1363 (1992). [CrossRef]  

8. T. E. Gureyev, T. J. Davis, A. Pogany, S. C. Mayo, and S. W. Wilkins, “Optical phase retrieval by use of first Born and Rytov-type approximations,” Appl. Opt. 43(12), 2418–2430 (2004). [CrossRef]  

9. S. Marchesini, “A unified evaluation of iterative projection algorithms for phase retrieval,” Rev. Sci. Instrum. 78(1), 011301 (2007). [CrossRef]  

10. X. Ma, X. Wen, and P. Feng, “Optical tomographic reconstruction based on multislice wave propagation method,” Opt. Express 25(19), 22595 (2017). [CrossRef]  

11. M. Chen, D. Ren, H. Y. Liu, S. Chowdhury, and L. Waller, “Multi-layer Born multiple-scattering model for 3D phase microscopy,” Optica 7(5), 394–403 (2020). [CrossRef]  

12. S. Chowdhury, M. Chen, R. Eckert, D. Ren, and L. Waller, “High-resolution 3D refractive index microscopy of multiple-scattering samples from intensity images,” Optica 6(9), 1211 (2019). [CrossRef]  

13. L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica 2(2), 104–111 (2015). [CrossRef]  

14. A. Pan, Y. Zhang, K. Wen, M. Zhou, J. Min, M. Lei, and B. Yao, “Subwavelength resolution Fourier ptychography with hemispherical digital condensers,” Opt. Express 26(18), 23119–23131 (2018). [CrossRef]  

15. K. C. Zhou and R. Horstmeyer, “Diffraction tomography with a deep image prior,” Opt. Express 28(9), 12872–12896 (2020). [CrossRef]  

16. J. Sun, C. Zou, L. Zhang, and Q. Chen, “Resolution-enhanced Fourier ptychographic microscopy based on high-numerical-aperture illuminations,” Sci. Rep. 7(1), 1187 (2017). [CrossRef]  

17. J. Sun, Q. Chen, Y. Zhang, and C. Zuo, “Sampling criteria for Fourier ptychographic microscopy in object space and frequency space,” Opt. Express 24(14), 15765–15781 (2016). [CrossRef]  

18. L. H. Yeh, J. Dong, J. Zhong, L. Tian, M. Chen, G. Tang, M. Soltanolkotabi, and L. Waller, “Experimental robustness of Fourier ptychography phase retrieval algorithms,” Opt. Express 23(26), 33214–33240 (2015). [CrossRef]  

19. X. Ou, G. Zheng, and C. Yang, “Embedded pupil function recovery for Fourier ptychographic microscopy,” Opt. Express 22(5), 4960–4972 (2014). [CrossRef]  

20. M. Sun, X. Chen, Y. Zhu, D. Li, Q. Mu, and L. Xuan, “Neural network model combined with pupil recovery for Fourier ptychographic microscopy,” Opt. Express 27(17), 24161–24174 (2019). [CrossRef]  

21. L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for Fourier ptychography with an LED array microscope,” Biomed. Opt. Express 5(7), 2376–2389 (2014). [CrossRef]  

22. L. Tian, Z. Liu, L. H. Yeh, M. Chen, J. Zhong, and L. Waller, “Computational illumination for high-speed in vitro Fourier ptychographic microscopy,” Optica 2(10), 904 (2015). [CrossRef]  

23. C. Zuo, J. Sun, and Q. Chen, “Adaptive step-size strategy for noise-robust Fourier ptychographic microscopy,” Opt. Express 24(18), 20724–20744 (2016). [CrossRef]  

24. R. Eckert, Z. F. Phillips, and L. Waller, “Efficient illumination angle self-calibration in Fourier ptychography,” Appl. Opt. 57(19), 5434–5442 (2018). [CrossRef]  

25. V. Bianco, B. Mandracchia, J. Běhal, D. Barone, P. Memmolo, and P. Ferraro, “Miscalibration-tolerant Fourier ptychography,” IEEE J. Select. Topics Quantum Electron. 27(4), 1–17 (2021). [CrossRef]  

26. M. Kellman, E. Bostan, M. Chen, and L. Waller, “Data-driven design for Fourier ptychographic microscopy,” arXiv preprint arXiv:1904.04175 (2019).


Figures (8)

Fig. 1.
Fig. 1. (a) The structure of the Fourier ptychographic microscopy system with a thick sample illuminated from a noncenter LED. (b) The 3D pupil function derived from the Fourier diffraction theorem. (c) The projection of the 3D pupil function on the Y-Z plane in the frequency domain according to different LEDs. (d) Distribution of $N{A_{ill}}$ corresponding to each LED in the lateral spectrum domain.

Tables (2)

Table 1. Design specification of our hemispherical light source
