
Self-supervised neural network for phase retrieval in QDPC microscopy

Open Access

Abstract

Quantitative differential phase contrast (QDPC) microscopy plays an important role in biomedical research since it provides high-resolution images and quantitative phase information for thin transparent objects without staining. Under the weak phase assumption, phase retrieval in QDPC can be treated as a linear inverse problem solvable by Tikhonov regularization. However, the weak phase assumption is limited to thin objects, and tuning the regularization parameter manually is inconvenient. We propose a self-supervised learning method based on deep image prior (DIP) to retrieve phase information from intensity measurements. The DIP model takes intensity measurements as input and is trained to output a phase image. To achieve this goal, a physical layer that synthesizes the intensity measurements from the predicted phase is used. By minimizing the difference between the measured and predicted intensities, the trained DIP model is expected to reconstruct the phase image from its intensity measurements. To evaluate the performance of the proposed method, we conducted two phantom studies, reconstructing a micro-lens array and standard phase targets with different phase values. In the experimental results, the deviation of the phase values reconstructed by the proposed method was less than 10% of the theoretical values. Our results show the feasibility of the proposed method to predict quantitative phase with high accuracy and without the use of ground-truth phase.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Since a bright-field microscope captures only intensity, weak phase objects that lack sufficient amplitude contrast can barely be observed. To visualize weak phase objects, a phase-contrast image is required. Quantitative phase imaging (QPI) can obtain quantified optical path differences of label-free samples and enables living-cell observation; thus, it has a great impact on the biomedical field [1]. One of the common QPI methods is digital holography, which measures phase through interference [2]. In contrast to interferometric approaches that use a coherent light source, quantitative differential phase contrast (QDPC) microscopy uses a partially coherent light source for phase imaging [3]. By using a light-emitting diode (LED) to provide asymmetric illumination, QDPC avoids the speckle issue [3]. In general, QDPC applies complementary asymmetric half-circle illumination patterns to the sample to generate several phase-contrast images along arbitrary axes [4]. The quantitative phase values can then be extracted by deconvolving the phase transfer function (PTF) from the captured intensity measurements. To avoid frequency loss of the PTF that causes artifacts and inaccurate phase values, QDPC requires twenty-four frames to reconstruct an isotropic phase image [3]. Due to the long image acquisition time, researchers have developed several methods to reduce the acquisition time [5–11] and to improve the accuracy of the recovered phase information [12,13].

Extracting a quantitative phase image from several intensity images is an ill-posed problem. To solve it, a weak phase approximation of the object transmission function is made to linearize the relation between the intensity and the phase [3,14]. After linearization, the inverse problem can be formulated as linear regression and solved using Tikhonov regularization, which estimates the phase by adding a regularization parameter to prevent singularity [3]. However, tuning the regularization parameter manually is inconvenient, because the parameter depends on several factors such as the components of the imaging system, the amplitude masks, and the refractive index of the samples [15]. Different regularization values lead to different phase values. Another issue with phase retrieval by Tikhonov regularization is that the phase value of a sample must be less than 0.7 radians to conform to the weak phase assumption [16]. This further limits the selection of imaging samples.
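The Tikhonov deconvolution mentioned above can be sketched in a few lines. This is a minimal illustration of the regularized least-squares solution, not the authors' implementation, and it assumes the per-axis phase transfer functions H_j have already been computed for the imaging system:

```python
import numpy as np

def tikhonov_phase(dpc_images, ptfs, alpha=1e-3):
    """Tikhonov-regularized deconvolution of the phase from DPC images.

    dpc_images : (J, N, M) array of normalized DPC measurements
    ptfs       : (J, N, M) array of phase transfer functions H_j
    alpha      : regularization parameter (manually tuned in practice)
    """
    num = np.zeros(dpc_images.shape[1:], dtype=complex)
    den = np.full(dpc_images.shape[1:], float(alpha))
    for dpc, H in zip(dpc_images, ptfs):
        num += np.conj(H) * np.fft.fft2(dpc)  # cross-correlation with each PTF
        den += np.abs(H) ** 2                 # accumulated PTF energy
    # alpha prevents division by zero where the PTFs vanish (singularity)
    return np.real(np.fft.ifft2(num / den))
```

The dependence on `alpha` is exactly the manual-tuning burden discussed above: different values rescale the recovered phase wherever the PTF energy is comparable to `alpha`.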

As an alternative way of solving various inverse problems arising in computational imaging [17,18], deep learning-based methods have been successfully applied to optical systems such as holographic imaging [19], optical coherence tomography [20], optical metrology [21], and phase recovery [22,23]. Recently, Kellman et al. proposed a data-driven method to optimize the illumination design for QDPC [24]. Most of these applications are based on supervised learning, which requires a large pairwise dataset for training the neural network. The neural network learns the mapping from the input to the specific ground truth, and the results are highly dependent on the training data. In practice, it may be difficult to obtain ground truth as well as a large training dataset. For example, the ground-truth phase of the sample may be obtained from another system; therefore, it is difficult to have the same field of view in the input and the ground-truth images. In contrast, deep image prior (DIP) is a self-supervised algorithm that can solve several inverse problems (e.g., image denoising, super-resolution, and inpainting) without ground truth or training data [25]. One previous study showed the practicability of using DIP to reconstruct the three-dimensional refractive index of thick biological samples in diffraction tomography [26]. Other optical applications such as the design of focus shaping [27], the reconstruction of Fourier ptychographic microscopy [28], and resolution enhancement in ghost imaging [29] can also be achieved with DIP. Moreover, DIP has been applied to reconstruct a phase image from either four through-focus images or a single diffraction pattern [30,31].

Inspired by the above works, we propose a DIP-based method for phase retrieval in QDPC microscopy. In particular, we aim to evaluate whether the proposed method has the ability to achieve phase retrieval without using the weak object transfer function and Tikhonov regularization. In other words, we investigate the feasibility of using DIP to retrieve phase without the weak phase assumption. To evaluate the performance of the proposed method, a micro-lens array (MLA) and a standard phase target [32] are imaged by a QDPC microscope. The proposed method can retrieve the quantitative phase distribution and provide high quality images.

2. Materials and methods

The schematic diagram, shown in Fig. 1, describes two parts: imaging and phase retrieval. The imaging part is performed by a QDPC microscope, and the phase retrieval part is performed by a self-supervised neural network.


Fig. 1. The schematic diagram of the proposed method. (a) Setup of a QDPC microscope for image acquisition. (b) DIP algorithm for phase retrieval.


2.1 Experimental setup of QDPC microscope

In this study, we built the QDPC microscope on an inverted microscope (Olympus iX70). By inserting a thin-film transistor (TFT) panel and an additional condenser (LA1951-ML, NA = 0.3) between the light source and the sample, the microscope can generate an oblique light source according to the asymmetric amplitude masks displayed on the TFT. As shown in Fig. 1(a), the TFT is placed at the front focal plane of the condenser, and the condenser is located at the front focal plane of the objective lens (LMPLN10XIR from Olympus, NA = 0.3). The amplitude mask shown on the TFT acts as a digital pupil of the system, generating the corresponding PTF. To achieve an isotropic phase transfer function, we used a set of linearly gradient pupils that produce amplitude differences along the horizontal and vertical directions [12]. A total of four intensity images were acquired to retrieve the phase information.

2.2 Principle of QDPC microscopy

The equation shown below describes the intensity image (${I_c}$) captured by the imaging system [3].

$$I_c(r_c) = \left|\iint \left[\iint o(r)\sqrt{S(u')}\,e^{i2\pi u' r}\,e^{-i2\pi u'' r}\,d^2r\right] P(u'')\,e^{-i2\pi u'' r_c}\,d^2u''\right|^2,$$
where $r_c$, $r$, and $u''$ represent coordinates on the image plane, sample plane, and Fourier plane of the objective lens, respectively. $o(r)$ is the object transmission function, which can be described as $o(r) = e^{-\alpha(r)}e^{i\phi(r)}$, where $\alpha(r)$ and $\phi(r)$ denote the absorption and phase components of the object, respectively. $S(u')$ denotes the structured light from the TFT panel, and $P(u'')$ is the pupil function of the objective lens.

To reduce the impact due to the background intensity, the intensity measurements are normalized to differential phase contrast (DPC) images by using the following equation [3].

$$I_{DPC,j} = \frac{I_{1,j} - I_{2,j}}{I_{1,j} + I_{2,j}},$$
where $j$ denotes the orientation of the symmetric axis, and $I_{1,j}$ and $I_{2,j}$ represent intensity measurements captured with complementary asymmetric illumination along axis $j$. To perform phase retrieval, one previous study made the weak phase assumption, which leads to a linear relationship between the sample intensity and the phase distribution in the frequency domain [3]. The phase information can then be deconvolved using the Tikhonov regularization method [3].
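In code, the normalization in Eq. (2) is a single element-wise operation. The small `eps` term below is our own guard against division by zero in dark regions and is not part of the original formula:

```python
import numpy as np

def dpc_normalize(I1, I2, eps=1e-12):
    """Eq. (2): normalized DPC image from a complementary illumination pair."""
    I1, I2 = np.asarray(I1, dtype=float), np.asarray(I2, dtype=float)
    # Subtraction removes the common background; the sum normalizes it away.
    return (I1 - I2) / (I1 + I2 + eps)
```

For a pure-phase sample under symmetric illumination the two images are identical and the DPC value is zero, which is why the background of a DPC image is centered at zero.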

2.3 Phase retrieval via DIP

By using linearly gradient pupils, four intensity images (i.e., two DPC images) can be obtained from the QDPC microscope. In the experiment, the phase retrieval task is performed by the proposed DIP algorithm, and the process is shown in Fig. 1(b). First, the intensity images were divided by the background image to correct for non-uniform illumination and noise on the samples [33]. Second, we calculated the DPC images (i.e., Eq. (2)) from the corrected intensity images. These DPC images were used as target images for evaluating the predicted phase images, while the absolute values of the DPC images served as the input, since the phase differences of the samples were assumed to be positive. Since U-net can solve inverse problems efficiently [34], the phase prediction model here was a convolutional neural network based on the U-net structure [35], designed to retrieve the phase image from the absolute DPC images.

The U-net architecture used in this algorithm is shown in Fig. 2. Through three down-sampling stages and fifteen convolutional layers, the phase distribution is extracted from the absolute DPC images. The filter size of the convolutional layers is 1 × 1 for the last layer and 3 × 3 for the remaining layers. A leaky rectified linear unit (ReLU) with alpha equal to 0.1 is the activation function after all 3 × 3 convolutional layers, and the ReLU function is applied after the last convolutional layer. Here, the down-sampling process halves the feature-map size using average pooling, which takes the mean value of each area, and the up-sampling process uses bilinear interpolation to double the size. Several skip connections are used to reuse hierarchical features; concatenating hidden layers of the same size preserves the spatial features of the input. In the learning procedure, the weights are updated iteratively based on the loss evaluation using the Adam optimizer [36], with a learning rate that starts at 0.0001 and decays by a factor of 0.9 every 300 epochs. Model training was implemented using TensorFlow 2.8.0 and Python 3.8.10 on a workstation equipped with two NVIDIA Titan XP GPU cards.
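The architecture described above can be sketched in Keras as follows. The input size and channel widths are our assumptions (the text specifies the pooling, upsampling, activations, and filter sizes, but not the channel counts), so this is an illustrative approximation; it does reproduce the fifteen-convolution count (seven two-convolution blocks plus the final 1 × 1 layer):

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    # Two 3x3 convolutions, each followed by a leaky ReLU (alpha = 0.1)
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding="same")(x)
        x = layers.LeakyReLU(alpha=0.1)(x)
    return x

def build_unet(input_shape=(256, 256, 2), base_filters=32):
    """U-net sketch: 3 average-pooling downsamplings, bilinear upsampling,
    skip connections, and a final 1x1 convolution with ReLU."""
    inp = layers.Input(shape=input_shape)        # two |DPC| images as channels
    skips, x = [], inp
    for level in range(3):                       # encoder
        x = conv_block(x, base_filters * 2**level)
        skips.append(x)
        x = layers.AveragePooling2D(2)(x)        # halve the spatial size
    x = conv_block(x, base_filters * 8)          # bottleneck
    for level in reversed(range(3)):             # decoder
        x = layers.UpSampling2D(2, interpolation="bilinear")(x)
        x = layers.Concatenate()([x, skips[level]])  # reuse encoder features
        x = conv_block(x, base_filters * 2**level)
    out = layers.Conv2D(1, 1, activation="relu")(x)  # non-negative phase map
    return tf.keras.Model(inp, out)
```

The final ReLU encodes the same positivity assumption made for the input: the predicted phase is constrained to be non-negative.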


Fig. 2. The U-net architecture for phase prediction in the proposed algorithm.


As mentioned above, the phase prediction model is designed to generate the phase image of the sample from its absolute DPC images in a self-supervised manner. To achieve this goal via DIP, we iteratively update the phase prediction model by minimizing the following formula to find a possible solution:

$$D_{w^*} = \mathop{\arg\min}\limits_w L(D_w(|I_{DPC}|), I_{DPC}),$$
where $D_w$ is the phase prediction model with weights $w$ and $|I_{DPC}|$ as the model input, and $D_{w^*}$ is the final phase prediction model with trained weights $w^*$. $I_{DPC}$ denotes the target, which is a set of normalized DPC images, and $L$ is the mean-squared-error (MSE) loss function defined as follows:
$$L(\phi, I_{DPC}) = \frac{1}{JNM}\sum_{j=1}^{J}\sum_{n=1}^{N}\sum_{m=1}^{M} \left\| h(\phi, j)(m,n) - I_{DPC,j}(m,n) \right\|_2^2,$$
where $j$ denotes the orientation of the symmetric axis, and $n$ and $m$ denote the $x$ and $y$ coordinates of the image, respectively. $h$ is a custom physical layer that synthesizes the normalized DPC images from the model-predicted phase image $\phi$, where $\phi = D_w(|I_{DPC}|)$. Specifically, $h(\phi, j)$ is defined as follows:
$$h(\phi, j) = \frac{I'_{1,j}(\phi) - I'_{2,j}(\phi)}{I'_{1,j}(\phi) + I'_{2,j}(\phi)},$$
where $I'_{i,j}(\phi)$ is the synthesized intensity image calculated from the model-predicted phase $\phi$ and is written as
$$I'_{i,j}(\phi) = \left| e^{i\phi} \ast s_{i,j}(r) \right|^2,$$
where $s_{i,j}(r)$ is the light distribution related to the pupil pattern used in the imaging system, $i$ denotes the orientation of the gradient along the same axis $j$, and $r$ denotes the coordinate in the sample plane. For a non-absorbing sample, $e^{i\phi}$ acts as an object function with a pure phase $\phi$. As stated in (5) and (6), the custom layer first synthesizes the paired intensity images (i.e., $I'_{1,j}$ and $I'_{2,j}$) from the model-predicted phase image $\phi$. Then, the model-predicted DPC images are calculated using (5). The MSE loss between the predicted and measured DPC images is used to update the weights and biases through gradient descent. This iterative process helps the phase retrieval converge to a reasonable solution. After minimizing the MSE loss function in (4), the final predicted phase image $\bar{\phi}$ can be obtained from the updated phase prediction model $D_{w^*}$ as follows:
$$\bar{\phi} = D_{w^*}(|I_{DPC}|).$$
In the above equations, no ground-truth phase of the samples is required, meaning that phase retrieval can rely solely on the intensity measurements.
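The forward model in Eqs. (5) and (6) can be sketched in NumPy as below. Implementing the convolution with FFTs (hence circular boundary conditions) is our own implementation choice; a differentiable TensorFlow version (e.g., using `tf.signal.fft2d`) would be needed to backpropagate through the layer as in the actual training:

```python
import numpy as np

def synth_intensity(phi, s):
    """Eq. (6): |exp(i*phi) convolved with s|^2, via FFT-based convolution."""
    field = np.fft.ifft2(np.fft.fft2(np.exp(1j * phi)) * np.fft.fft2(s))
    return np.abs(field) ** 2

def physical_layer(phi, s1, s2):
    """Eq. (5): synthesize the normalized DPC image for one axis j from the
    predicted phase and the paired light distributions s_{1,j} and s_{2,j}."""
    I1, I2 = synth_intensity(phi, s1), synth_intensity(phi, s2)
    return (I1 - I2) / (I1 + I2)
```

Note that if the two light distributions were identical, the synthesized DPC image would vanish everywhere; the asymmetry between $s_{1,j}$ and $s_{2,j}$ is what converts phase gradients into measurable contrast.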

To decide the number of training iterations, the signal-to-noise ratios (SNRs) of the resultant images were calculated every 100 epochs. The diagrams (see Fig. S1) showed that the critical epochs were around 1000 and 2000 for the 100 nm and 200 nm targets, respectively.

To verify the practicability of the proposed DIP-based method, a standard phase target [32] and an MLA were used. The theoretical phase difference (Δϕ) is defined by [6]:

$$\Delta \phi = \frac{{2\pi D({{n_b} - {n_s}} )}}{\lambda },$$
where $D$ is the thickness of the sample, $\lambda$ is the wavelength of the incident light, and $n_b$ and $n_s$ denote the refractive indices of the surroundings and the sample, respectively. The theoretical phase differences are used as standards for evaluating the predicted phase of the tested samples. In this study, standard phase targets with two different phase values were used. At an operating wavelength of 532 nm, the theoretical phases of the phase targets with thicknesses of 100 nm and 200 nm are 0.61 and 1.22 radians, respectively, and the theoretical value is 4.53 radians for the MLA. To further evaluate the accuracy of the retrieved phases, the percentage of recovery is calculated as $\frac{\bar{\phi}}{\phi_{ideal}}$, where $\bar{\phi}$ denotes the mean value of the peak phases and $\phi_{ideal}$ denotes the ideal phase difference of the same areas.
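Eq. (8) can be evaluated directly. The refractive indices in the example below are placeholders (the text does not state the index contrast of the targets explicitly), so only the scaling behavior is meaningful here:

```python
import math

def theoretical_phase(D_nm, n_b, n_s, wavelength_nm=532.0):
    """Eq. (8): theoretical phase difference in radians for thickness D_nm,
    background index n_b, sample index n_s, and wavelength in nm."""
    return 2 * math.pi * D_nm * (n_b - n_s) / wavelength_nm

# The phase is linear in thickness, matching the 0.61 -> 1.22 rad relation
# between the 100 nm and 200 nm targets at the same wavelength.
```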

3. Results

The training loss of our deep learning model is plotted in Fig. S2. Figure 3 shows the results of the resolution targets with thicknesses of 100 and 200 nm (trained for 1000 and 2000 epochs, respectively). The phase distributions retrieved using the proposed DIP-based method and the cross-sectional phase profiles are presented. For each thickness, we display two zoomed-in sub-images of the retrieved phase distributions with various bar widths. It can be observed that the proposed DIP-based method can retrieve phase distributions from the absolute DPC images. The mean retrieved phase values of the targets were about 0.64 and 1.12 radians for thicknesses of 100 and 200 nm, respectively. Taking the ideal phase values as 100% recovery, the phase recovery of the standard targets with thicknesses of 100 and 200 nm reached about 95% and 92%, respectively. We further compared the profiles of the retrieved phase distributions with the ideal phase distributions according to the standard pattern format of the USAF 1951 resolving power test target and Eq. (8). The retrieved phase profiles spatially fit the ideal distribution of the patterns, and the retrieved phase values were close to the theoretical values. Since the quantitative phase is calculated from phase contrast, the quality of the phase-contrast image strongly influences the result. Due to the shade-off effect [37], the phase-contrast image of a large, extended specimen shows higher intensity at the edge that gradually decreases toward the center, where the intensity should be uniform. This phenomenon leads to the current results for the cube features. The issue may be reduced with a ring light source or a lower magnification.


Fig. 3. Results of resolution target with the thickness equal to 100 nm and 200 nm. (a1-a3), (c1-c3) Phase prediction with the proposed DIP method of the standard targets with 100 nm and 200 nm, respectively. (a2-a3), (c2-3) Zoomed-in phase distribution of the highlighted boxes in (a1) and (c1). (b1-b2), (d1-d2) Phase distribution of the dotted line in (a2-a3) and (c2-c3). The gray lines denote the theoretical phase distribution.


Similar to Fig. 3, Fig. 4 shows the results for the focus star, which has the same thickness as the USAF target. Targets with the same ideal phase values were trained for the same number of epochs (i.e., 1000 and 2000 epochs for 100 nm and 200 nm, respectively). The ideal phase distributions generated from the specification of the Quantitative Phase Target (QPTTM) are also shown. The results indicate that the proposed DIP-based method performs well on different phantoms. Finally, we evaluated the performance of the proposed DIP-based method using the MLA, which has a higher theoretical phase value. The intensity measurements and the retrieved phase distributions (after 2500 epochs) are shown in Fig. 5. Phase-contrast images obtained by the QDPC system are shown in Fig. 5(a), and the retrieved phase results in Fig. 5(b) and (c). We can barely observe the contour of the MLA in the intensity measurements, whereas the phase distribution of the MLA is clearly shown. The maximum value of the retrieved phase distribution was about 4.78 radians, which is very close to the theoretical phase value (4.53 radians). As shown in Fig. 5(c), the cross-sectional profiles from different angles indicate that the phase can be retrieved isotropically using the proposed DIP-based method.


Fig. 4. Results of focus star phase target with the thickness equal to 100 nm and 200 nm. (a1-a3), (c1-c3) Phase prediction with the proposed DIP method. (b1-b2), (d1-d2) Phase distribution of the dotted line in (a2-a3) and (c2-c3), respectively. The gray lines denote the theoretical phase distribution.



Fig. 5. Results of the micro-lens array. (a) Captured intensity images from the CCD. Four phase-contrast images are required. (b) Retrieved phase image of the lens array. (c) Phase distribution of the lines in (b) are shown in blue (horizontal) and red (vertical) lines, and the black dotted line shows the ideal distribution of the lens array.


Table 1 shows the ideal and retrieved phase values obtained using the proposed method and Tikhonov regularization. Note that the DIP-based results are the mean peak values calculated directly from the grayscale values, while the Tikhonov results are the differences between the mean peak values and the mean background values. The percentage of recovery takes the ideal phase value as 100%. In the DIP experiments, samples with the same ideal phase values were trained for the same number of epochs. However, the phase recoveries of the resolution-target and focus-star patterns differed because of differences in pattern complexity and in the fraction of background where the phase equals zero. Also, the phase recovery of the MLA is 95%, which is higher than that of the 200 nm phase targets, showing that the phase value is not the only factor affecting the convergence of phase recovery. The Tikhonov results, on the other hand, show a similar tendency to the DIP-based results. In addition, the results of the two methods are more similar when the phase is small.


Table 1. Comparison of results of different standard targets.

In the experiment, the proposed method showed less background noise than the Tikhonov method. To verify the noise alleviation, the retrieved results of the phase target were compared. Figures 6(b1) and (b2) show the zoomed-in background regions obtained by the proposed DIP method and Tikhonov regularization, respectively. The background standard deviation (SD) of the DIP result is about 0.0062, which is lower than that of the Tikhonov result. Obvious artifact patterns appear in Fig. 6(b2) due to noise, whereas a smoother background is observed in Fig. 6(b1). This indicates that the proposed method removes imperfections and smooths the background.


Fig. 6. Comparison of results using different phase retrieval algorithms. (a1), (a2) Results of the phase target with a thickness equal to 200 nm using DIP and Tikhonov regularization. (b1), (b2) Zoomed-in phase images of (a1) and (a2), respectively. The background SD of (a1) and (a2) is 0.0062 and 0.0296, respectively.


4. Discussion

For weak-phase objects such as cells, visualization of the phase is important since the weak phase leads to poor amplitude contrast. QDPC microscopy, which images weak phase objects with structured light, can be combined with Tikhonov regularization to provide quantitative phase images of living cells. However, the manual parameter tuning makes phase retrieval tedious. Moreover, the weak phase assumption limits the thickness of the samples. More importantly, the missing PTF at zero frequency offsets the values of the reconstructed phase image [16]. In this article, we utilized the concept of DIP to encapsulate the physical knowledge of image formation for phase retrieval and avoid a tedious parameter-tuning process. In addition, the proposed DIP-based phase retrieval is performed without pre-training and without the weak phase assumption. The algorithm also avoids the use of numerous image pairs of inputs and ground truths, and predicts the quantitative phase from intensity measurements alone. This implies that phase retrieval with other pupil patterns can be achieved simply by changing $s_{i,j}(r)$ in (6), and the process takes the same amount of time. Unlike Tikhonov regularization, our DIP-based method provides quantitative phase values without offset; in other words, the absolute phase value can be reconstructed using the proposed method. The results of the standard phase targets demonstrate the performance of our method for phase retrieval, and the experimental results of the MLA show its feasibility for reconstructing samples with higher phase values.

Despite the promising results, the proposed method has some limitations. First, we observed that the phase values in the background region were not zero. Note that the input and target of the neural network were DPC images instead of intensity measurements, because we cannot measure the intensity of the structured light from the TFT panel (i.e., $S(u')$ in (1)). To address this issue, we used the DPC images as the input and target to reduce the influence of the light intensity. For this reason, the background intensity is cancelled (i.e., $I'_{1,j}(\phi) - I'_{2,j}(\phi)$) and normalized (i.e., $I'_{1,j}(\phi) + I'_{2,j}(\phi)$) to a small value. As a result, the background region carries no information for learning. We speculate that the phase values in the background region are produced by the network bias parameters, which are updated based on the sample's phase (i.e., the non-background regions). Second, the stopping criterion used in the proposed method was a fixed number of epochs, which may not lead to the optimal result; samples with different refractive indices, thicknesses, and phase values may have different convergence rates. Some recently developed stopping strategies for DIP [38,39] will be investigated in our future work. Third, the proposed method took ∼2 minutes to perform phase retrieval, a higher computational cost than Tikhonov regularization. To further speed up DIP-based phase retrieval, one can use a more powerful GPU card. Alternatively, a pre-trained model can be used to initialize the weights, which may reduce the number of required epochs; such a model may be obtained by training on different sample images, after which running a few hundred epochs may be sufficient for the phase recovery to converge.

5. Conclusions

In summary, we proposed a self-supervised learning method based on DIP to retrieve the phase distribution from intensity measurements in QDPC microscopy. Since the retrieval process via DIP can be performed without any pre-training data pairs, the proposed method is easy to adapt to biomedical imaging, where acquiring a large dataset is difficult. More importantly, the proposed method requires neither the weak phase assumption nor a Tikhonov regularization parameter. The experimental results obtained from the micro-lens array and standard phase targets show the capability of the proposed DIP-based method for phase retrieval in QDPC microscopy. The proposed method can be applied to different pupil patterns to further enhance image quality or imaging efficiency. The simplification of the phase retrieval process can greatly help in monitoring live cells.

Funding

National Taiwan University (NTU-CC-112L892902); National Science and Technology Council (108-2221-E-002-168-MY4, 110-2926-B-002-001).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. Y. Park, C. Depeursinge, and G. Popescu, “Quantitative phase imaging in biomedicine,” Nat. Photonics 12(10), 578–589 (2018). [CrossRef]  

2. P. Marquet, C. Depeursinge, and P. J. Magistretti, “Review of quantitative phase-digital holographic microscopy: promising novel imaging technique to resolve neuronal network activity and identify cellular biomarkers of psychiatric disorders,” Neurophotonics 1(2), 020901 (2014). [CrossRef]  

3. L. Tian and L. Waller, “Quantitative differential phase contrast imaging in an LED array microscope,” Opt. Express 23(9), 11394 (2015). [CrossRef]  

4. S. B. Mehta and C. J. R. Sheppard, “Quantitative phase-gradient imaging at high resolution with asymmetric illumination-based differential phase contrast,” Opt. Lett. 34(13), 1924 (2009). [CrossRef]  

5. W. Lee, D. Jung, S. Ryu, and C. Joo, “Single-exposure quantitative phase imaging in color-coded LED microscopy,” Opt. Express 25(7), 8398 (2017). [CrossRef]  

6. Y. Z. Lin, K. Y. Huang, and Y. Luo, “Quantitative differential phase contrast imaging at high resolution with radially asymmetric illumination,” Opt. Lett. 43(12), 2973–2976 (2018). [CrossRef]  

7. A. C. Li, S. Vyas, Y. H. Lin, Y. Y. Huang, H. M. Huang, and Y. Luo, “Patch-based U-net Model for Isotropic Quantitative Differential Phase Contrast Imaging,” IEEE Trans. Med. Imaging 40(11), 3229–3237 (2021). [CrossRef]  

8. Y. H. Lin, A. C. Li, S. Vyas, Y. Y. Huang, J. A. Yeh, and Y. Luo, “Isotropic quantitative differential phase contrast microscopy using radially asymmetric color-encoded pupil,” JPhys Photonics 3(3), 035001 (2021). [CrossRef]  

9. Y. J. Chen, Y. Z. Lin, S. Vyas, T. H. Young, and Y. Luo, “Time-lapse imaging using dual-color coded quantitative differential phase contrast microscopy,” J. Biomed. Opt. 27(05), 056002 (2022). [CrossRef]  

10. Z. F. Phillips, M. Chen, and L. Waller, “Single-shot quantitative phase microscopy with color-multiplexed differential phase contrast (cDPC),” PLoS One 12(2), e0171228 (2017). [CrossRef]  

11. Y. Fan, J. Sun, Q. Chen, X. Pan, M. Trusiak, and C. Zuo, “Single-shot isotropic quantitative phase microscopy based on color-multiplexed differential phase contrast,” APL Photonics 4(12), 121301 (2019). [CrossRef]  

12. H. H. Chen, Y. Z. Lin, and Y. Luo, “Isotropic differential phase contrast microscopy for quantitative phase bio-imaging,” J. Biophotonics 11, e201700364 (2018). [CrossRef]  

13. Y. Fan, J. Sun, Q. Chen, X. Pan, L. Tian, and C. Zuo, “Optimal illumination scheme for isotropic quantitative differential phase contrast microscopy,” Photonics Res. 7(8), 890–904 (2019). [CrossRef]  

14. C. J. R. Sheppard, “Defocused transfer function for a partially coherent microscope and application to phase retrieval,” J. Opt. Soc. Am. A 21(5), 828–831 (2004). [CrossRef]  

15. T. Peng, Z. Ke, S. Zhang, M. Shao, H. Yang, X. Liu, and J. Zhou, “A Compact Real-time Quantitative Phase Imaging System,” preprint 202204.0109.v1 (2022), https://doi.org/10.20944/preprints202204.0109.v1.

16. H. Lu, J. Chung, X. Ou, and C. Yang, “Quantitative phase imaging and complex field reconstruction by pupil modulation differential phase contrast,” Opt. Express 24(22), 25345–25361 (2016). [CrossRef]  

17. G. Ongie, A. Jalal, C. A. Metzler, R. G. Baraniuk, A. G. Dimakis, and R. Willett, “Deep learning techniques for inverse problems in imaging,” IEEE J. Sel. Areas Inf. Theory 1(1), 39–56 (2020). [CrossRef]  

18. M. Genzel, J. Macdonald, and M. Marz, “Solving inverse problems with deep neural networks-robustness included,” IEEE Trans. Pattern Anal. Mach. Intell. 45(1), 1119–1134 (2023). [CrossRef]  

19. T. Zeng, Y. Zhu, and E. Y. Lam, “Deep learning for digital holography: a review,” Opt. Express 29(24), 40572–40593 (2021). [CrossRef]  

20. L. Tian, B. Hunt, M. A. L. Bell, J. Yi, J. T. Smith, M. Ochoa, X. Intes, and N. J. Durr, “Deep learning in biomedical optics,” Lasers Surg. Med. 53(6), 748–775 (2021). [CrossRef]  

21. C. Zuo, J. Qian, S. Feng, W. Yin, Y. Lin, P. Fan, J. Han, K. Qian, and Q. Chen, “Deep learning in optical metrology: a review,” Light: Sci. Appl. 11(1), 39 (2022). [CrossRef]  

22. X. Li, H. Qi, S. Jiang, P. Song, and G. Zeng, “Quantitative phase imaging via a cGAN network with dual intensity images captured under centrosymmetric illumination,” Opt. Lett. 44(11), 2879–2882 (2019). [CrossRef]  

23. G. E. Spoorthi, R. K. S. S. Gorthi, and S. Gorthi, “PhaseNet 2.0: Phase unwrapping of noisy data based on deep learning approach,” IEEE Trans. Image Process. 29, 4862–4872 (2020). [CrossRef]

24. M. R. Kellman, E. Bostan, N. Repina, and L. Waller, “Physics-based learned design: optimized coded-illumination for quantitative phase imaging,” IEEE Trans. Comput. Imaging 5(3), 344–353 (2019). [CrossRef]  

25. V. Lempitsky, A. Vedaldi, and D. Ulyanov, “Deep image prior,” 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 9446–9454 (2018). [CrossRef]

26. K. C. Zhou and R. Horstmeyer, “Diffraction tomography with a deep image prior,” Opt. Express 28(9), 12872–12896 (2020). [CrossRef]  

27. Z. Y. Chen, Z. Wei, R. Chen, and J. W. Dong, “Focus shaping of high numerical aperture lens using physics-assisted artificial neural networks,” Opt. Express 29(9), 13011–13024 (2021). [CrossRef]  

28. Q. Chen, D. Huang, and R. Chen, “Fourier ptychographic microscopy with untrained deep neural network priors,” Opt. Express 30(22), 39597–39612 (2022). [CrossRef]  

29. F. Wang, C. Wang, M. Chen, W. Gong, Y. Zhang, S. Han, and G. Situ, “Far-field super-resolution ghost imaging with a deep neural network constraint,” Light: Sci. Appl. 11(1), 1–54 (2022). [CrossRef]

30. E. Bostan, R. Heckel, M. Chen, M. Kellman, and L. Waller, “Deep phase decoder: self-calibrating phase microscopy with an untrained deep neural network,” Optica 7(6), 559–562 (2020). [CrossRef]  

31. F. Wang, Y. Bian, H. Wang, M. Lyu, G. Pedrini, W. Osten, G. Barbastathis, and G. Situ, “Phase imaging with an untrained neural network,” Light: Sci. Appl. 9(1), 77 (2020). [CrossRef]  

32. Z. F. Phillips and M. Chen, “Technical report: benchmark technologies quantitative phase target.”

33. M. Petrou and C. Petrou, Image Processing: The Fundamentals, 2nd ed. (John Wiley & Sons, Chichester, U.K., 2010), ISBN 9780470745861.

34. M. T. McCann, K. H. Jin, and M. Unser, “Convolutional neural networks for inverse problems in imaging: A review,” IEEE Signal Process. Mag. 34(6), 85–95 (2017). [CrossRef]  

35. O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” International Conference on Medical Image Computing and Computer-assisted Intervention, 234–241 (2015).

36. D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv:1412.6980 (2014). [CrossRef]

37. T. Otaki, “Artifact Halo Reduction in Phase Contrast Microscopy Using Apodization,” Opt. Rev. 7(2), 119–122 (2000). [CrossRef]  

38. Q. Zhou, C. Zhou, H. Hu, Y. Chen, S. Chen, and X. Li, “Towards the automation of deep image prior,” arXiv:1911.07185 (2019). [CrossRef]

39. H. Wang, T. Li, Z. Zhuang, T. Chen, H. Liang, and J. Sun, “Early stopping for deep image prior,” arXiv:2112.06074 (2021). [CrossRef]

Supplementary Material (1)

Supplement 1: Supplemental figures for model training.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.


Figures (6)

Fig. 1. The schematic diagram of the proposed method. (a) Setup of a QDPC microscope for image acquisition. (b) DIP algorithm for phase retrieval.

Fig. 2. The U-net architecture for phase prediction in the proposed algorithm.

Fig. 3. Results of the resolution target with thicknesses of 100 nm and 200 nm. (a1-a3), (c1-c3) Phase predictions with the proposed DIP method for the standard targets of 100 nm and 200 nm thickness, respectively. (a2-a3), (c2-c3) Zoomed-in phase distributions of the highlighted boxes in (a1) and (c1). (b1-b2), (d1-d2) Phase distributions along the dotted lines in (a2-a3) and (c2-c3). The gray lines denote the theoretical phase distribution.

Fig. 4. Results of the focus star phase target with thicknesses of 100 nm and 200 nm. (a1-a3), (c1-c3) Phase predictions with the proposed DIP method. (b1-b2), (d1-d2) Phase distributions along the dotted lines in (a2-a3) and (c2-c3), respectively. The gray lines denote the theoretical phase distribution.

Fig. 5. Results of the micro-lens array. (a) Intensity images captured by the CCD; four phase-contrast images are required. (b) Retrieved phase image of the lens array. (c) Phase distributions along the lines in (b), shown in blue (horizontal) and red (vertical); the black dotted line shows the ideal distribution of the lens array.

Fig. 6. Comparison of results using different phase retrieval algorithms. (a1), (a2) Results for the phase target with 200 nm thickness using DIP and Tikhonov regularization, respectively. (b1), (b2) Zoomed-in phase images of (a1) and (a2), respectively. The background SD of (a1) and (a2) is 0.0062 and 0.0296, respectively.

Tables (1)

Table 1. Comparison of results of different standard targets.

Equations (8)

$$I_c(\mathbf{r}_c)=\left|\int\!\left[\int o(\mathbf{r})\,S(\mathbf{u}_s)\,e^{i2\pi\mathbf{u}_s\cdot\mathbf{r}}\,e^{-i2\pi\mathbf{u}\cdot\mathbf{r}}\,d^2\mathbf{r}\right]P(\mathbf{u})\,e^{i2\pi\mathbf{u}\cdot\mathbf{r}_c}\,d^2\mathbf{u}\right|^2,$$

$$I_{DPC,j}=\frac{I_{1,j}-I_{2,j}}{I_{1,j}+I_{2,j}},$$

$$D_{w}^{*}=\underset{w}{\arg\min}\;\mathcal{L}\big(D_{w}(|I_{DPC}|),\,I_{DPC}\big),$$

$$\mathcal{L}(\phi,I_{DPC})=\frac{1}{JNM}\sum_{j}^{J}\sum_{n}^{N}\sum_{m}^{M}\big\|h(\phi,j)-I_{DPC,j}(m,n)\big\|_{2}^{2},$$

$$h(\phi,j)=\frac{I_{1,j}(\phi)-I_{2,j}(\phi)}{I_{1,j}(\phi)+I_{2,j}(\phi)},$$

$$I_{i,j}(\phi)=\big|e^{i\phi}\ast s_{i,j}(\mathbf{r})\big|^{2},$$

$$\bar{\phi}=D_{w}\big(|I_{DPC}|\big),$$

$$\Delta\phi=\frac{2\pi D\,(n_b-n_s)}{\lambda}.$$
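Two of the relations above are simple enough to sketch numerically: the normalized DPC signal (difference over sum of a complementary half-circle illumination pair) and the thickness-to-phase conversion Δφ = 2πD(n_b − n_s)/λ. The snippet below is a minimal NumPy illustration, not the paper's implementation; the function names, the `eps` stabilizer, and the sample numbers (a 200 nm step, index difference 0.52, 532 nm illumination) are illustrative assumptions.

```python
import numpy as np

def dpc_signal(I1, I2, eps=1e-12):
    # Normalized DPC signal along one illumination axis:
    # difference over sum of the two intensity images from a
    # complementary half-circle pair. eps (an assumed stabilizer)
    # guards against division by zero in dark regions.
    return (I1 - I2) / (I1 + I2 + eps)

def phase_from_thickness(D, n_b, n_s, wavelength):
    # Theoretical phase delay of a thin step of thickness D:
    # delta_phi = 2 * pi * D * (n_b - n_s) / lambda
    return 2.0 * np.pi * D * (n_b - n_s) / wavelength

# Illustrative numbers (not from the paper): a 200 nm step with
# refractive-index difference 0.52, imaged at 532 nm.
dphi = phase_from_thickness(200e-9, 1.52, 1.00, 532e-9)
```

In a DIP training loop of the kind described above, `dpc_signal` would be applied both to the measured intensity pair (giving I_DPC,j) and to the pair synthesized by the physical layer from the predicted phase (giving h(φ, j)); the mean squared difference between the two forms the self-supervised loss.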