
Deep-learning based flat-fielding quantitative phase contrast microscopy


Abstract

Quantitative phase contrast microscopy (QPCM) can realize high-quality imaging of sub-organelles inside live cells without fluorescence labeling, yet it requires at least three phase-shifted intensity images. Herein, we combine a novel convolutional neural network with QPCM to quantitatively obtain the phase distribution of a sample using only two phase-shifted intensity images. Furthermore, we upgraded the QPCM setup with a phase-type spatial light modulator (SLM) to record the two phase-shifted intensity images in a single shot, allowing for real-time quantitative phase imaging of moving samples or dynamic processes. The proposed technique was demonstrated by imaging the fine structures and fast dynamic behaviors of sub-organelles inside live COS7 cells and 3T3 cells, including mitochondria and lipid droplets, with a lateral spatial resolution of 245 nm and an imaging speed of 250 frames per second (FPS). We anticipate that the proposed technique can provide an effective means for high spatiotemporal resolution, high contrast, and label-free dynamic imaging of living cells.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Optical microscopy has revolutionized biology since the 17th century by providing a minimally invasive, high-contrast imaging tool for transparent live samples [1–5]. Fluorescence microscopy can visualize specific structures of interest in biological samples through specific fluorescent labeling [6–8]. However, the phototoxicity and photobleaching induced during fluorescence excitation hinder the long-term observation of living biological samples to a certain extent. In contrast, quantitative phase microscopy (QPM) enables high-contrast and quantitative detection of transparent samples without labeling by recovering the phase delay induced when an illumination beam passes through the sample. It has been widely used in various fields, including surface contour detection [9,10], biomedical imaging [11], and adaptive optics imaging [12,13].

Among the different QPM approaches, digital holographic microscopy (DHM) combines optical interference and digital holography to recover the amplitude and phase information of samples with high accuracy, and it has been playing an important role in many fields [14]. Since traditional DHM requires a high-coherence light source to record a hologram, it suffers from speckle noise and is sensitive to environmental vibrations [15,16]. To improve the image quality and stability of DHM, diffraction phase microscopy (DPM) has been developed by using a low-coherence light source and a common-path interference structure based on a grating [17,18]. Subsequently, polarized DPM based on a polarized grating further improved the reconstruction accuracy of DHM [19]. In recent years, deep learning has revolutionized the field of QPM, providing breakthroughs and new perspectives in the development of quantitative phase microscopy techniques [20–22]. Wang et al. designed a deep convolutional neural network model called Y-Net based on the U-Net network structure, which achieved simultaneous reconstruction of the intensity and phase images from a single hologram [23]. In 2018, Rivenson et al. proposed a single-frame phase recovery technique based on deep learning, which uses only one defocused intensity image captured by a camera for phase recovery [24]. Compared with the traditional Gerchberg-Saxton (G-S) iterative algorithm, it not only avoids the tedious iterative calculation but also further reduces the number of images required for imaging, and it eliminates the need for complex axial mechanical displacement devices in the system.

In recent years, diffraction-based computational QPM has attracted much attention due to its simple structure and strong anti-interference ability. For example, QPM based on the transport of intensity equation (TIE), which uses three diffraction intensity images recorded at three different axial planes in a bright-field microscope, has developed into one of the most representative phase recovery methods, opening up a new "non-interferometric" way for quantitative phase imaging [25]. Fourier ptychographic microscopy (FPM) can achieve ultra-high spatial resolution over a large field of view (FOV) by using diffraction intensity images captured under various illumination angles [26], and it is a novel computational imaging technique for high space-bandwidth-product imaging. FPM has since been further improved with different experimental devices and reconstruction algorithms [27,28]. Quantitative differential phase contrast (DPC) microscopy can also achieve high-quality phase images of transparent samples by using asymmetric LED illumination and a deconvolution process [29]. Deep learning has likewise been combined with diffraction-based computational QPM to improve the imaging performance. For example, Wang et al. combined deep learning with TIE to overcome the requirement of multiple intensity images in TIE [30]. Nguyen et al. developed a new network framework to reconstruct video sequences of dynamic living cells captured using computational microscopy techniques such as FPM [31]. Seong et al. proposed a self-calibrating DPC microscope based on untrained deep learning, which incorporates a nonlinear image-formation model and reconstructs complex object information and aberrations simultaneously without any training dataset [32]. The application of deep learning in diffraction-based QPM facilitates real-time target detection and recognition, enhancing the performance of such systems in fields such as biomedical research and industrial inspection.

Compared with the above-mentioned QPM techniques, quantitative phase contrast microscopy (QPCM) based on the Zernike phase contrast principle has unique advantages in imaging speed, spatial resolution, image quality, and system stability. The common-path optical structure of QPCM makes it highly immune to environmental disturbances, while wide-spectrum illumination greatly improves image quality and avoids the speckle noise caused by laser illumination. The group of Popescu first proposed spatial light interference microscopy (SLIM) [33], which integrates a phase-type spatial light modulator (SLM) into a commercial Zernike phase contrast microscope. Gao et al. further proposed QPCM based on a phase plate etched in a silica plate to reduce halo artifacts and improve the phase reconstruction accuracy [34]. Subsequently, different optical structures and reconstruction algorithms have been proposed to further improve the imaging performance of QPCM [35–38]. Recently, high-resolution imaging of sub-organelles inside live cells was realized with flat-fielding quantitative phase contrast microscopy (FF-QPCM) [39]. FF-QPCM uses annularly distributed LEDs for illumination and a phase-type SLM for phase modulation of the zero- and low-frequency components of a sample, realizing a lateral spatial resolution of 245 nm and an imaging speed of 250 FPS. Notably, FF-QPCM still requires at least three phase-shifted intensity images to recover a high-quality phase distribution of a sample. An iterative algorithm has been proposed to retrieve the phase from a single phase-contrast intensity image, although its spatial resolution is somewhat limited by the vertical illumination used [40].

Recently, deep learning has been demonstrated as a powerful tool for solving various inverse problems through training a network with a large quantity of paired images [41]. In this paper, we develop a convolutional neural network based quantitative phase contrast microscopy for fast, high spatial-temporal resolution imaging of biosamples. FF-QPCM is upgraded to record two phase-shifted intensity images simultaneously in a single shot, from which the quantitative phase image can be reconstructed by the neural network. We believe that the proposed technique can provide a favorable means for high spatiotemporal resolution, high contrast, and label-free dynamic imaging of living cells.

2. Methods

2.1 Experimental implementation of FF-QPCM

The proposed two-channel FF-QPCM was constructed on a commercial microscope body (NIB900, Ningbo Yongxin Optics Co., Ltd, China). The schematic diagram of the FF-QPCM is shown in Fig. 1(a), where an annular illuminator containing 38 LEDs evenly distributed on a circular ring is used as the illumination source. All LEDs are identical and have a wavelength of 470 ± 10 nm (center wavelength ± half width). The annular illuminator is placed 50 mm above the sample, and the emitted partially coherent light directly illuminates the sample from different angles. The illuminator has an effective illumination numerical aperture (NA) of 0.71, which contributes to improved lateral resolution and axial tomographic imaging capability.
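For intuition, the quoted illumination NA can be cross-checked against the ring geometry. The short sketch below assumes the simple relation NA = sin(arctan(r/h)) for an LED ring of radius r placed at height h above the sample; the computed radius is an illustrative estimate, not a value given in the text.

```python
import math

h = 50.0   # height of the LED ring above the sample plane (mm), from the text
na = 0.71  # effective illumination NA quoted in the text

# Oblique illumination angle and ring radius consistent with NA = sin(arctan(r / h))
theta = math.asin(na)    # illumination angle (rad)
r = h * math.tan(theta)  # implied LED ring radius (mm)
print(f"illumination angle = {math.degrees(theta):.1f} deg, ring radius = {r:.1f} mm")
# -> illumination angle = 45.2 deg, ring radius = 50.4 mm
```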


Fig. 1. Data acquisition flowchart. (a) Experimental setup of the FF-QPCM. (b) and (c) Bright-field intensity image and phase distribution of a live COS7 cell in the same field of view. Scalebar in (b): 10 µm. AI, annular illuminator; S, sample; Obj, objective; M, mirror; TL, tube lens; P, polarizer; BS, beam splitter; L, lens; SLM, spatial light modulator; sCMOS, scientific complementary metal oxide semiconductor.


An object wave containing the information of a sample placed on the focal plane of an objective lens is formed when the illumination light passes through the sample. The object wave is propagated onto the confocal plane of the lens pair TL-L1 by the telescope system MO-TL, and is then propagated onto the working plane of an sCMOS camera (Andor, Zyla 4.2) by the telescope system L1-L2. At the same time, the frequency spectrum of the object wave appears on the confocal plane of the telescope system MO-TL, with the zero-frequency components appearing as 38 focused dots and the high-frequency components covering the whole pupil aperture. The frequency spectrum of the object wave is then transmitted to the confocal plane of the telescope system L1-L2, where a phase-type spatial light modulator (SLM) (Meadowlark Optics, MSP1920-400-800-HSP8) is placed to phase-modulate the zero- and low-frequency components of the object wave with a phase series of 0.5mπ (m = 0, 1, 2). A linear polarizer P is positioned before the SLM with its polarization direction along the fast axis of the SLM. Finally, the sCMOS camera synchronously records three phase-shifted intensity images containing the phase information of the sample, which can be expressed as:

$${I_m}(x,y) = {\beta _0}(x,y) + {\beta _c}(x,y) \cdot \cos \left( {(m - 1) \cdot \frac{\pi }{2}} \right) + {\beta _s}(x,y) \cdot \sin \left( {(m - 1) \cdot \frac{\pi }{2}} \right), $$
where (x, y) is the lateral coordinate vector; β0(x, y), βc(x, y), and βs(x, y) are intermediate functions containing the amplitude and phase distributions of the sample field, and they can be reconstructed by standard linear least-squares techniques. Through this simple phase-shifting operation, the phase distribution of the sample is calculated as follows:
$$\varphi (x,y) = {\tan ^{ - 1}}\left( {\frac{{{\beta_s}(x,y)}}{{{\beta_c}(x,y) + 2 \cdot C{{(x,y)}^2}}}} \right) + \alpha (x,y). $$

Here, tan−1 represents the arctangent function. C(x, y) and α(x, y) are the amplitude ratio and phase difference between the high-frequency components and the composite components containing the zero- and low-frequency ones. As a result, high spatiotemporal resolution and high contrast phase imaging of sub-organelles inside live cells is realized, with the slowly varying bulky phase from cell bodies flattened. We note that the phase image reconstructed using Eqs. (1)–(2) is quasi-quantitative, since both the dc term (the non-scattered component) and the low-frequency components are treated together as the reference wave. The phase image obtained highlights the high-frequency components of the sample, which most of the time correspond to sub-cellular organelles.
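For illustration, a minimal NumPy sketch of this reconstruction could read as follows. It solves the linear model of Eq. (1) pixel-wise by least squares and applies Eq. (2); the calibration maps C(x, y) and α(x, y) are assumed to be known (and default to zero here), so this is a sketch rather than the authors' implementation.

```python
import numpy as np

def three_step_phase(I, C=None, alpha=None):
    """Least-squares phase recovery from three phase-shifted images, Eqs. (1)-(2).

    I: array of shape (3, H, W) holding I_m for m = 0, 1, 2.
    C, alpha: calibration maps (amplitude ratio and phase difference between the
    high-frequency and composite zero/low-frequency components); assumed known
    and set to zero by default in this sketch.
    """
    m = np.arange(3)
    delta = (m - 1) * np.pi / 2                      # phase shifts of Eq. (1)
    # Design matrix of the linear model I_m = b0 + bc*cos(delta_m) + bs*sin(delta_m)
    A = np.stack([np.ones(3), np.cos(delta), np.sin(delta)], axis=1)  # (3, 3)
    coeff, *_ = np.linalg.lstsq(A, I.reshape(3, -1), rcond=None)      # (3, H*W)
    b0, bc, bs = (c.reshape(I.shape[1:]) for c in coeff)
    C = np.zeros_like(b0) if C is None else C
    alpha = np.zeros_like(b0) if alpha is None else alpha
    return np.arctan2(bs, bc + 2 * C**2) + alpha     # Eq. (2)
```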

Due to the common-path interference structure, FF-QPCM has excellent immunity to external disturbances. In addition, the ultra-oblique partially coherent illumination significantly enhances the lateral spatial resolution (theoretically 213 nm) and the optical-sectioning capability (theoretically 441 nm). The sCMOS camera has a sampling rate of 500 frames per second (FPS) in a sub-window exposure mode with 512 × 512 pixels, and therefore FF-QPCM reaches a temporal resolution of 250 FPS for quantitative phase imaging by using the interleaved reconstruction algorithm [42]. Although already fast, this imaging speed is still insufficient for some fast dynamics. Therefore, reducing the number of raw phase-shifted intensity images required for phase reconstruction is the key to capturing ultra-fast dynamics inside live cells. To this end, we herein develop a novel convolutional neural network based reconstruction approach for FF-QPCM, with which the phase distribution of a sample can be quantitatively obtained using only two phase-shifted intensity images (0π and 0.5π). Furthermore, we optimized the optical structure of the FF-QPCM to record the required two phase-shifted intensity images simultaneously in a single shot, making the temporal resolution of the system limited only by the exposure of the camera. It should be noted that we used LEDs with a broad wavelength band (470 ± 10 nm) to balance speckle-noise reduction against phase-imaging accuracy. Such a wavelength band causes a chromatic issue when using an SLM, yielding a maximal phase error that can be estimated as 2πΔλ/λ = 0.134 rad. Such a phase error is often negligible in phase contrast microscopy.

2.2 Convolutional neural network architecture

As mentioned before, deep learning has been utilized in various quantitative phase imaging systems, and it has played a significant role in solving inverse problems that cannot be solved by conventional methods. In this part, we develop a convolutional neural network to establish a nonlinear mapping between two phase-shifted intensity images (0π and 0.5π) and the corresponding phase image, reducing the number of phase-shifted intensity images required in FF-QPCM. In FF-QPCM, the phase modulation of the zero- and low-frequency components nonlinearly converts the phase information of the sample into intensity distributions. The phase-shifting modulation is typically nonlinear and can be expressed as

$${I_\textrm{i}}(x,y,{\delta _\textrm{i}}) = \textrm{H}\{{\varphi (x,y),{\delta_\textrm{i}}} \}+ \varepsilon (x,y). $$

Here, (x, y) are the lateral coordinates on the sample plane, Ii(x, y, δi) are the recorded intensity images, φ(x, y) is the true phase of the sample, and ε(x, y) is the noise. δi = 0 or π/2 is the phase shift induced during the i-th acquisition. Therefore, inverting the nonlinear operator H{·,·} is the key to phase recovery in FF-QPCM. To enhance the imaging speed, two phase-shifted intensity images (with phase shifts of 0π and 0.5π) are recorded, from which the phase of a sample can be reconstructed with the aid of the prior learned from the training dataset.
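In practice, the network simply receives the two phase-shifted frames as a two-channel image and regresses the phase map, i.e., it approximates the inverse of H{·,·}. A minimal PyTorch sketch of the input assembly (the normalization choice and the model handle are our assumptions) is:

```python
import torch

I0 = torch.rand(256, 256)  # intensity image with 0 phase shift (placeholder data)
I1 = torch.rand(256, 256)  # intensity image with 0.5*pi phase shift (placeholder data)

# Stack the two phase-shifted frames as a 2-channel input of shape (1, 2, 256, 256)
x = torch.stack([I0, I1], dim=0).unsqueeze(0)
x = (x - x.mean()) / x.std()  # an assumed normalization, not specified in the text
# phase_pred = model(x)       # model: the residual U-Net described below
```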

The convolutional neural network (CNN) is realized as a U-Net with residual blocks; the U-Net was initially proposed by Ronneberger et al. [43]. The CNN consists of three modules: an encoding branch (the left one), skip connections, and a decoding branch (the right one), as shown in Fig. 2. The encoding branch repeatedly uses two Conv-BN-ReLU units, a basic residual block between the two units, and max-pooling for down-sampling. In each Conv-BN-ReLU unit, a 3 × 3 convolutional layer is followed by a batch normalization (BN) layer and a ReLU nonlinear layer. In the encoding branch, the number of feature channels is doubled in each down-sampling step, so that it increases from 2 at the input to 64 at the bottleneck. The decoding branch also repeatedly uses two Conv-BN-ReLU units with a basic residual block between the two convolution operations, together with up-sampling. Each residual block consists of three convolutional layers, and its skip connection adds the input directly to the output of the third convolutional layer. In the decoding branch, the number of feature channels is halved in each up-sampling step. To assist the decoding process, skip connections copy and concatenate feature maps from the encoder to the decoder after passing through two 3 × 3 convolutional layers and the basic residual block between the two convolution operations. The skip connections of the residual blocks speed up model convergence and enhance accuracy. The parameters of the network are updated by optimizing the loss between the predicted image and the ground truth (GT) image reconstructed with the three-step phase-shifting algorithm. For this purpose, the mean squared error (MSE) was used as the loss function:

$$\textrm{L}(\theta ) = \frac{1}{N}\sum\nolimits_{i = 1}^\textrm{N} {||{R_\theta }({I_i})} - {S_i}|{|^2}, $$
where θ represents the network parameters, i denotes the i-th sample in the batch, N is the total number of images in one training batch, Rθ(·) represents the predicted phase image from the CNN, Si is the GT image reconstructed with the three-step phase-shifting algorithm, and Ii denotes the i-th group of intensity images for quantitative phase contrast imaging in the batch.
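A compact PyTorch sketch of such a residual U-Net and the MSE loss of Eq. (4) is given below. It follows the description above (Conv-BN-ReLU units, three-convolution residual blocks, skip connections), but the network depth and channel widths shown here are illustrative assumptions rather than the exact architecture of Fig. 2.

```python
import torch
import torch.nn as nn

class ConvBNReLU(nn.Sequential):
    """A 3x3 convolution followed by batch normalization and ReLU."""
    def __init__(self, cin, cout):
        super().__init__(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class ResidualBlock(nn.Module):
    """Three convolutions; the input is added to the third convolution's output."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(ConvBNReLU(ch, ch), ConvBNReLU(ch, ch),
                                  nn.Conv2d(ch, ch, 3, padding=1))
        self.act = nn.ReLU(inplace=True)
    def forward(self, x):
        return self.act(self.body(x) + x)

class ResUNet(nn.Module):
    """Two-level sketch of the residual U-Net (depth and widths are assumptions)."""
    def __init__(self, cin=2, base=16):
        super().__init__()
        self.enc1 = nn.Sequential(ConvBNReLU(cin, base), ResidualBlock(base))
        self.enc2 = nn.Sequential(ConvBNReLU(base, 2 * base), ResidualBlock(2 * base))
        self.pool = nn.MaxPool2d(2)                                # encoder down-sampling
        self.up = nn.ConvTranspose2d(2 * base, base, 2, stride=2)  # decoder up-sampling
        self.skip = nn.Sequential(ConvBNReLU(base, base), ResidualBlock(base))
        self.dec1 = nn.Sequential(ConvBNReLU(2 * base, base), ResidualBlock(base))
        self.out = nn.Conv2d(base, 1, 1)                           # single-channel phase map
    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.skip(e1), self.up(e2)], dim=1))
        return self.out(d1)

model = ResUNet()
x = torch.randn(8, 2, 256, 256)    # batch of two phase-shifted image pairs
gt = torch.randn(8, 1, 256, 256)   # GT phase maps from three-step phase shifting
loss = nn.MSELoss()(model(x), gt)  # Eq. (4)
```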


Fig. 2. Detailed schematic of the CNN architecture. The residual block used in the encoder and decoder is shown at the bottom of the figure.


2.3 Data acquisition and training

The workflow of deep-learning based flat-fielding quantitative phase contrast microscopy is illustrated in Fig. 3. We acquired 110 groups of raw-image/GT pairs for training by translating the sample across the field of view (FOV) of the microscope. Both the intensity images and the reconstructed phase images had 2048 × 2048 pixels. The GT data were acquired by recording three-step phase-shifted intensity images (0π, 0.5π, and π) and reconstructing the phase image φ(x, y) with the traditional three-step phase-shifting algorithm [44]. The raw images were acquired by recording two-step phase-shifted intensity images (0π and 0.5π). The convolutional neural network was trained using the two-step phase-shifted intensity images (lower right in Fig. 3) as the input and the GT phase image (upper right in Fig. 3) as the output. To enrich the training dataset, each raw image and the corresponding GT image were cropped into a 10 × 10 grid of sub-images with 256 × 256 pixels, with neighboring sub-images overlapping by 30% in area. Eventually, 11,000 pairs of intensity (256 × 256 pixels) and GT (256 × 256 pixels) images were obtained. For test dataset generation, a separate experiment was performed, yielding 10 groups of raw-image/GT pairs, which were further cropped into 1000 pairs of intensity (256 × 256 pixels) and GT (256 × 256 pixels) images. In the implementation, the initial learning rate for network training was set to 0.0001, the number of epochs was set to 200, and the batch size was set to 8. We performed the network training and testing on a PC with an Intel Xeon Gold 6146 CPU (3.2 GHz), 64 GB of RAM, and an NVIDIA GeForce RTX 3060 GPU. The entire training process was conducted on Windows 10, and all the code was written in Python 3.10.9 and PyTorch 1.11.0. The entire training process took approximately 16 hours.
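The cropping step can be sketched as follows; the stride is chosen so that neighboring 256 × 256 crops of a 2048 × 2048 frame overlap by roughly 30%, which reproduces the 10 × 10 grid described above (the exact stride used by the authors is our assumption).

```python
import numpy as np

def crop_grid(img, patch=256, n=10, overlap=0.3):
    """Crop one frame into an n x n grid of patches with overlapping neighbors."""
    stride = int(patch * (1 - overlap))           # 179 px for ~30% overlap
    patches = [img[r:r + patch, c:c + patch]
               for r in range(0, stride * n, stride)
               for c in range(0, stride * n, stride)]
    return np.stack(patches)                      # shape (n*n, patch, patch)

frame = np.zeros((2048, 2048), dtype=np.float32)  # placeholder raw frame
print(crop_grid(frame).shape)                     # -> (100, 256, 256)
```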


Fig. 3. The proposed deep learning method for phase contrast imaging. Scalebar in phase image: 3 µm.


3. Results

3.1 Feasibility and accuracy tests

To verify the feasibility and accuracy of the proposed CNN, we used live COS7 cells (Procell Life Science and Technology Co., Ltd., China) as the sample. Figures 4(a), (e), and (i) are the output phase distributions of live COS7 cells within three different FOVs reconstructed using the CNN. For comparison, Figs. 4(b), (f), and (j) are the corresponding ground-truth phase images reconstructed using the three-step phase-shifting algorithm [44]. Figures 4(c), (g), and (k) reveal the differences between the CNN output phase images and the ground-truth phase images, with maximum errors of 0.067 rad, 0.058 rad, and 0.1725 rad, respectively. Clearly, mitochondria together with other fine structures inside live COS7 cells can be seen with high contrast in the phase images reconstructed using the CNN. To intuitively display the accuracy and feasibility of the CNN, the phase distributions along the same lines in the CNN output phase images and the ground-truth phase images are plotted, as shown in Figs. 4(d), (h), and (l). The comparison shows that the proposed CNN can provide an accurate phase distribution of live COS7 cells, comparable with that reconstructed using the three-step phase-shifting algorithm. Notably, the CNN requires ∼0.016 seconds to reconstruct a phase map, in contrast to ∼0.151 seconds for the three-step phase-shifting algorithm, saving approximately 89.4% of the processing time. Furthermore, the structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) were used as evaluation indicators of the performance of the proposed CNN. The SSIM is defined as

$$\textrm{SSIM}(a,b) = \frac{{(2{\mu _a}{\mu _b} + {c_1})(2{\sigma _{ab}} + {c_2})}}{{(\mu _a^2 + \mu _b^2 + {c_1})(\sigma _a^2 + \sigma _b^2 + {c_2})}}.$$

Here, µa and µb are the mean values of images a and b, respectively; σa² and σb² are their variances; σab is the covariance of a and b; and c1 and c2 are regularization parameters. In brief, the larger the SSIM, the more accurately the CNN recovers the sample's structure; and the higher the PSNR, the more accurate the phase values provided by the CNN. The PSNR is defined as

$$\textrm{PSNR = 10} \cdot {\log _{10}}(\frac{{\textrm{MA}{\textrm{X}^2}}}{{\textrm{MSE}}}), $$
where MAX is the maximum possible pixel value of the image and MSE stands for the mean squared error, calculated as
$$\textrm{MSE} = \frac{1}{{mn}}{\sum\nolimits_{i = 0}^{m - 1} {\sum\nolimits_{j = 0}^{n - 1} {[I(i,j) - K(i,j)]} } ^2},$$
wherein I(i, j) is the pixel value of the original image, K(i, j) is the pixel value of the processed image, m is the number of rows in the image, and n is the number of columns in the image. Comparing the CNN output with the ground-truth phase images yields an SSIM of 0.8647 and a PSNR of 29.13, revealing that the proposed CNN is capable of high-quality quantitative phase measurement of transparent live cells.
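For reference, Eqs. (5)–(7) can be implemented directly as below. This sketch computes SSIM from the global image statistics exactly as written in Eq. (5) (rather than the windowed variant common in image-processing libraries), and the regularization constants c1 and c2 are placeholder assumptions.

```python
import numpy as np

def ssim_global(a, b, c1=1e-4, c2=9e-4):
    """SSIM from global statistics, per Eq. (5); c1 and c2 are assumed values."""
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()          # covariance of a and b
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a**2 + mu_b**2 + c1) * (var_a + var_b + c2))

def psnr(ref, test, max_val=None):
    """PSNR per Eqs. (6)-(7); MAX defaults to the reference image's peak value."""
    mse = np.mean((ref - test) ** 2)                # Eq. (7)
    max_val = ref.max() if max_val is None else max_val
    return 10 * np.log10(max_val**2 / mse)          # Eq. (6)
```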


Fig. 4. Comparison of the output of the CNN with the ground truth (GT) on imaging live COS7 cells. (a), (e), (i) CNN output phase images of COS7 cells in three different FOVs. (b), (f), (j) Ground-truth phase images in the same FOVs as (a), (e), (i). (c), (g), (k) Phase differences between (a) and (b), (e) and (f), and (i) and (j). (d) Phase distributions along the lines in (a) and (b). (h) Phase distributions along the lines in (e) and (f). (l) Phase distributions along the lines in (i) and (j). Scalebar in (k): 3 µm.


The CNN trained with live COS7 cells was then used to recover the phase distributions of live 3T3 cells to further verify its generalization ability, as shown in Fig. 5. Figures 5(a), (e), and (i) are typical CNN output phase images in three FOVs, while Figs. 5(b), (f), and (j) show the corresponding ground-truth phase images. Figures 5(c), (g), and (k) reveal the differences between the CNN output and the ground-truth phase images, with maximum errors of 0.079 rad, 0.072 rad, and 0.063 rad, respectively. To intuitively display the accuracy of the CNN, phase distributions along the same lines in the CNN output phase images and the ground-truth phase images are plotted, as shown in Figs. 5(d), (h), and (l), revealing a high consistency. According to the evaluation method described in Eqs. (5)–(7), we further quantitatively tested the generalization capability and accuracy of the CNN. The statistics of SSIM and PSNR for COS7 cells and 3T3 cells were compiled, as shown in Fig. 6. We randomly selected 100 groups of data pairs of 3T3 cells and COS7 cells and plotted the SSIM and PSNR values between the network output and the GT images. The SSIM and PSNR are 0.8 ± 0.1 (mean ± s.d.) and 25 ± 5 for 3T3 cells, and 0.8 ± 0.1 (mean ± s.d.) and 31 ± 4 for COS7 cells. These statistics reveal that the phase images obtained by the proposed CNN have a high degree of consistency with the ground-truth phase images in terms of sample structure and phase values.


Fig. 5. Comparison of the output of the CNN with the ground truth (GT) on imaging 3T3 cells. (a), (e), (i) CNN output phase images of 3T3 cells in three different FOVs. (b), (f), (j) Ground-truth phase images in the same FOVs as (a), (e), (i). (c), (g), (k) Phase differences between (a) and (b), (e) and (f), and (i) and (j). (d) Phase distributions along the lines in (a) and (b). (h) Phase distributions along the lines in (e) and (f). (l) Phase distributions along the lines in (i) and (j). Scalebar in (k): 3 µm.



Fig. 6. Statistics of SSIM and PSNR for COS7 cells and 3T3 cells.


3.2 Investigation of the dynamic process of live COS7 cells by using the CNN

The generalization capability and adaptability of the proposed CNN were further tested using raw images from another FF-QPCM system constructed on a Leica body (DMi8, Leica, Germany). The proposed CNN can reconstruct the phase distribution of a sample using only two phase-shifted intensity images, and hence it is suitable for studying the dynamics of subcellular organelles inside live cells. The left image in Fig. 7 is a recovered phase image of live COS7 cells at a certain time point, allowing comprehensive observation of several organelles, such as mitochondria, lipid droplets, the cell nucleus, and pseudopodia. The image series on the right of Fig. 7 shows the dynamics of subcellular interactions. To be specific, a donut-shaped mitochondrion (pointed to by orange arrows) interacts closely with several noodle-like mitochondria (pointed to by red arrows). It was reported that the formation of donut-shaped mitochondria is triggered by the opening of the permeability transition pore or K+ channels during hypoxia in a glucose-free medium and during reoxygenation in a glucose-containing medium [45]. Our observations with the proposed CNN, however, were all performed under normal culture conditions without any intervention, which implies that other mechanisms for the formation of donut-shaped mitochondria exist and remain to be explored. This dynamic observation reveals the dynamic regulation of mitochondria within live cells and the complexity of cellular metabolism. During the observation, we also noted significant variations in lipid droplets, which may be associated with mitochondrial activity and energy metabolism. Therefore, the quantitative phase dynamics of live cells revealed by our CNN provide crucial clues for a deeper understanding of the intricate relationships among various sub-organelles.


Fig. 7. Quantitative dynamic observation of live COS7 cells by using the CNN. Scalebar: 10 µm. LD, lipid droplets; CN, cell nucleus; M, mitochondria; P, pseudopodia.


3.3 CNN-based single-shot FF-QPCM

Using the proposed CNN, the quantitative phase distribution of a sample can be well recovered from two phase-shifted intensity images recorded by the FF-QPCM, effectively reducing the data amount required for phase recovery. In this part, we ameliorated the optical structure of the FF-QPCM into a parallel phase-shifting FF-QPCM setup (as shown in Fig. 8(a)-(b)) to simultaneously record the two phase-shifted intensity images (0π and 0.5π) in a single shot, making the temporal resolution of the system limited only by the exposure of the camera. In the ameliorated FF-QPCM, the polarization direction of the linear polarizer P is set to 45° from the fast axis of the SLM. In this case, the frequency spectrum of the object wave reaching the working plane of the SLM is divided into two identical copies along the fast and slow axes of the SLM, and the two copies are incoherent with each other. For convenience, they are termed the fast-axis and slow-axis object waves. Since the SLM only modulates the phase of the light field polarized along its fast axis, the slow-axis object wave is not phase-modulated by the SLM and forms the bright-field intensity image. To spatially separate these two orthogonally polarized object waves on the sCMOS camera plane, a combined phase pattern that superimposes the annular phase shifter with a blazed grating is loaded onto the SLM. Finally, the bright-field slow-axis object wave and the phase-modulated fast-axis object wave are recorded in parallel on the sCMOS plane, as shown in Fig. 8(c). The intensity distributions of the two object waves are I0(x, y) and I1(x, y), respectively. Note that the phase modulation pattern (blazed grating + annular phase shifter) loaded onto the SLM is wrapped between 0 and 2π to better utilize the limited phase modulation range of the SLM, as shown in Fig. 8(b). To avoid the overlapping of I0 and I1, a rectangular blocking window with a narrow width of x0 is added on the confocal plane of the tube lens TL and the lens L1. Provided the blazed grating has a phase modulation function of the form exp(i2πxx0/(λf)), I0 and I1 will be well separated on the sCMOS plane. Here, x is the spatial coordinate along the slow axis of the SLM, λ is the central wavelength of the illumination beam, and f is the focal length of the lens L2. Notably, before feeding the input into the CNN, the intensity images I0 and I1 are cropped and matched pixel to pixel. As a result, we can obtain two phase-shifted intensity images simultaneously in a single shot, despite sacrificing part of the FOV. It is worth mentioning that I0 and I1 are aligned along the row direction of the camera to avoid the blurring caused by the rolling shutter of the sCMOS (Zyla 4.2), so that the lag time in each image is below 1 ms. By using the single-shot CNN-based FF-QPCM, the fast dynamics of lipid droplets inside live COS7 cells were captured with high resolution and high contrast, as shown in Fig. 8(c). As a whole, the combination of the proposed CNN and the ameliorated FF-QPCM provides high spatiotemporal resolution quantitative phase imaging of sub-organelles inside live cells. It is also interesting to compare the proposed single-shot FF-QPCM with the existing single-shot QPCM based on a CCD with a pixelated micro-polarizer array [46,47]. The proposed method utilizes spatial division of the sensor [48] and hence has a higher spatial resolution but a smaller field of view (FOV). By contrast, the latter utilizes a pixel-multiplexing scheme and therefore has a larger FOV but a lower sampling resolution.
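As an illustration of the combined SLM pattern, the sketch below superimposes a blazed-grating phase ramp of the form 2πxx0/(λf) with a 0.5π annular phase shifter and wraps the sum into [0, 2π). All geometric parameters (SLM pixel grid and pitch, annulus radius and width, shear distance x0, focal length f) are illustrative assumptions, not the calibrated values of the actual setup.

```python
import numpy as np

H, W = 1152, 1920               # assumed SLM pixel grid
pitch = 9.2e-6                  # assumed SLM pixel pitch (m)
lam, f, x0 = 470e-9, 0.2, 1e-3  # wavelength (m); assumed L2 focal length (m) and shear x0 (m)

y, x = np.mgrid[:H, :W].astype(float)
x_m = (x - W / 2) * pitch                # physical coordinate along the slow axis
blaze = 2 * np.pi * x_m * x0 / (lam * f) # blazed-grating phase ramp

r = np.hypot(x - W / 2, y - H / 2)       # radius from the SLM center (pixels)
ring = np.abs(r - 400) < 15              # assumed annulus of zero/low-frequency spots
pattern = np.mod(blaze + 0.5 * np.pi * ring, 2 * np.pi)  # wrapped to [0, 2*pi), as in Fig. 8(b)
```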


Fig. 8. CNN-based single-shot FF-QPCM. (a) Two-step parallel phase-shifting FF-QPCM. (b) Phase modulation pattern loaded on the SLM, superimposing a blazed grating with a 0.5π annular phase shifter. (c) Separated 0.5π phase-shifted intensity image (left) and bright-field intensity image (right) used as the input, and the real-time fast dynamics of lipid droplets inside live COS7 cells. Scalebar in (c): 1 µm. AI, annular illuminator; S, sample; Obj, objective; M, mirror; TL, tube lens; P, polarizer; BS, beam splitter; L, lens; SLM, spatial light modulator; RBW, rectangular blocking window; sCMOS, scientific complementary metal oxide semiconductor.


4. Conclusion

In this paper, we developed a convolutional neural network (CNN) based flat-fielding quantitative phase contrast microscopy (FF-QPCM) for high spatiotemporal resolution quantitative phase imaging of sub-organelles inside live cells. The CNN has a simple structure, high feasibility, and good generalization ability. The combination of the CNN and FF-QPCM greatly promotes the application of deep learning in high-resolution QPM. By using the CNN, the number of raw phase-shifted intensity images required in FF-QPCM is reduced from three to two, greatly enhancing the temporal resolution. Additionally, we ameliorated the optical structure of the FF-QPCM into a two-step parallel phase-shifting FF-QPCM module, which records the two phase-shifted intensity images (0π and 0.5π) simultaneously in a single shot, making the temporal resolution of the system limited only by the exposure of the camera. The proposed CNN-based FF-QPCM has been applied to reveal the quantitative phase distribution of sub-organelles inside live cells, including mitochondria and lipid droplets. It is worth mentioning that the concept of using deep learning to reduce the number of phase-shifting steps from three to two can also be applied to general laser phase-shifting interferometry and spatially incoherent digital holography. Furthermore, the fast dynamics and interactions of mitochondria and lipid droplets have been recorded by the single-shot CNN-based FF-QPCM, indicating its feasibility and broad application prospects. It is also worth mentioning that single-shot CNN-based FF-QPCM has a limited FOV, because undesirable diffracted light is induced when a high-frequency blazed grating (sampled with a small number of pixels per period) is loaded onto the SLM.

We believe the CNN-based FF-QPCM will offer an effective approach for high spatiotemporal resolution, high contrast, and label-free dynamic imaging of sub-organelles inside live cells.

Funding

National Key Research and Development Program of China (2022YFE0100700); National Natural Science Foundation of China (12104354, 62075177, 62105251, 62335018); Natural Science Basic Research Program of Shaanxi (2022J0-788, 2022JQ-122); Natural Science Foundation of Shaanxi Province (2023-JC-YB-518, 2022JQ-122, 2023JCON0731); Key Research and Development Program of Shaanxi Province (2024GH-ZDXM-05); Fundamental Research Funds for the Central Universities (OTZX23008, OTZX23013, OTZX23024, XJSJ23137).

Acknowledgments

P. G. and Y. M. conceived and supervised the project. W. W. and K. Z. performed experiments and data analysis. X. L., W. F., Z. X., R. L., N. A., J. Z., and S. A. contributed to data analysis. W. W. wrote the draft of the manuscript. Y. M. and P. G. revised the manuscript. All the authors edited the manuscript.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. C. S. Lim, E. S. Kim, J. Y. Kim, et al., “Measurement of the Nucleus Area and Nucleus/Cytoplasm and Mitochondria/Nucleus Ratios in Human Colon Tissues by Dual-Colour Two-Photon Microscopy Imaging,” Sci. Rep. 5(1), 18521 (2015). [CrossRef]  

2. L. Schermelleh, A. Ferrand, T. Huser, et al., “Super-resolution microscopy demystified,” Nat. Cell Biol. 21(1), 72–84 (2019). [CrossRef]  

3. S. Shin, K. Kim, J. Yoon, et al., “Active illumination using a digital micromirror device for quantitative phase imaging,” Opt. Lett. 40(22), 5407–5410 (2015). [CrossRef]  

4. P. Gao and C. Yuan, “Resolution enhancement of digital holographic microscopy via synthetic aperture: a review,” Light: Adv. Manuf. 3(1), 105–120 (2022). [CrossRef]  

5. Y. Liu, X. Zhang, F. Su, et al., “Contrast-enhanced fluorescence microscope by LED integrated excitation cubes,” 4, 94–103 (2023).

6. K. Okabe, N. Inada, C. Gota, et al., “Intracellular temperature mapping with a fluorescent polymeric thermometer and fluorescence lifetime imaging microscopy,” Nat. Commun. 3(1), 705 (2012). [CrossRef]  

7. P. A. Summers, B. W. Lewis, J. Gonzalez-Garcia, et al., “Visualising G-quadruplex DNA dynamics in live cells by fluorescence lifetime imaging microscopy,” Nat. Commun. 12(1), 162 (2021). [CrossRef]  

8. P. Gao, B. Prunsche, L. Zhou, et al., “Background suppression in fluorescence nanoscopy with stimulated emission double depletion,” Nat. Photonics 11(3), 163–169 (2017). [CrossRef]  

9. T. Kozacki, M. Mikula-Zdankowska, J. Martinez-Carranza, et al., “Single-shot digital multiplexed holography for the measurement of deep shapes,” Opt. Express 29(14), 21965–21977 (2021). [CrossRef]  

10. J. Martinez-Carranza, M. Mikula-Zdankowska, M. Ziemczonok, et al., “Multi-incidence digital holographic profilometry with high axial resolution and enlarged measurement range,” Opt. Express 28(6), 8185–8199 (2020). [CrossRef]  

11. C. J. Mann, L. F. Yu, C. M. Lo, et al., “High-resolution quantitative phase-contrast microscopy by digital holography,” Opt. Express 13(22), 8693–8698 (2005). [CrossRef]  

12. M. J. Booth, “Adaptive optical microscopy: the ongoing quest for a perfect image,” Light: Sci. Appl. 3(4), e165 (2014). [CrossRef]  

13. Y. F. Shu, J. S. Sun, J. M. Lyu, et al., “Adaptive optical quantitative phase imaging based on annular illumination Fourier ptychographic microscopy,” PhotoniX 3(1), 15 (2022). [CrossRef]  

14. P. Memmolo, L. Miccio, M. Paturzo, et al., “Recent advances in holographic 3D particle tracking,” Adv. Opt. Photonics 7(4), 713–755 (2015). [CrossRef]  

15. V. Micó, J. J. Zheng, J. Garcia, et al., “Resolution enhancement in quantitative phase microscopy,” Adv. Opt. Photonics 11(1), 135–214 (2019). [CrossRef]  

16. J. J. Zheng, P. Gao, and X. P. Shao, “Opposite-view digital holographic microscopy with autofocusing capability,” Sci. Rep. 7(1), 9 (2017). [CrossRef]  

17. B. Bhaduri, C. Edwards, H. Pham, et al., “Diffraction phase microscopy: principles and applications in materials and life sciences,” Adv. Opt. Photonics 6(1), 57–119 (2014). [CrossRef]  

18. G. Popescu, T. Ikeda, R. R. Dasari, et al., “Diffraction phase microscopy for quantifying cell structure and dynamics,” Opt. Lett. 31(6), 775–777 (2006). [CrossRef]  

19. M. L. Zhang, Y. Ma, Y. Wang, et al., “Polarization grating based on diffraction phase microscopy for quantitative phase imaging of paramecia,” Opt. Express 28(20), 29775–29787 (2020). [CrossRef]  

20. C. C. Wu, Z. Y. Qiao, N. Zhang, et al., “Phase unwrapping based on a residual en-decoder network for phase images in Fourier domain Doppler optical coherence tomography,” Biomed. Opt. Express 11(4), 1760–1771 (2020). [CrossRef]  

21. T. Zhang, S. W. Jiang, Z. X. Zhao, et al., “Rapid and robust two-dimensional phase unwrapping via deep learning,” Opt. Express 27(16), 23173–23185 (2019). [CrossRef]  

22. J. M. Qian, S. J. Feng, T. Y. Tao, et al., “Deep-learning-enabled geometric constraints and phase unwrapping for single-shot absolute 3D shape measurement,” APL Photonics 5(4), 10 (2020). [CrossRef]  

23. K. Q. Wang, J. Z. Dou, Q. Kemao, et al., “Y-Net: a one-to-two deep learning framework for digital holographic reconstruction,” Opt. Lett. 44(19), 4765–4768 (2019). [CrossRef]  

24. Y. Rivenson, Y. B. Zhang, H. Gnaydin, et al., “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light: Sci. Appl. 7(1), 16 (2018). [CrossRef]  

25. C. Zuo, Q. Chen, L. Tian, et al., “Transport of intensity phase retrieval and computational imaging for partially coherent fields: The phase space perspective,” Opt. Lasers Eng. 71, 20–32 (2015). [CrossRef]  

26. G. A. Zheng, C. Shen, S. W. Jiang, et al., “Concept, implementations and applications of Fourier ptychography,” Nat. Rev. Phys. 3(3), 207–223 (2021). [CrossRef]  

27. X. Z. Ou, R. Horstmeyer, C. H. Yang, et al., “Quantitative phase imaging via Fourier ptychographic microscopy,” Opt. Lett. 38(22), 4845–4848 (2013). [CrossRef]  

28. X. Z. Ou, G. A. Zheng, and C. H. Yang, “Embedded pupil function recovery for Fourier ptychographic microscopy,” Opt. Express 22(5), 4960–4972 (2014). [CrossRef]  

29. C. G. Rylander, D. P. Davé, T. Akkin, et al., “Quantitative phase-contrast imaging of cells with phase-sensitive optical coherence microscopy,” Opt. Lett. 29(13), 1509–1511 (2004). [CrossRef]  

30. K. Q. Wang, J. L. Di, L. Ying, et al., “Transport of intensity equation from a single intensity image via deep learning,” Opt. Lasers Eng. 134, 106233 (2020). [CrossRef]  

31. T. Nguyen, Y. J. Xue, Y. Z. Li, et al., “Deep learning approach to Fourier ptychographic microscopy,” Opt. Express 26(20), 26470–26484 (2018). [CrossRef]  

32. B. Seong, I. Kim, T. Moon, et al., “Untrained deep learning-based differential phase-contrast microscopy,” Opt. Lett. 48(13), 3607–3610 (2023). [CrossRef]  

33. Y. Park, C. Depeursinge, and G. Popescu, “Quantitative phase imaging in biomedicine,” Nat. Photonics 12(10), 578–589 (2018). [CrossRef]  

34. P. Gao, B. L. Yao, I. Harder, et al., “Phase-shifting Zernike phase contrast microscopy for quantitative phase measurement,” Opt. Lett. 36(21), 4305–4307 (2011). [CrossRef]  

35. Q. Wei, Y. Y. Li, J. Vargas, et al., “Principal component analysis-based quantitative differential interference contrast microscopy,” Opt. Lett. 44(1), 45–48 (2019). [CrossRef]  

36. Y. Ma, L. Ma, M. Liu, et al., “Dual-modality quantitative phase-contrast microscopy based on pupil phase modulation (DQPCM),” Opt. Commun. 522, 128685 (2022). [CrossRef]  

37. D. W. E. Noom, K. S. E. Eikema, and S. Witte, “Lensless phase contrast microscopy based on multiwavelength Fresnel diffraction,” Opt. Lett. 39(2), 193–196 (2014). [CrossRef]  

38. H. H. Chen, Y. Z. Lin, and Y. Luo, “Isotropic differential phase contrast microscopy for quantitative phase bio-imaging,” J. Biophotonics 11, 7 (2018). [CrossRef]  

39. Y. Ma, T. Q. Dai, Y. Z. Lei, et al., “Label-free imaging of intracellular organelle dynamics using flat-fielding quantitative phase contrast microscopy (FF-QPCM),” Opt. Express 30(6), 9505–9520 (2022). [CrossRef]  

40. N. Hai and J. Rosen, “Phase contrast-based phase retrieval: a bridge between qualitative phase contrast and quantitative phase imaging by phase retrieval algorithms,” Opt. Lett. 45(20), 5812–5815 (2020). [CrossRef]  

41. Y. Li, Y. Luo, D. Mengu, et al., “Quantitative phase imaging (QPI) through random diffusers using a diffractive optical network,” Light: Adv. Manuf. 4(3), 206–221 (2023). [CrossRef]  

42. Y. Ma, D. Li, Z. J. Smith, et al., “Structured illumination microscopy with interleaved reconstruction (SIMILR),” J. Biophotonics 11, 9 (2018). [CrossRef]  

43. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” in Medical Image Computing and Computer-Assisted Intervention (Springer International Publishing, 2015), 234–241.

44. P. S. Huang and S. Zhang, “Fast three-step phase-shifting algorithm,” Appl. Opt. 45(21), 5086–5091 (2006). [CrossRef]  

45. X. Liu and G. Hajnóczky, “Altered fusion dynamics underlie unique morphological changes in mitochondria during hypoxia-reoxygenation stress,” Cell Death Differ. 18(10), 1561–1572 (2011). [CrossRef]  

46. W. You, Y. Jiao, J. Wang, et al., “Single-path single-shot phase-shifting quantitative phase microscopy with annular bright-field illumination,” Opt. Continuum 1(6), 1305–1313 (2022). [CrossRef]  

47. W. You, W. Lu, and X. Liu, “Single-shot wavelength-selective quantitative phase microscopy by partial aperture imaging and polarization-phase-division multiplexing,” Opt. Express 28(23), 34825–34834 (2020). [CrossRef]  

48. T. Tahara, “Incoherent digital holography with two polarization-sensitive phase-only spatial light modulators and reduced number of exposures,” Appl. Opt. 63(7), B24–B31 (2024). [CrossRef]  


