
Deep learning Mueller matrix feature retrieval from a snapshot Stokes image

Open Access

Abstract

A Mueller matrix (MM) provides a comprehensive representation of the polarization properties of a complex medium and encodes rich information on its macro- and microstructural features. Histopathological features can be characterized by polarization parameters derived from the MM. However, an MM must be derived from at least four Stokes vectors corresponding to four different incident polarization states, which makes the quality of the MM very sensitive to small changes in the imaging system or the sample between exposures, such as fluctuations in the illumination and misregistration of the polarization component images. In this work, we use a deep learning approach to retrieve MM-based specific polarimetry basis parameters (PBPs) from a snapshot Stokes vector. This data post-processing method is capable of eliminating errors introduced by multiple exposures, as well as reducing the imaging time and hardware complexity. It shows the potential for accurate MM imaging of dynamic samples or in unstable environments. The translation model is based on a generative adversarial network with customized loss functions. The effectiveness of the approach was demonstrated on liver and breast tissue slices and blood smears. Finally, we evaluated the performance with quantitative similarity assessments at both the pixel and image levels.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Polarization imaging is a non-invasive, label-free and non-contact technique. It is sensitive to micro-morphological changes in tissues and has shown promising potential in many biomedical studies and clinical diagnoses in both backward scattering and transmission configurations [1,2]. The polarization state of light can be described by the four-element Stokes vector $\boldsymbol {S}=(S_0 \ S_1 \ S_2 \ S_3)^{\top }$. The polarization property of a sample can be represented by a Mueller matrix (MM) that transforms an incident Stokes vector $\boldsymbol {S}_{in}$ into $\boldsymbol {S}_{out}$ after interaction with the sample as $\boldsymbol {S}_{out}=\boldsymbol {MS}_{in}$, where $\boldsymbol {M}$ is a $4\times 4$ matrix [3]. The MM is a comprehensive description of the polarization characteristics of scattering samples. Individual MM elements and polarimetry basis parameters (PBPs) derived from the MM can be used to characterize the microstructural features of tissues. The derived parameters are related to the structures of the samples and have clearer physical meanings in terms of polarization optics [4–6], microstructures [7] or symmetry [8]. In biomedical applications, specific PBPs can be used to characterize pathological features. For example, the retardance parameter is sensitive to fibrous structures and can be used to distinguish healthy and abnormal regions in cervical [9] and liver [10,11] tissue slices.

To obtain the 16 elements of an MM image, at least 16 independent intensity images must be measured under different polarization modulations [1]. For example, by rotating the retarders in both the polarization state generator (PSG) and the polarization state analyzer (PSA), 30 polarization component images are recorded to calculate the 16 elements of the MM [12–14]. By using two division-of-focal-plane (DoFP) polarimeters to capture the images of a Stokes vector in a single shot, a full MM can be derived from only four exposures [15]. Inspired by compressive sensing, some snapshot polarization imaging systems have been developed [16–18]; however, they aim to reconstruct polarization state or Stokes component images rather than the MM or derived PBPs. There are also efforts to develop snapshot MM polarimeters [19–21], but these techniques can only be used on homogeneous media.

Obtaining an MM image requires at least four Stokes images. As shown in Fig. 1, strict image co-registration and high stability of the light source are critical for accurate MM measurements, but may not be easily guaranteed, especially for living samples or unstable external environments. In this work, we use a well-designed deep learning network to generate MM-derived polarization parameter images from a snapshot Stokes vector. The snapshot Stokes polarimetry is based on dual DoFPs, each of which is equipped with a pixelated micro-polarizer array. The PSA captures eight intensity images for different polarization components in a single shot, which are combined with the instrument matrix of the PSA to calculate the four-channel ($S_0$–$S_3$) Stokes images [15]. The input of the deep learning model is the three channel images $S_1$, $S_2$ and $S_3$, each normalized by the intensity image $S_0$, so fluctuations of the light source do not affect the Stokes images fed into the model. Once the pretrained model is built, the Stokes imaging system needs no moving or active parts. This eliminates errors due to changes in the sample, motion in the system and fluctuations in the illumination intensity between exposures.

Fig. 1. The impact of co-registration and light source stability on the image quality of the retardance parameter (MMPD-$\delta$) of liver tissues. (a) Three channels ($S_1$-$S_3$) of a Stokes image in a snapshot, and the ground truth MMPD-$\delta$ image with good registration and a stable light source. (b) Simulated MMPD-$\delta$ images with 1% (first row) and 2% (second row) light intensity fluctuation in the MM imaging. (c) Simulated MMPD-$\delta$ images with shifts of one (first row) and two (second row) pixels in the MM imaging. (d) Generated image from the snapshot Stokes image using our method.

Deep learning has shown its capability of solving ill-posed or inverse problems, and has been adopted in various image enhancement and reconstruction applications in optical imaging [22–24]. For example, deep learning can improve the quality of images to match higher resolution images (captured with higher numerical aperture objective lenses) in bright-field [25], fluorescence [26] and digital holographic [27] microscopes. Miao et al. [28] and Wang et al. [29] designed neural networks to reconstruct hyperspectral images from a single shot. In addition, deep learning methods have been used to improve image reconstruction in computed tomography (CT) [30], photoacoustic (PA) imaging [31], computational ghost imaging (CGI) [32] and compressed ultrafast photography (CUP) [33]. The development of such models usually consists of a training phase and a prediction phase. In the training phase, image pairs (input and ground truth) are fed into the deep learning model, which learns the optimal parameters by back propagation. The well-trained model then outputs reconstructed images corresponding to new inputs. In this work, we propose an end-to-end reconstruction network to generate polarization parameter images from a single Stokes vector. The translation model is based on a conditional generative adversarial network (GAN) [34,35] consisting of an encoder-decoder-based generator and an effective-receptive-field-based discriminator [36]. Besides the GAN loss function, we use $L_1$ and $SSIM$ [37] losses to improve the quality of the output images at the pixel and image level respectively. Furthermore, we add a total variation (TV) [38] loss to reduce the noise in the generated images.

In summary, we have made the following contributions in this paper.

  • We design a deep-learning-based model to generate MM-derived PBP images from a Stokes image that can be obtained in a single shot. This method avoids the precision issues caused by the exceptionally high sensitivity of experimentally retrieved MM images to instabilities in the system and the sample.
  • We use a conditional GAN to learn the statistical mapping from Stokes polarimetry to MM polarimetry, and show its efficacy on MM polar decomposition (MMPD) parameters of liver tissues, breast tissues and blood smears.
  • We demonstrate that this method is not sensitive to the polarization state of the incident light when taking snapshot Stokes images with the DoFP polarimeters, which reduces the complexity of the hardware design and makes the method more robust and practical.

The rest of this paper is organized as follows: Section 2 introduces the experimental setup and data preprocessing in the data collection procedure. Section 3 describes the basic idea of conditional GAN and the architecture of the networks. The experimental results are given in Section 4. The conclusions and discussions are given in Section 5.

2. Experimental setup and data collection

2.1 Stokes polarimetry and Mueller matrix polarimetry

In the data collection procedure, we used the MM microscope based on dual division-of-focal-plane polarimeters (DoFPs-MMM) [15]. As shown in Fig. 2(a) and (b), the PSA of the DoFPs-MMM detects the full polarization state simultaneously using two DoFP polarimeters, a beam splitter cube and a fixed quarter-wave plate R2. The PSG consists of a fixed-angle polarizer P1 and a rotatable quarter-wave plate R1. The light emitted from the LED (3 W, 632 nm, $\Delta \lambda = 20$ nm) undergoes polarization modulation by the PSG, passes through the sample, and is then analyzed by the PSA. Before being put into use, the PSA is calibrated with a standard polarization light source or a PSG. At least four independent polarization states must be detected and analyzed to reconstruct the $8\times 4$ instrument matrix $\boldsymbol {A}_{PSA}$ of the PSA pixel by pixel according to $\boldsymbol {A}_{PSA}=[\boldsymbol {I}_{air}][\boldsymbol {S}_{in}]^{-1}$. After calibration, the full Stokes vector of any light emerging from the sample can be obtained according to $\boldsymbol {S}_{out}=\boldsymbol {A}_{PSA}^{-1}\boldsymbol {I}$, where $\boldsymbol {I}$ is an $8\times 1$ vector that contains the four polarization channel images of the two DoFP polarimeters after interpolation, expressed as $\boldsymbol {I}=\left [\boldsymbol {I}_{0}^{(1)}, \boldsymbol {I}_{45}^{(1)}, \boldsymbol {I}_{90}^{(1)}, \boldsymbol {I}_{135}^{(1)}, \boldsymbol {I}_{0}^{(2)}, \boldsymbol {I}_{45}^{(2)}, \boldsymbol {I}_{90}^{(2)}, \boldsymbol {I}_{135}^{(2)}\right ]^{\top }$. To calculate the MM of the sample, the quarter-wave plate R1 in the PSG is rotated to four preset angles to generate four independent incident polarization states, as shown in Fig. 2(c), which compose the instrument matrix $\left [\boldsymbol {S}_{\textrm {in}}\right ]$ of the PSG. After the interaction between the light and the sample, the four corresponding Stokes vectors $\left [\boldsymbol {S}_{\textrm {out}}\right ]$ of the outgoing light are measured by the PSA, and the full MM of the sample is reconstructed according to $\boldsymbol {M}=\left [\boldsymbol {S}_{\textrm {out}}\right ]\left [\boldsymbol {S}_{\textrm {in}}\right ]^{-1}$. We estimated the precision of the setup by measuring standard polarization samples, including a linear polarizer and a quarter-wave plate at different azimuths; the overall errors are less than 1% [15].
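For concreteness, the per-pixel reconstruction described above can be sketched in a few lines of NumPy. This is only an illustrative sketch, not the code used in the instrument: the variable names are ours, and since $\boldsymbol{A}_{PSA}$ is not square, its "inverse" is taken in the least-squares sense.

```python
import numpy as np

def reconstruct_stokes(I, A_psa):
    """Recover a Stokes vector from the eight DoFP intensity channels of one pixel.

    I     : (8,)   intensity vector [I0^(1), I45^(1), I90^(1), I135^(1), I0^(2), ...]
    A_psa : (8, 4) calibrated instrument matrix of the PSA for this pixel
    The non-square system A_psa @ S = I is solved in the least-squares sense.
    """
    S, *_ = np.linalg.lstsq(A_psa, I, rcond=None)
    return S                      # (S0, S1, S2, S3)

def reconstruct_mueller(S_out, S_in):
    """Recover the 4x4 Mueller matrix from four measured output Stokes vectors
    (columns of S_out) and the four known PSG input states (columns of S_in):
    M = [S_out][S_in]^{-1}."""
    return S_out @ np.linalg.inv(S_in)

# Model input for one pixel: the three channels S1, S2, S3 normalized by S0.
# S = reconstruct_stokes(I_pixel, A_psa_pixel)
# x = S[1:] / S[0]
```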

Fig. 2. Experimental setup and data acquisition. (a) Photograph of DoFPs-MMM. (b) Diagram of DoFPs-MMM. (c) Four polarization states of incident light on Poincare sphere.

2.2 Data acquisition and preprocessing

Although the complete polarization information of a sample is encoded in the MM, it is inconvenient to use the 16 elements directly since they lack explicit associations with microstructures. To tackle this problem, many techniques such as MM decomposition [4–6] and transformation methods [7] have been proposed to derive new polarization parameters that are more explicitly related to the optical or microstructural features of the samples [39]. Specifically, MM polar decomposition (MMPD) [5] approximates the MM of a complex sample as the product of three sub-matrices representing diattenuation ($\boldsymbol {M}_D$), retardation ($\boldsymbol {M}_R$) and depolarization ($\boldsymbol {M}_\Delta$), and derives three polarization parameters, MMPD-$\delta$, MMPD-$D$ and MMPD-$\Delta$, whose values correspond to the retardance, dichroism and depolarization of the sample. These parameters, as well as those representing the orientations of anisotropic structures, are often used as PBPs in many biomedical applications [11,40,41]. In this work, we train the deep learning model to translate Stokes images into PBP images directly. The PBPs used in this work are listed in Table 1.


Table 1. The formulas and description of PBPs used in the experiments.
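As an illustration of how such PBPs are obtained from an MM, the following is a compact NumPy sketch of the Lu-Chipman polar decomposition [5] for a single pixel. It is our own sketch under the standard MMPD formulas; the exact conventions behind the parameters in Table 1 may differ slightly, and numerical safeguards for noisy or singular matrices are omitted.

```python
import numpy as np

def mmpd_parameters(M):
    """Lu-Chipman polar decomposition M = M_Delta @ M_R @ M_D for one pixel.

    Returns (D, delta, Delta): diattenuation (MMPD-D), linear retardance
    (MMPD-delta) and depolarization (MMPD-Delta)."""
    M = np.asarray(M, dtype=float) / M[0, 0]        # normalize by m00
    Dvec = M[0, 1:]
    D = np.linalg.norm(Dvec)                        # MMPD-D

    # Diattenuator matrix M_D
    I3 = np.eye(3)
    if D > 1e-8:
        dhat = Dvec / D
        mD = np.sqrt(1 - D**2) * I3 + (1 - np.sqrt(1 - D**2)) * np.outer(dhat, dhat)
    else:
        mD = I3
    M_D = np.eye(4)
    M_D[0, 1:], M_D[1:, 0], M_D[1:, 1:] = Dvec, Dvec, mD

    # M' = M_Delta @ M_R and its lower-right 3x3 submatrix m'
    mp = (M @ np.linalg.inv(M_D))[1:, 1:]
    mmT = mp @ mp.T
    lam = np.sqrt(np.clip(np.linalg.eigvalsh(mmT), 0, None))   # sqrt of eigenvalues
    sign = 1.0 if np.linalg.det(mp) >= 0 else -1.0

    # Depolarizer submatrix and MMPD-Delta
    m_delta = sign * np.linalg.inv(mmT + (lam[0]*lam[1] + lam[1]*lam[2] + lam[2]*lam[0]) * I3) \
                   @ (lam.sum() * mmT + lam.prod() * I3)
    Delta = 1 - abs(np.trace(m_delta)) / 3

    # Retarder submatrix and linear retardance MMPD-delta
    m_R = np.linalg.inv(m_delta) @ mp
    delta = np.arccos(np.clip(np.hypot(m_R[0, 0] + m_R[1, 1],
                                       m_R[1, 0] - m_R[0, 1]) - 1, -1, 1))
    return D, delta, Delta
```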

All the sample types in this work have been examined with MM microscopy in previous studies [10,11,15,42–44]. We used 38 liver tissue samples, 22 breast tissue samples and 8 blood smear samples. The liver tissue samples were prepared by Mengchao Hepatobiliary Hospital of Fujian Medical University. The breast tissue and blood smear samples were prepared by University of Chinese Academy of Sciences Shenzhen Hospital. The tissues were formalin-fixed, paraffin-embedded and sectioned into $4\,\mu m$ slices. These sections were then deparaffinized with xylene and mounted on standard glass slides. After hematoxylin and eosin (H&E) staining, a coverslip was placed. For the blood smear preparation, a droplet of blood was placed close to the edge of a slide, and a ground cover glass was used to spread the droplet along the edge of the slide [45]. Tissue samples were imaged with a $4\times$ objective lens and blood samples with a $20\times$ objective lens. All work was approved by the Ethics Committees of the two hospitals. Figure 3 shows examples of these samples in the training set. The experimental results on the testing set are discussed in Section 4.

Fig. 3. The Stokes and PBP images in the training set. (a), (b) and (c) are normalized images of $S_1$, $S_2$ and $S_3$ in $\boldsymbol {S}_{out}$ respectively, illuminated with right-hand circular incident light. (d) and (e) are PBP images of liver tissue, breast tissue and blood smear samples.

3. Computational translation method

The goal of this work is to learn a statistical mapping between two data domains: Stokes images (denoted as $X$) and PBP images (denoted as $Y$). The PBP retrieval model is based on a conditional generative adversarial network (GAN) [35] that consists of a generator ($G$) and a discriminator ($D$). The mapping function can be denoted as $G:X \rightarrow Y$, which takes a sample from the Stokes domain and translates it into its counterpart in the PBP domain. The computational PBP retrieval schematic is shown in Fig. 4. For a Stokes image $x \in X$, $G$ translates it into the PBP domain $Y$ as $y^\prime =G(x)$. $G$ learns to deceive $D$ by generating increasingly realistic images, while the discriminator $D$ learns to classify whether the PBP images are real or synthetic (labelled as 1 and 0 respectively). This can be formulated as the adversarial loss in Eq. (1).

$$\begin{aligned} L_{G A N}(G, D) & =\mathbb{E}_{(x,y) \sim (X, Y)}[\log D(x, y)]+\mathbb{E}_{x \sim X}[\log (1-D(x, G(x)))] \end{aligned}$$

Fig. 4. The schematic of deep-learning-based PBP retrieval model, and the network architectures of (a) generator and (b) discriminator in the conditional generative adversarial network. Stokes image $x$ and PBP image $y$ are examples in the training set. $y^\prime$ is the generated PBP image in the training phase.

The generator is trained not only to fool the discriminator but also to output PBP images that are close to the ground truth. Besides the adversarial loss, reconstruction loss functions are used to evaluate the quality of the output images. The $L_1$ penalty, defined in Eq. (2), measures the absolute difference at the pixel level. Compared with the $L_2$ loss, it reduces blurring and artifacts in the output [46]. The generated images are also expected to conform to the human visual system (HVS), since the results will be observed and analyzed by researchers or pathologists. The structural similarity index (SSIM) [37] measures the similarity between two images in terms of luminance, contrast and structure, to which the HVS is sensitive. Given two images $a$ and $b$, the SSIM between them is defined in Eq. (3), where $\mu$ is the average, $\sigma$ is the standard deviation, $\sigma _{ab}$ is the cross covariance of images $a$ and $b$, and the $C_i$ are constants that avoid division by zero. Its value lies in [0, 1], and a higher SSIM score indicates greater structural similarity. The SSIM loss is defined in Eq. (4). In this work, we compute SSIM in small perceptive regions ($11\times 11$) and average all values as the final loss of one patch. Furthermore, to improve spatial continuity and smoothness, we use the total variation (TV) function [46] as a regularization term. The TV loss is defined in Eq. (5).

$$L_{L 1}(G)=\mathbb{E}_{(x, y) \sim (X, Y)}\left[\|G(x)-y\|_{1}\right]$$
$$\operatorname{SSIM}(a, b)=\frac{\left(2 \mu_{a} \mu_{b}+C_{1}\right)\left(2 \sigma_{a b}+C_{2}\right)}{\left(\mu_{a}^{2}+\mu_{b}^{2}+C_{1}\right)\left(\sigma_{a}^{2}+\sigma_{b}^{2}+C_{2}\right)}$$
$$L_{S S I M}(G)=\mathbb{E}_{(x, y) \sim (X, Y)}[1-\operatorname{SSIM}(G(x), y)]$$
$$\begin{aligned} L_{T V}(G) & = \sum_{i, j} [(G(x)_{i+1, j}-G(x)_{i, j})^{2} + (G(x)_{i, j+1}-G(x)_{i, j})^{2}] \end{aligned}$$
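A minimal PyTorch sketch of these reconstruction losses is given below. It is our own illustration rather than the authors' implementation: the uniform $11\times 11$ window, the SSIM constants $C_1$, $C_2$ (chosen for images rescaled to [0, 1]) and the mean reductions are assumptions. The $L_1$ term of Eq. (2) is available directly as `F.l1_loss`.

```python
import torch
import torch.nn.functional as F

def ssim_loss(a, b, win=11, c1=0.01 ** 2, c2=0.03 ** 2):
    """1 - mean local SSIM over win x win windows (Eqs. (3)-(4)); a, b: (N, C, H, W)."""
    pad = win // 2
    mu_a = F.avg_pool2d(a, win, 1, pad)
    mu_b = F.avg_pool2d(b, win, 1, pad)
    var_a = F.avg_pool2d(a * a, win, 1, pad) - mu_a ** 2
    var_b = F.avg_pool2d(b * b, win, 1, pad) - mu_b ** 2
    cov = F.avg_pool2d(a * b, win, 1, pad) - mu_a * mu_b
    ssim = ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))
    return 1 - ssim.mean()

def tv_loss(y):
    """Total variation (Eq. (5)): squared differences between neighbouring pixels."""
    dh = (y[..., 1:, :] - y[..., :-1, :]) ** 2
    dw = (y[..., :, 1:] - y[..., :, :-1]) ** 2
    return dh.mean() + dw.mean()
```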

The overall objective is the combination of the adversarial and reconstruction loss functions, given in Eq. (6), where $\alpha$ and $\beta$ are hyper-parameters representing the relative weights of each part. In this work, we set $\alpha = 100$ and $\beta = 10$ experimentally. In the training phase, the parameters of the generator and the discriminator are updated in turn. This procedure can be regarded as a game optimization problem. When the prediction probability of the discriminator is about 0.5 (a random guess), the synthetic PBP images are considered sufficiently close to the ground truth.

$$\begin{aligned} \min _{G} \max _{D}(L_{G A N}(G, D)+\alpha(L_{L 1}(G)+L_{S S I M}(G))+\beta L_{T V}(G)) \end{aligned}$$
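The alternating optimization of Eq. (6) then amounts to one discriminator update followed by one generator update per batch. The sketch below uses the `ssim_loss` and `tv_loss` helpers above; the binary cross-entropy form of the GAN loss, the use of logits and the exact update order are assumptions on our part.

```python
import torch
import torch.nn.functional as F

def train_step(G, D, opt_G, opt_D, x, y, alpha=100.0, beta=10.0):
    """One alternating update of Eq. (6); x: Stokes patch, y: target PBP patch.
    D takes the (Stokes, PBP) pair and returns a map of per-patch logits."""
    bce = F.binary_cross_entropy_with_logits

    # Discriminator step: real pairs -> 1, generated pairs -> 0.
    fake = G(x)
    pred_real, pred_fake = D(x, y), D(x, fake.detach())
    loss_D = bce(pred_real, torch.ones_like(pred_real)) + \
             bce(pred_fake, torch.zeros_like(pred_fake))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator step: fool D and stay close to the ground truth (Eq. (6)).
    pred_fake = D(x, fake)
    loss_G = bce(pred_fake, torch.ones_like(pred_fake)) + \
             alpha * (F.l1_loss(fake, y) + ssim_loss(fake, y)) + beta * tv_loss(fake)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()
```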

We adopt a U-Net [47] encoder-decoder network for the generator. The three-channel Stokes image goes through a series of downsampling convolutional layers (blue arrows in Fig. 4) until it reaches a bottleneck, and is then upsampled through a series of deconvolutions (red arrows in Fig. 4) back to its original size. In the downsampling path, the encoder halves the image size from the previous layer at each step, while the decoder doubles the image size at each step of the upsampling path. To introduce nonlinearity into the network, every convolution and deconvolution layer is followed by a leaky rectified linear unit (ReLU) [48], except that the last layer is followed by a hyperbolic tangent function to generate the final PBP image. Skip connections between equally sized layers pass low-level information.
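A pix2pix-style sketch of such a generator is shown below. The paper does not spell out the exact layer counts or normalization, so the eight down/up stages, batch normalization and filter widths are assumptions, chosen so that the parameter count roughly matches the 54.4 million reported in Section 4.

```python
import torch
import torch.nn as nn

def down(cin, cout, norm=True):
    """Stride-2 convolution + (optional) BatchNorm + leaky ReLU: halves H and W."""
    layers = [nn.Conv2d(cin, cout, 4, 2, 1, bias=not norm)]
    if norm:
        layers.append(nn.BatchNorm2d(cout))
    layers.append(nn.LeakyReLU(0.2, inplace=True))
    return nn.Sequential(*layers)

def up(cin, cout):
    """Stride-2 transposed convolution + BatchNorm + leaky ReLU: doubles H and W."""
    return nn.Sequential(nn.ConvTranspose2d(cin, cout, 4, 2, 1, bias=False),
                         nn.BatchNorm2d(cout), nn.LeakyReLU(0.2, inplace=True))

class UNetGenerator(nn.Module):
    """256x256 encoder-decoder with skip connections between equally sized layers."""
    def __init__(self, in_ch=3, out_ch=1, nf=64):
        super().__init__()
        chans = [nf, nf * 2, nf * 4, nf * 8, nf * 8, nf * 8, nf * 8, nf * 8]
        self.downs = nn.ModuleList()
        prev = in_ch
        for i, c in enumerate(chans):                        # 256x256 -> 1x1 bottleneck
            self.downs.append(down(prev, c, norm=(0 < i < len(chans) - 1)))
            prev = c
        self.ups = nn.ModuleList()
        for c in reversed(chans[:-1]):                       # 1x1 -> 128x128
            self.ups.append(up(prev, c))
            prev = c * 2                                     # skip connection doubles channels
        self.final = nn.Sequential(nn.ConvTranspose2d(prev, out_ch, 4, 2, 1),
                                   nn.Tanh())                # 128 -> 256, output in [-1, 1]

    def forward(self, x):
        skips = []
        for d in self.downs:
            x = d(x)
            skips.append(x)
        for u, s in zip(self.ups, reversed(skips[:-1])):     # skip the bottleneck itself
            x = torch.cat([u(x), s], dim=1)
        return self.final(x)
```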

The discriminator is a simple convolutional neural network classifier. Its input is either the generator output or the target PBP image, together with the corresponding Stokes image. After a series of convolutions, it outputs a $30\times 30$ prediction map. Every point in this map corresponds to one activation over a $70\times 70$ patch (the effective receptive field) of the original input image [36]. All activations are averaged to produce the final output of the discriminator. Both the generator and the discriminator take patches of $256\times 256$ pixels as input.
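The following is a sketch of such a $70\times 70$ PatchGAN discriminator [36]; the filter counts and batch normalization are assumptions, but the strides are chosen so that a $256\times 256$ input yields the $30\times 30$ prediction map described above.

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Maps a (Stokes, PBP) image pair to a 30x30 map of real/fake logits,
    each logit covering a 70x70 effective receptive field."""
    def __init__(self, in_ch=3 + 1, nf=64):
        super().__init__()
        def block(cin, cout, stride, norm=True):
            layers = [nn.Conv2d(cin, cout, 4, stride, 1, bias=not norm)]
            if norm:
                layers.append(nn.BatchNorm2d(cout))
            layers.append(nn.LeakyReLU(0.2, inplace=True))
            return layers
        self.net = nn.Sequential(
            *block(in_ch, nf, 2, norm=False),   # 256 -> 128
            *block(nf, nf * 2, 2),              # 128 -> 64
            *block(nf * 2, nf * 4, 2),          # 64  -> 32
            *block(nf * 4, nf * 8, 1),          # 32  -> 31
            nn.Conv2d(nf * 8, 1, 4, 1, 1))      # 31  -> 30x30 logit map

    def forward(self, x, y):
        # Condition on the Stokes image by channel-wise concatenation.
        return self.net(torch.cat([x, y], dim=1))
```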

4. Experimental results

To evaluate the performance of the method, we applied the deep-learning-based PBP retrieval model to liver and breast tissue slices and blood smear samples, all of which have been examined previously with MM polarimetry [10,11,15,42–44]. The patches in the testing set did not overlap with the patches used in the training set. The experiments were carried out on a desktop computer with an Intel Core i9-9900K CPU and a GeForce GTX 2080Ti card. The operating system was Ubuntu 18.04 with kernel 5.4.0. The model was built with Python 3.8.5 and PyTorch 1.7.0. The number of samples and other training details are listed in Table 2. Each model was trained for 200 epochs.


Table 2. The details of samples used in the experiments.

The existence and organization of fibrous microstructures affect the birefringence properties of tissues. In liver cirrhosis and cancer tissues, inflammatory reactions are always accompanied by fibrosis formation [38]. One representative application of MM polarimetry is the detection of fibrosis in liver tissues [10,11,42,43]. The MMPD parameter $\delta$ can be used to distinguish the different stages of fibrosis from F1 to F4 [11]. In this work, we used the Stokes images to generate $\delta$ images of liver tissues. The results for different fibrosis stages are given in Fig. 5. The distribution and orientation of the fibers in the generated images are clear and consistent with the ground truth. We use the normalized root mean square error (NRMSE) and the peak signal-to-noise ratio (PSNR) to evaluate the output at the pixel and image level respectively. NRMSE is an extension of the root mean square error (RMSE) that measures the difference between the output and the ground truth values. In the experiments, we generated different PBPs at different scales, for which the normalized version of RMSE is suitable. The generator has approximately 54.4 million parameters and works in a purely feed-forward way in the prediction phase. The prediction time for one $256\times 256$ patch is less than 0.1 second, which shows the potential for real-time performance; it could be reduced further with a more powerful computer. The image acquisition time depends on the frame rate of the CCD (0.1 second), which is much faster than DRR-MM microscopy (192 seconds) and DoFPs-MM microscopy (14 seconds) [15].
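For reference, the two metrics can be computed as follows; this is our own sketch, and the normalization of the RMSE by the range of the ground truth is an assumption about the convention used.

```python
import numpy as np

def nrmse(pred, gt):
    """Root mean square error normalized by the dynamic range of the ground truth,
    so that PBPs defined on different scales remain comparable."""
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    return rmse / (gt.max() - gt.min())

def psnr(pred, gt, data_range=None):
    """Peak signal-to-noise ratio in dB; data_range defaults to the ground-truth range."""
    if data_range is None:
        data_range = gt.max() - gt.min()
    mse = np.mean((pred - gt) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)
```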

Fig. 5. Results of MMPD-$\delta$ on different stages from F1 to F4 of liver fibrosis tissues. (a), (b) and (c) are normalized images of $S_1$, $S_2$ and $S_3$ in $\boldsymbol {S}_{out}$ respectively, illuminated with right-hand circular incident light. (d) Generated images $G(x)$. (e) Ground truth images $y$.

The quality of the data determines the performance of the learning-based model. In Stokes polarimetry, the output Stokes vectors are determined by the incident state of polarization (SOP). We compared six SOPs, including two linear polarizations ($0^\circ$ and $135^\circ$), two circular polarizations (right-hand and left-hand), and two elliptical polarizations ($E_1$ and $E_2$). The values of $E_1$ and $E_2$ are $(1.000 \ 0.750 \ 0.433 \ 0.500)^{\top }$ and $(1.000 \ 0.750 \ -0.433 \ -0.500)^{\top }$ respectively. The quantitative comparison results for MMPD-$\delta$ on liver fibrosis tissues are given in Table 3. The model achieved the best performance when the tissue slices were illuminated with $E_1$. In deep-learning-based cross-domain image translation, the input and target must share the same underlying structure or sketch. Figure 6 illustrates the results generated from single Stokes components with linear and circular SOPs of the incident light. For the circular SOP, the linear components ($S_1$ and $S_2$) of $\boldsymbol {S}_{out}$ have higher contrast than $S_3$; conversely, $S_3$ has higher contrast for the linear SOP. The input components with clearer structures generated outputs more similar to the ground truth. When all three components are used as input, at least one channel can provide structural information to the model. In addition, when taking all three components from the right-hand circular or linear SOP, the model outperformed the cases taking only an individual component (as shown in Fig. 6(e) and (f)). This shows the advantage of using DoFP polarimeters, which can obtain the full Stokes vector in a single shot.

Fig. 6. Generated MMPD-$\delta$ images of liver fibrosis tissues with different single Stokes components of $\boldsymbol {S}_{out}$ as input, under different $\boldsymbol {S}_{in}$. (a) $S_1$, $S_2$ and $S_3$ in $\boldsymbol {S}_{out}$ with the right-hand circular polarization state as $\boldsymbol {S}_{in}$. (b) Generated images from single Stokes components in (a). (c) $S_1$, $S_2$ and $S_3$ in $\boldsymbol {S}_{out}$ with the $0^\circ$ linear polarization state as $\boldsymbol {S}_{in}$. (d) Generated images from single Stokes components in (c). (e) Generated images taking the three-channel right-hand circular polarization input. (f) Generated images taking the three-channel 0$^{\circ }$ linear polarization input. (g) Ground truth image.


Table 3. Quantitative comparison with different SOPs on MMPD-$\delta$ images of liver tissues.

MM polarimetry can be used to distinguish pathological features. Tumorigenesis and metastatic progression are usually accompanied by malignant cell proliferation and extracellular matrix remodeling [49]. Dong et al. linearly combined PBPs to classify cell nuclei, aligned collagen, and disorganized collagen in breast carcinoma tissues [44]. Their work demonstrates that polarization parameters are highly sensitive for characterizing specific microstructures in low-resolution, wide-field systems. Figure 7 shows the generated MMPD-$\delta$ and MMPD-$\Delta$ images and their corresponding ground truth for breast tissue slices.

Fig. 7. Results of MMPD-$\delta$ and MMPD-$\Delta$ images of breast tissues. (a), (b) and (c) are normalized images of $S_1$, $S_2$ and $S_3$ in $\boldsymbol {S}_{out}$ respectively, illuminated with right-hand circular polarized light. (d) Generated MMPD-$\delta$ images. (e) Ground truth of MMPD-$\delta$ images. (f) Generated MMPD-$\Delta$ images. (g) Ground truth of MMPD-$\Delta$ images.

Usually, deep learning models work in isolation: when building models for other PBPs or other samples, the generation model must be retrained from scratch. Transfer learning (TL) is capable of leveraging knowledge learnt from one task to solve another similar problem [50]. With the warm start provided by TL, the learning process can be faster, more accurate and require less training data. Different types of tissues share some similar components, e.g., cells and extracellular matrix, so the knowledge in a well-trained PBP retrieval model can be shared with other related learning problems. When training the MMPD-$\delta$ and MMPD-$\Delta$ models for breast tissues, we initialized the model with the parameters learnt from the liver tissues as a starting point, which reduced the convergence time and improved the generality. The history of loss values helps review the performance of the deep learning model over time during training. Our goal is to generate synthetic PBP images that are close to the targets, and this similarity can be quantitatively measured by the reconstruction loss. We plotted the sum of the $L_{L_1}$ and $L_{SSIM}$ values, as illustrated in Fig. 8. When initialized with the weights learned from other samples, the model converged much faster than with random initialization. The TL approach can thus help transplant the proposed model more efficiently to other application scenarios in practice, e.g., other tissues and PBPs.
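In PyTorch, the warm start simply amounts to loading the liver-tissue weights before fine-tuning on breast tissue; the checkpoint names and learning rate below are illustrative only.

```python
import torch

# G and D are the generator and discriminator sketched in Section 3.
G, D = UNetGenerator(in_ch=3, out_ch=1), PatchDiscriminator(in_ch=3 + 1)

# Warm start: load weights trained on liver tissue (hypothetical checkpoint names)
# instead of initializing randomly.
G.load_state_dict(torch.load("generator_liver_mmpd_delta.pth"))
D.load_state_dict(torch.load("discriminator_liver_mmpd_delta.pth"))

# Fine-tune on the breast-tissue data, typically with a smaller learning rate
# than when training from scratch.
opt_G = torch.optim.Adam(G.parameters(), lr=2e-5, betas=(0.5, 0.999))
opt_D = torch.optim.Adam(D.parameters(), lr=2e-5, betas=(0.5, 0.999))
```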

Fig. 8. The sum of $L_{L_1}$ and $L_{SSIM}$ losses with increasing number of iterations on breast tissues. (a) and (b) are loss values of MMPD-$\delta$ and MMPD-$\Delta$ respectively with random and transfer learning initialization.

Finally, we tested our method on cell samples. Red blood cells normally have a very regular biconcave discoid shape and have been used to test the performance of MM polarimetry systems [15]. In this work, we trained the model to learn the translation of the diattenuation parameter MMPD-$D$ and the azimuth $\alpha _D$ of $M_{12}$ and $M_{13}$. The direction of diattenuation and $\alpha _D$ follow the shape of the cells, and the generated PBP images provide a relatively high signal-to-noise ratio, as shown in Fig. 9.

Fig. 9. Results of MMPD-$D$, and the azimuth of $M_{12}$ and $M_{13}$ images on red blood cell samples. (a), (b) and (c) are normalized images of $S_1$, $S_2$ and $S_3$ in $\boldsymbol {S}_{out}$ respectively, illuminated with the right-hand circular incident polarization state. (d) Generated MMPD-$D$ images. (e) Ground truth images of MMPD-$D$. (f) Generated azimuth of $M_{12}$ and $M_{13}$ images. (g) Ground truth images of the azimuth of $M_{12}$ and $M_{13}$.

The modulation between different SOPs is usually achieved by precisely rotating the wave plate or polarizer in the PSG. We compared six different SOPs on breast tissues and blood smears. The output PBP images were evaluated quantitatively with the NRMSE and PSNR metrics, as given in Table 4. The results for different SOPs are close to each other, which implies that the model is not sensitive to the incident polarization state. In practice, researchers may use different PBPs to detect different microstructures or abnormal regions. The insensitivity to the SOP reduces the complexity of the PSG and improves the robustness of the deep learning model.


Table 4. Quantitative comparison with different SOPs on PBP images of breast tissues and red blood smears.

5. Conclusions

In this work, we designed an end-to-end deep learning model that can generate PBP images from a single Stokes image. By using dual DoFP polarimeters for Stokes imaging, this approach retrieves images of MM-derived polarization parameters from a single exposure, which avoids artifacts due to image misregistration or instability of the light source and sample, and therefore improves the precision and stability of the measurement. We demonstrated the effectiveness of our approach on MMPD and azimuth parameters for liver tissue, breast tissue and blood smears, and quantitatively evaluated the output with NRMSE and PSNR measurements. The comparisons show that the synthetic PBP images provide clear polarimetric features close to the ground truth at both the pixel and image levels. The convergence time can be sharply reduced by initializing with a pretrained network; the transfer learning approach improves the efficiency of the learning procedure and extends the application field in practice. It is noted that some other optical modalities also require multiple exposures [51,52]. This deep learning method has the potential to reduce errors and improve the stability of these systems as well.

Precision is critical in MM polarimetry. For dynamic and moving samples, we can select well-registered and stable images during data collection for training the model. Once the model is well trained, the input requires only one exposure. In this work, instead of the whole MM, we only generated PBP images, which are more suitable for biomedical applications. Our next step is to predict the whole MM directly from one single Stokes vector. In that case, all PBPs could be calculated once the MM is obtained, and separate training for each required PBP would not be necessary. This is a challenging task since each pixel is a $4\times 4$ matrix whose elements are highly correlated. Recently, transformer approaches have shown preeminent performance via self-supervision on many imaging and computer vision tasks [53]. In the future, we will design a transformer model based on paired Stokes and MM data to build a snapshot MM polarimeter.

Funding

National Key Research and Development Program of China (2018YFC1406600); National Natural Science Foundation of China (11974206, 61527826); Guangdong Development Project of Science and Technology (2020B1111040001).

Acknowledgments

We thank Mengchao Hepatobiliary Hospital of Fujian Medical University and University of Chinese Academy of Sciences Shenzhen Hospital for the sample preparation. We thank Ruiqi Huang and Zhi Wang for helpful discussions.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. H. He, R. Liao, N. Zeng, P. Li, Z. Chen, X. Liu, and H. Ma, “Mueller matrix polarimetry—an emerging new tool for characterizing the microstructural feature of complex biological specimen,” J. Lightwave Technol. 37(11), 2534–2548 (2019). [CrossRef]  

2. C. He, H. He, J. Chang, B. Chen, H. Ma, and M. J. Booth, “Polarisation optics for biomedical and clinical applications: a review,” Light: Sci. Appl. 10(1), 194 (2021). [CrossRef]  

3. C. Brosseau, Fundamentals of Polarized Light: A Statistical Optics Approach (Wiley, 1998).

4. R. Ossikovski, “Differential and product Mueller matrix decompositions: a formal comparison,” Opt. Lett. 37(2), 220–222 (2012). [CrossRef]  

5. S. Y. Lu and R. A. Chipman, “Interpretation of Mueller matrices based on polar decomposition,” J. Opt. Soc. Am. A 13(5), 1106–1113 (1996). [CrossRef]  

6. N. Ortega-Quijano and J. L. Arce-Diego, “Mueller matrix differential decomposition,” Opt. Lett. 36(10), 1942–1944 (2011). [CrossRef]  

7. H. He, N. Zeng, E. Du, Y. Guo, D. Li, R. Liao, and H. Ma, “A possible quantitative Mueller matrix transformation technique for anisotropic scattering media,” Photonics Lasers Med. 2, 129–137 (2013). [CrossRef]  

8. P. Li, D. Lv, H. He, and H. Ma, “Separating azimuthal orientation dependence in polarization measurements of anisotropic media,” Opt. Express 26(4), 3791–3800 (2018). [CrossRef]  

9. A. Pierangelo, S. S. Manhas, A. Benali, C. Fallet, J. laurent Totobenazara, M.-R. Antonelli, T. Novikova, B. Gayet, A. de Martino, and P. Validire, “Multispectral Mueller polarimetric imaging detecting residual cancer and cancer regression after neoadjuvant treatment for colorectal carcinomas,” J. Biomed. Opt. 18(4), 046014 (2013). [CrossRef]  

10. M. Dubreuil, P. Babilotte, L. Martin, D. Sevrain, S. Rivet, Y. L. Grand, G. L. Brun, B. Turlin, and B. L. Jeune, “Mueller matrix polarimetry for improved liver fibrosis diagnosis,” Opt. Lett. 37(6), 1061–1063 (2012). [CrossRef]  

11. Y. Wang, H. He, J. Chang, C. He, S. Liu, M. Li, N. Zeng, J. Wu, and H. Ma, “Mueller matrix microscope: a quantitative tool to facilitate detections and fibrosis scorings of liver cirrhosis and cancer tissues,” J. Biomed. Opt. 21(7), 071112 (2016). [CrossRef]  

12. R. M. A. Azzam, “Photopolarimetric measurement of the Mueller matrix by fourier analysis of a single detected signal,” Opt. Lett. 2(6), 148 (1978). [CrossRef]  

13. D. H. Goldstein, “Mueller matrix dual-rotating retarder polarimeter,” Appl. Opt. 31(31), 6676–6683 (1992). [CrossRef]  

14. V. V. Tuchin, “Polarized light interaction with tissues,” J. Biomed. Opt. 21(7), 071114 (2016). [CrossRef]  

15. T. Huang, R. Meng, J. Qi, Y. Liu, X. Wang, Y. Chen, R. Liao, and H. Ma, “Fast Mueller matrix microscope based on dual DoFP polarimeters,” Opt. Lett. 46(7), 1676–1679 (2021). [CrossRef]  

16. T.-H. Tsai and D. J. Brady, “Coded aperture snapshot spectral polarization imaging,” Appl. Opt. 52(10), 2153–2161 (2013). [CrossRef]  

17. K. Shinoda, Y. Ohtera, and M. Hasegawa, “Snapshot multispectral polarization imaging using a photonic crystal filter array,” Opt. Express 26(12), 15948–15961 (2018). [CrossRef]  

18. W. Ren, C. Fu, D. Wu, Y. Xie, and G. R. Arce, “Channeled compressive imaging spectropolarimeter,” Opt. Express 27(3), 2197–2211 (2019). [CrossRef]  

19. J. C. Suárez-Bermejo, J. C. G. de Sande, M. Santarsiero, and G. Piquero, “Mueller matrix polarimetry using full Poincaré beams,” Opt. Lasers Eng. 122, 134–141 (2019). [CrossRef]  

20. M. Dubreuil, S. Rivet, B. L. Jeune, and J. M. Cariou, “Snapshot Mueller matrix polarimeter by wavelength polarization coding,” Opt. Express 15(21), 13660–13668 (2007). [CrossRef]  

21. N. Hagen, K. Oka, and E. L. Dereniak, “Snapshot Mueller matrix spectropolarimeter,” Opt. Lett. 32(15), 2100–2102 (2007). [CrossRef]  

22. D. J. Brady, L. Fang, and Z. Ma, “Deep learning for camera data acquisition, control, and image estimation,” Adv. Opt. Photonics 12(4), 787–846 (2020). [CrossRef]  

23. K. de Haan, Y. Rivenson, Y. Wu, and A. Ozcan, “Deep-learning-based image reconstruction and enhancement in optical microscopy,” Proc. IEEE 108(1), 30–50 (2020). [CrossRef]  

24. X. Yuan, D. J. Brady, and A. K. Katsaggelos, “Snapshot compressive imaging: Principle, implementation, theory, algorithms and applications,” IEEE Signal Process. Mag. 38(2), 65–88 (2021). [CrossRef]  

25. Y. Rivenson, Z. Gorocs, H. Gunaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4(11), 1437–1443 (2017). [CrossRef]  

26. H. Wang, Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Günaydin, L. A. Bentolila, C. Kural, and A. Ozcan, “Deep learning enables cross-modality super-resolution in fluorescence microscopy,” Nat. Methods 16(1), 103–110 (2019). [CrossRef]  

27. T. Liu, K. de Haan, Y. Rivenson, Z. Wei, X. Zeng, Y. Zhang, and A. Ozcan, “Deep learning-based super-resolution in coherent imaging systems,” Sci. Rep. 9(1), 3926 (2019). [CrossRef]  

28. X. Miao, X. Yuan, Y. Pu, and V. Athitsos, “lambda-net: Reconstruct hyperspectral images from a snapshot measurement,” 2019 IEEE/CVF International Conference on Computer Vision (ICCV) pp. 4058–4068 (2019).

29. L. Wang, C. Sun, Y. Fu, M. H. Kim, and H. Huang, “Hyperspectral image reconstruction using a deep spatial-spectral prior,” 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) pp. 8024–8033 (2019).

30. J. Greffier, A. Hamard, F. R. Pereira, C. Barrau, H. Pasquier, J.-P. Beregi, and J. Frandon, “Image quality and dose reduction opportunity of deep learning image reconstruction algorithm for ct: a phantom study,” Eur. Radiol. 30(7), 3951–3959 (2020). [CrossRef]  

31. M. Kim, G.-S. Jeng, I. Pelivanov, and M. O’Donnell, “Deep-learning image reconstruction for real-time photoacoustic system,” IEEE Trans. Med. Imaging 39(11), 3379–3390 (2020). [CrossRef]  

32. Y. Ni, D. fu Zhou, S. Yuan, X. Bai, Z. Xu, J. Chen, C. Li, and X. Zhou, “Color computational ghost imaging based on a generative adversarial network,” Opt. Lett. 46(8), 1840–1843 (2021). [CrossRef]  

33. Y. Ma, X. Feng, and L. Gao, “Deep-learning-based image reconstruction for compressed ultrafast photography,” Opt. Lett. 45(16), 4400–4403 (2020). [CrossRef]  

34. I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. C. Courville, and Y. Bengio, “Generative adversarial nets,” NIPS, vol. 27 (2014).

35. P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) pp. 5967–5976 (2017).

36. C. Li and M. Wand, “Precomputed real-time texture synthesis with markovian generative adversarial networks,” European conference on computer vision. Springer pp. 702–716 (2016).

37. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004). [CrossRef]  

38. Y. Rivenson, H. Wang, Z. Wei, K. de Haan, Y. Zhang, Y. Wu, H. Günaydin, J. E. Zuckerman, T. Chong, A. E. Sisk, L. Westbrook, W. D. Wallace, and A. Ozcan, “Virtual histological staining of unlabelled tissue-autofluorescence images via deep learning,” Nat. Biomed. Eng. 3(6), 466–477 (2019). [CrossRef]  

39. P. Li, Y. Dong, J. Wan, H. He, T. Aziz, and H. Ma, “Polaromics: deriving polarization parameters from a Mueller matrix for quantitative characterization of biomedical specimen,” J. Phys. D: Appl. Phys. 55(3), 034002 (2022). [CrossRef]  

40. N. Ghosh, M. F. G. Wood, and I. A. Vitkin, “Mueller matrix decomposition for extraction of individual polarization parameters from complex turbid media exhibiting multiple scattering, optical activity, and linear birefringence,” J. Biomed. Opt. 13(4), 044036 (2008). [CrossRef]  

41. Y. Shen, R. Huang, H. He, S. Liu, Y. Dong, J. Wu, and H. Ma, “Comparative study of the influence of imaging resolution on linear retardance parameters derived from the Mueller matrix,” Biomed. Opt. Express 12(1), 211–225 (2021). [CrossRef]  

42. J. Chang, H. He, Y. Wang, Y. Huang, X. Li, C. He, R. Liao, N. Zeng, S. Liu, and H. Ma, “Division of focal plane polarimeter-based 3x4 Mueller matrix microscope: a potential tool for quick diagnosis of human carcinoma tissues,” J. Biomed. Opt. 21(5), 056002 (2016). [CrossRef]  

43. T. Liu, T. Sun, H. He, S. Liu, Y. Dong, J. Wu, and H. Ma, “Comparative study of the imaging contrasts of Mueller matrix derived parameters between transmission and backscattering polarimetry,” Biomed. Opt. Express 9(9), 4413–4428 (2018). [CrossRef]  

44. Y. Dong, J. Wan, L. Si, Y. Meng, Y. Dong, S. Liu, H. He, and H. Ma, “Deriving polarimetry feature parameters to characterize microstructural features in histological sections of breast tissues,” IEEE Trans. Biomed. Eng. 68(3), 881–892 (2021). [CrossRef]  

45. T. Haferlach, Color atlas of hematology: Practical microscopic and clinical diagnosis, (Thieme, 2004).

46. H. Zhao, O. Gallo, I. Frosio, and J. Kautz, “Loss functions for image restoration with neural networks,” IEEE Trans. Comput. Imaging 3(1), 47–57 (2017). [CrossRef]  

47. O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015 (2015). [CrossRef]  

48. V. Nair and G. E. Hinton, “Rectified linear units improve restricted boltzmann machines,” Proceedings of the 27th International Conference on International Conference on Machine Learning, pp. 807–814 (2010). [CrossRef]  

49. D. T. Butcher, T. Alliston, and V. M. Weaver, “A tense situation: forcing tumour progression,” Nat. Rev. Cancer 9(2), 108–122 (2009). [CrossRef]  

50. S. J. Pan and Q. Yang, “A survey on transfer learning,” IEEE Trans. Knowl. Data Eng. 22(10), 1345–1359 (2010). [CrossRef]  

51. D. Dong, X. Huang, L. Li, H. Mao, Y. Mo, G. Zhang, Z. Zhang, J. Shen, W. Liu, Z. Wu, G. Liu, Y. Liu, H. Yang, Q. Gong, K. Shi, and L. Chen, “Super-resolution fluorescence-assisted diffraction computational tomography reveals the three-dimensional landscape of the cellular organelle interactome,” Light: Sci. Appl. 9(1), 11–15 (2020). [CrossRef]  

52. J. Bertolotti, E. G. Van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491(7423), 232–234 (2012). [CrossRef]  

53. Y. Xu, H. Wei, M. Lin, Y. Deng, K. Sheng, M. Zhang, F. Tang, W. Dong, F. Huang, and C. Xu, “Transformers in computational visual media: A survey,” Comp. Visual Media 8(1), 33–62 (2022). [CrossRef]  
