
Quantitative phase image stitching guided by reconstructed intensity images in one-shot double field of view multiplexed digital holographic microscopy

Open Access

Abstract

In digital holographic microscopy (DHM), achieving large field of view (FOV) imaging while maintaining high resolution is critical for quantitative phase measurements of biological cell tissues and micro-nano structures. We present a quantitative phase image stitching method guided by reconstructed intensity images in one-shot double FOV multiplexed DHM. Double FOVs are recorded simultaneously through frequency division multiplexing; intensity feature pairs are accurately extracted by multi-algorithm fusion; aberrations and non-common baselines are effectively corrected by preprocessing. Experimental results show that even when phase images contain coherent noise, complex aberrations, low overlap rates and large sizes, this method achieves high-quality phase stitching.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

DHM is a label-free quantitative imaging technique that records the complex wavefront of light interacting with a sample to simultaneously reconstruct the amplitude and phase information of the sample [1]. DHM is widely used in fields such as biological cell monitoring [2,3], micro-electromechanical systems testing [4], and micro-optical device characterization [5]. However, the lateral resolution and FOV in DHM are mutually limited [6,7]. Typically, objectives with high numerical aperture (NA) are used to achieve high resolution, but this results in a smaller imaging FOV. Therefore, how to expand the imaging FOV while maintaining high resolution is a key issue that needs to be solved in DHM.

The traditional solution is to capture multiple sub-holograms by mechanically moving the sample, camera, or optical device, and then stitch the phase images of multiple FOVs through image stitching techniques [8]. Dai et al. applied the phase stitching method to digital holography (DH) for the first time, recording the sample frame by frame through a movable DH device and stitching sub-aperture phase images [9]. However, alignment errors between the overlapping sub-apertures may lead to phase stitching failure, making the method less robust in real measurements. Wen et al. established a mathematical model of the DH phase error and corrected the relative rotation error between the CCD coordinate system and the two-dimensional (2D) motion stage [10], but could not fully compensate for the stitching error caused by axial misalignment between the overlapping sub-apertures. Xie et al. integrated multiple algorithms such as phase correlation, Harris corner detection, and full matching search into DH phase stitching to improve the stitching efficiency [11]. This method requires at least 50% overlap between the phase images to be stitched, and cannot handle phase images with complex textures.

The above-mentioned methods of mechanical scanning combined with phase stitching to expand the FOV mainly have three problems. First, mechanical motion of the camera, sample, or any optical device reduces the temporal resolution of DHM [12]. Second, optical aberrations in DHM, such as tilt, defocus, astigmatism, and higher-order aberrations [13], can reduce the correlation between phase overlap areas and cause phase registration errors. Phase aberration compensation methods can be divided into physical methods [14] and numerical methods. Currently, widely used numerical methods mainly include principal component analysis [15], Zernike polynomial fitting (ZPF) [16], spectral analysis [17] and deep learning [18,19], which can correct phase aberrations without additional holograms, precise knowledge of the system parameters, or manual intervention. Finally, due to the nature of phase imaging [20,21], the continuous phase data of object-free areas in each individual FOV (ideally serving as a natural baseline of 0 rad) may have different offsets, lacking a unified baseline. Stępień et al. proposed preprocessing methods for quantitative phase image stitching to compensate for systematic aberrations, linear aberrations, and offsets [22], thereby suppressing error propagation in the phase stitching steps. As a "data-driven" method [23,24], deep learning has been applied in the field of image stitching [25]. However, deep learning-based image stitching methods [26] are usually limited by computational performance and model complexity, restricting input images to small sizes (such as 128 × 128 or 256 × 256 pixels), which cannot meet the requirements of high-precision, large-size phase image stitching.

Double FOV multiplexed DHM utilizes the frequency division multiplexing technique [27,28] to expand the imaging FOV of off-axis interference without using mechanical moving devices or changing the DHM magnification and lateral resolution. The wrapped phase images (values within the range [-π, π]) reconstructed from the multiplexed hologram are unwrapped and compensated for aberrations to recover the three-dimensional (3D) profile of the sample. Compared with the wrapped and unwrapped phase images, the reconstructed intensity images can intuitively reflect the sample structural information, have similar illumination intensity, and are not affected by the DHM optical aberrations, quantitative phase characteristics or phase recovery errors. Therefore, the reconstructed intensity images of multiplexed FOVs can be directly used to identify the phase overlap areas and guide the phase image stitching.

In this paper, we propose a quantitative phase image stitching method guided by reconstructed intensity images in one-shot double FOV multiplexed DHM. The double FOV multiplexed hologram recorded in a single exposure is reconstructed to simultaneously obtain the intensity and phase images for each FOV. The intensity and phase images with extended FOV are then accurately generated by three modules: intensity image feature extraction, phase stitching preprocessing, and double FOV stitching. This method effectively solves the problems of limited overlap rate, small stitched-image size, and low correlation between the phase overlap areas, and achieves high-quality phase stitching.

2. Principle and methods

2.1 One-shot double FOV multiplexed DHM

The optical setup of the one-shot double FOV multiplexed DHM is shown in Fig. 1(a). The beam emitted by a 632.8 nm He-Ne laser is converted into a 45° linearly polarized beam through a neutral density filter (NDF) and a half wave plate (HWP). Then, the beam is split into an object beam and a reference beam after passing through a 10× beam expander (BE) and a beam splitter (BS1). The microscope objective (MO, NA = 0.1, fMO = 45 mm) and a tube lens (TL, fTL = 180 mm) form a 4× telecentric microscope system to correct quadratic phase aberration. Lenses L1 and L2 (fL1 = fL2 = 100 mm) form a 4f system. The object beam carrying the complete FOV information output from the telecentric microscope system is optically Fourier transformed by L1, and is split into two orthogonally polarized object beams O1 and O2 by a polarization beam splitter (PBS) and two quarter-wave plates (QWP1 and QWP2). O1 and O2 are reflected by a retro-reflector (RR) and a mirror (M2), respectively, and then optically inverse Fourier transformed by L2 onto the CMOS camera (1/1.8′′, 2048 × 1536 pixels, pixel size of 3.45 µm). The experimental device is shown in Fig. 1(b), and the theoretical resolution is calculated to be 0.82λ/NA = 5.19 µm [7].


Fig. 1. Principle of one-shot double FOV multiplexed DHM. (a) Optical setup. (b) Experimental device. (c) Schematic diagram of double FOV multiplexing. (d) Double FOV multiplexed hologram of USAF 1951 resolution target. (e) Hologram spectrum. NDF1/NDF2, neutral density filter; HWP, half wave plate; BE, beam expander; BS1/BS2, beam splitter; M1/M2, mirror; MO, microscope objective; TL, tube lens; L1/L2, lens; PBS, polarization beam splitter; QWP1/QWP2, quarter-wave plate; RR, retro-reflector; CMOS, CMOS camera.


Since the cross-sectional areas of the plane-wave beams O1 and O2 are larger than the photosensitive area of the CMOS chip (7.07 × 5.30 mm2), the CMOS can only capture local FOV information in O1 and O2. In the experiment, by changing the angles of RR and M2, the O1 and O2 beams produce lateral displacements in the CMOS plane. Two FOV images (FOV1 and FOV2) are generated with different directions of chief ray propagation, orthogonal polarization states and an adjustable FOV overlap rate, and are projected onto the same CMOS acquisition area, as shown in Fig. 1(c). The O1 and O2 beams interfere with the 45° linearly polarized reference beam R. In this double FOV multiplexing arrangement, two intersecting off-axis interference fringe patterns are created by adjusting the appropriate interference angle of the reference beam, and are recorded simultaneously in a single CMOS exposure to generate a double FOV multiplexed hologram, as shown in Fig. 1(d).

The double FOV multiplexed hologram has high fringe contrast, because the two object beams separated by PBS (split ratio of 50:50) have almost the same intensity, that is, |O1|≈|O2|. For this reason, the FOV1 and FOV2 intensity images reconstructed by the multiplexed hologram have similar grayscale distributions, avoiding the image registration error caused by uneven illumination intensity.

The spectrum F after Fourier transformation of the double FOV multiplexed hologram can be expressed as

$$\begin{aligned} F &= FFT\{{{{|{{O_1} + {O_2} + R} |}^2}} \}= FFT\{{|{O_1}{|^2} + |{O_2}{|^2} + |R{|^2}} \}\\ &+ FFT\{{{O_1}{R^ \ast }} \}+ FFT\{{{O_1}^ \ast R} \}+ FFT\{{{O_2}{R^ \ast }} \}+ FFT\{{{O_2}^ \ast R} \}\end{aligned}$$
where FFT represents Fourier transformation; * symbolizes the conjugate; FFT{|O1|2+|O2|2+|R|2} represents the auto-correlation spectrum; FFT{O1R*} and FFT{O2R*} are the cross-correlation spectra of FOV1 and FOV2, respectively; FFT{O1*R} and FFT{O2*R} are the conjugate spectra of FOV1 and FOV2, respectively.

As shown in Fig. 1(e), the cross-correlation spectra of FOV1 and FOV2 marked with blue rectangles can be extracted by spatial filtering technique [29] and reconstructed by inverse Fourier transformation to directly obtain the FOV1 and FOV2 intensity images and wrapped phase images without numerical diffraction. The experimental device does not have any mechanical movement during the microscopic imaging process, and the observation speed is only limited by the camera frame rate.
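The spatial-filtering reconstruction described above can be sketched as follows. This is a minimal numpy sketch, not the authors' implementation: the sideband centre `center` and window `half_size` are assumed to be chosen from the spectrum (by hand or by peak search), and a simple rectangular filter stands in for whatever filter shape is actually used.

```python
import numpy as np

def reconstruct_fov(hologram, center, half_size):
    """Extract one cross-correlation sideband by spatial filtering and
    reconstruct the complex object field; no numerical diffraction is
    needed because the camera sits at the image plane."""
    F = np.fft.fftshift(np.fft.fft2(hologram))           # centred spectrum
    cy, cx = center
    h = half_size
    # rectangular spatial filter around the chosen sideband
    window = F[cy - h:cy + h, cx - h:cx + h]
    # re-centre the sideband to remove the off-axis carrier (tilt) term
    centred = np.zeros_like(F)
    ny, nx = F.shape
    centred[ny//2 - h:ny//2 + h, nx//2 - h:nx//2 + h] = window
    field = np.fft.ifft2(np.fft.ifftshift(centred))
    intensity = np.abs(field) ** 2                       # reconstructed intensity
    wrapped_phase = np.angle(field)                      # values in [-pi, pi]
    return intensity, wrapped_phase
```

Applying this twice, once per sideband marked in Fig. 1(e), yields the FOV1 and FOV2 intensity and wrapped phase images from a single multiplexed hologram.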

2.2 Quantitative phase image stitching method

The framework of the quantitative phase image stitching method is shown in Fig. 2, which mainly includes three modules: intensity image feature extraction, phase stitching preprocessing and double FOV stitching. Figures 2(a1), 2(a2), 2(d1) and 2(d2) are all generated from the simulation model in Section 3.1.


Fig. 2. Framework of quantitative phase image stitching method.


In the intensity image feature extraction module, multi-algorithm fusion is used to adaptively locate the overlap areas, extract the feature matching point pairs, and calculate the affine transformation matrix. Canny operator is first combined with cross-correlation algorithm [30] to adaptively detect the overlap areas between the FOV1 and FOV2 intensity images to improve the efficiency of subsequent feature extraction.

Figures 3(a1) and 3(a2) are the edge binary images (I1 and I2) of Figs. 2(a1) and 2(a2) obtained by the Canny operator. The cross-correlation algorithm translates a sliding window over I2 and calculates the cross-correlation coefficient matrix C between each pixel in I1 and I2, which can be expressed as

$$C(u,v) = \frac{\sum\limits_{i = 1}^M \sum\limits_{j = 1}^N [{I_1}({x_i},{y_i}) - \overline {{I_1}} ({x_i},{y_i})][{I_2}({x_i} + u,{y_i} + v) - \overline {{I_2}} ({x_i} + u,{y_i} + v)]}{\sqrt{\sum\limits_{i = 1}^M \sum\limits_{j = 1}^N {[{I_1}({x_i},{y_i}) - \overline {{I_1}} ({x_i},{y_i})]}^2 \sum\limits_{i = 1}^M \sum\limits_{j = 1}^N {[{I_2}({x_i} + u,{y_i} + v) - \overline {{I_2}} ({x_i} + u,{y_i} + v)]}^2}}$$
where M × N denotes the image size; (xi, yi) represents the pixel coordinates, 1 ≤ xi ≤ M, 1 ≤ yi ≤ N; (u, v) is the displacement between (xi, yi) and the center of the sliding window, 0 ≤ u ≤ M − 1, 0 ≤ v ≤ N − 1; the sliding window size here is set to 5 × 5; $\overline{I_1}$ and $\overline{I_2}$ are the average pixel intensities of I1 and I2 within the sliding windows centered on (xi, yi) and (xi + u, yi + v), respectively.


Fig. 3. Overlap area location results of Canny operator combined with cross-correlation algorithm. (a1) and (a2) Edge binary images of 2(a1) and 2(a2) obtained by the Canny operator. (b1) and (b2) 0-1 positioning masks of FOV1 and FOV2 obtained by the cross-correlation algorithm.


The most relevant pixels (x1max, y1max) and (x2max, y2max) in I1 and I2 can be retrieved by searching for the location of the maximum value in C. Two 0-1 positioning masks are generated by taking y1max and N as the starting and ending columns of the I1 overlap area, and 1 and y2max as those of the I2 overlap area, as shown in Figs. 3(b1) and 3(b2). The red dashed rectangles in Figs. 2(a1) and 2(a2) indicate the located overlap areas in the FOV1 and FOV2 intensity images.
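A minimal sketch of this overlap-location step, with two simplifying assumptions of ours: the windowed search of Eq. (2) is approximated by a whole-image FFT-based zero-mean cross-correlation, and the displacement between the FOVs is taken to be purely horizontal (the `locate_overlap` name is illustrative).

```python
import numpy as np

def locate_overlap(I1, I2):
    """Estimate the column shift between two equal-size edge maps with an
    FFT-based zero-mean cross-correlation, then build the 0-1 positioning
    masks covering the overlap columns of each FOV."""
    a = I1 - I1.mean()
    b = I2 - I2.mean()
    # cross-correlation for every cyclic displacement, computed via FFTs
    num = np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))
    C = num / np.sqrt((a ** 2).sum() * (b ** 2).sum())
    u, v = np.unravel_index(np.argmax(C), C.shape)   # peak displacement
    M, N = I1.shape
    mask1 = np.zeros((M, N)); mask1[:, v:] = 1       # right part of FOV1
    mask2 = np.zeros((M, N)); mask2[:, :N - v] = 1   # left part of FOV2
    return v, mask1, mask2
```

For edge maps whose overlap is a lateral strip, the argmax of C plays the role of (x1max, y1max)/(x2max, y2max) in the text.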

Then, the scale-invariant feature transform (SIFT) algorithm [31] is utilized to extract feature points from the overlap areas in the FOV1 and FOV2 intensity images, and to calculate the feature description matrix and the corresponding information matrix. The extracted feature points are shown in Figs. 2(b1) and 2(b2). To screen out high-quality feature matching point pairs, cosine similarity analysis is performed on the two sets of feature description matrices. The selection of the cosine similarity threshold (value in the range [0, 1]) affects the matching accuracy of the feature point pairs: a low threshold allows point pairs with low similarity to be misclassified as matches, leading to errors in the calculation of the affine transformation matrix and reducing computational efficiency. Here, the empirical threshold of the cosine similarity is set to 0.8 to ensure that the number of feature matching point pairs is greater than 6 [32]. In the cosine similarity matrix, only the elements greater than the empirical threshold are retained and indexed in the information matrix to obtain the feature matching point set coordinates (X1, Y1) and (X2, Y2) between FOV1 and FOV2. The filtered feature matching point pairs are shown in Figs. 2(c1) and 2(c2).
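The cosine-similarity screening can be sketched as below, taking the SIFT descriptor matrices as given inputs (from any SIFT implementation). The function name `match_by_cosine` and the keep-one-best-match-per-feature rule are illustrative assumptions, not details from the paper.

```python
import numpy as np

def match_by_cosine(desc1, desc2, pts1, pts2, threshold=0.8):
    """Screen feature pairs by cosine similarity of their descriptors.
    desc1/desc2: (n1, d) and (n2, d) descriptor matrices (e.g. 128-D SIFT);
    pts1/pts2: the corresponding (n, 2) keypoint coordinates."""
    d1 = desc1 / np.linalg.norm(desc1, axis=1, keepdims=True)
    d2 = desc2 / np.linalg.norm(desc2, axis=1, keepdims=True)
    S = d1 @ d2.T                                    # cosine similarity matrix
    i, j = np.nonzero(S > threshold)                 # keep high-similarity pairs
    # retain at most the single best match per FOV1 feature
    best = {}
    for a, b in zip(i, j):
        if a not in best or S[a, b] > S[a, best[a]]:
            best[a] = b
    idx1 = np.array(sorted(best), dtype=int)
    idx2 = np.array([best[a] for a in idx1], dtype=int)
    return pts1[idx1], pts2[idx2]
```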

The affine transformation matrix can be obtained by performing least square calculation on (X1, Y1) and (X2, Y2), which can be expressed as

$$\left[ {\begin{array}{c} {X_2}\\ {Y_2}\\ 1 \end{array}} \right] = \left[ {\begin{array}{ccc} a&b&c\\ d&e&f\\ 0&0&1 \end{array}} \right]\left[ {\begin{array}{c} {{X_1}}\\ {{Y_1}}\\ 1 \end{array}} \right]$$
where a, b, c, d, e, and f are the parameters of the affine transformation matrix to be solved.
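The least-squares solution of Eq. (3) can be sketched as follows (hypothetical helper name; `numpy.linalg.lstsq` solves for both rows of the affine matrix in one call because the right-hand side has two columns):

```python
import numpy as np

def fit_affine(pts1, pts2):
    """Least-squares fit of the affine matrix of Eq. (3) from matched
    point pairs pts1 -> pts2, each an (n, 2) array with n >= 3."""
    n = len(pts1)
    A = np.hstack([pts1, np.ones((n, 1))])           # rows [x1, y1, 1]
    # solve A @ [a b c].T = X2 and A @ [d e f].T = Y2 simultaneously
    coeffs, *_ = np.linalg.lstsq(A, pts2, rcond=None)
    T = np.eye(3)
    T[0, :] = coeffs[:, 0]                           # a, b, c
    T[1, :] = coeffs[:, 1]                           # d, e, f
    return T
```

With more than the minimum number of pairs, the least-squares fit averages out residual localisation noise in the matched keypoints.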

The wrapped phase images (Figs. 4(e1) and 4(e2)) of FOV1 and FOV2 are unwrapped by the iterative least-squares unwrapping algorithm [33], as shown in Figs. 2(d1) and 2(d2). Although the telecentric microscope system corrects the quadratic phase aberration, and spectrum centering eliminates the tilt aberration, the unwrapped phase images still contain some residual aberrations [13], resulting in low correlation between the phase overlap areas.
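The paper uses a 2-D iterative least-squares unwrapper [33]. As a 1-D illustration of the wrapped-to-continuous relation that any unwrapper exploits (not the authors' algorithm), one can integrate re-wrapped phase differences, i.e. Itoh's method, which is exact when the true phase changes by less than π per sample:

```python
import numpy as np

def unwrap_1d(wrapped):
    """Unwrap a 1-D phase profile by integrating its wrapped differences
    (Itoh's method); valid when |true phase step| < pi per sample."""
    d = np.diff(wrapped)
    # re-wrap the sample-to-sample differences into [-pi, pi)
    d = (d + np.pi) % (2 * np.pi) - np.pi
    # integrate the corrected differences from the first sample
    return np.concatenate([[wrapped[0]], wrapped[0] + np.cumsum(d)])
```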


Fig. 4. Simulation model. (a1) and (a2) Ideal FOV1 and FOV2 phase images; (b1) and (b2) phase aberration distributions of FOV1 and FOV2 generated by ZPF; (c) simulated double FOV multiplexed hologram; (d) hologram spectrum; (e1) and (e2) wrapped phase images of FOV1 and FOV2.


In the phase stitching preprocessing module, aberration compensation is performed on each unwrapped phase image through the Zernike polynomial fitting method [34] with curve fitting and background segmentation (ZPF-CFBS). A 0-1 background mask is adaptively generated to extract the background phase data for calculating accurate Zernike coefficients. The residual aberrations φa can be estimated by ZPF, written as

$${\varphi _a}(x,y) = \sum\limits_{j = 0}^{k - 1} {{a_j}{Z_j}} (x,y)$$
where Zj(x, y) denotes the jth-order Zernike polynomial in Cartesian coordinates, aj is the Zernike coefficient, and k is the number of Zernike polynomial terms.
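A least-squares ZPF on mask-selected background pixels might look like the sketch below. The Zernike ordering, the term count k = 6, and the normalised coordinate grid are our assumptions; the curve-fitting and background-segmentation steps of ZPF-CFBS that produce the mask are not reproduced here.

```python
import numpy as np

def zernike_basis(x, y):
    """First six Zernike polynomials in Cartesian coordinates (one common
    ordering; the exact convention and term count are assumptions here):
    piston, x-tilt, y-tilt, oblique astigmatism, defocus, astigmatism."""
    return np.stack([np.ones_like(x), x, y,
                     2 * x * y, 2 * (x**2 + y**2) - 1, x**2 - y**2], axis=-1)

def compensate_aberration(phase, background_mask):
    """Estimate the residual aberrations of Eq. (4) by least-squares ZPF on
    the background pixels selected by a 0-1 mask, then subtract the fit."""
    M, N = phase.shape
    y, x = np.mgrid[-1:1:M * 1j, -1:1:N * 1j]        # normalised coordinates
    Z = zernike_basis(x, y)                          # shape (M, N, k)
    sel = background_mask.astype(bool)
    coeffs, *_ = np.linalg.lstsq(Z[sel], phase[sel], rcond=None)
    return phase - Z @ coeffs                        # compensated phase
```

Fitting only on object-free pixels prevents the sample's own phase from leaking into the aberration estimate.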

Ideally, the object-free background areas should be flat (phase values of 0 rad), serving as a natural baseline. However, after phase unwrapping and aberration compensation, the background phase data in each FOV phase image is usually different. Therefore, a common baseline needs to be established for all the phase images before stitching [22].

The 0-1 positioning masks generated by the cross-correlation algorithm are used to extract the phase overlap areas, and the 0-1 background masks generated by ZPF-CFBS are adopted to obtain the background phase data within the overlap areas. The common baseline is established by analyzing the differences in these background phase data and subtracting the estimated offset values from each phase image. Moreover, the phase noise is suppressed by bilateral filtering. The phase stitching preprocessing results are shown in Figs. 2(e1) and 2(e2).
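The offset estimation for baseline unification can be sketched as follows. This is a deliberately reduced version: the paper analyses background phase differences in the overlap, which we collapse here to a per-FOV median offset, and the bilateral filtering step is omitted.

```python
import numpy as np

def unify_baseline(phase1, phase2, overlap_mask1, overlap_mask2,
                   bg_mask1, bg_mask2):
    """Subtract each FOV's background offset, estimated from object-free
    pixels inside the overlap area, so that both phase images share a
    common 0-rad baseline before stitching."""
    sel1 = (overlap_mask1 * bg_mask1).astype(bool)
    sel2 = (overlap_mask2 * bg_mask2).astype(bool)
    off1 = np.median(phase1[sel1])    # median is robust to residual noise
    off2 = np.median(phase2[sel2])
    return phase1 - off1, phase2 - off2
```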

In the double FOV stitching module, the FOV2 image is linearly transformed through the affine transformation matrix and then registered with the FOV1 image to achieve rough stitching of the intensity or phase images of the two FOVs. Finally, a weighted fusion algorithm [35] is used to optimize the rough stitching result and gradually reduce the difference between the original image and the registered image. Figures 2(f) and 2(g) are the intensity image and 3D phase distribution with extended FOV after weighted fusion, respectively.
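Weighted fusion across the overlap can be illustrated with linear feathering for the special case of a pure horizontal displacement (the actual method first applies a general affine warp; the function name and the translation-only assumption are ours):

```python
import numpy as np

def feather_stitch(img1, img2, shift):
    """Rough stitch of two equal-size images whose FOV2 is displaced by
    `shift` columns, followed by linear weighted fusion in the overlap."""
    M, N = img1.shape
    out = np.zeros((M, N + shift))
    out[:, :shift] = img1[:, :shift]                 # FOV1-only region
    out[:, N:] = img2[:, N - shift:]                 # FOV2-only region
    # linear weight ramp across the overlap suppresses the visible seam
    w = np.linspace(0.0, 1.0, N - shift)
    out[:, shift:N] = (1 - w) * img1[:, shift:] + w * img2[:, :N - shift]
    return out
```

The ramp makes each output pixel in the overlap a convex combination of the two FOVs, so any small residual intensity or phase difference fades gradually instead of appearing as a seam.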

3. Results and discussion

To test the performance of the proposed method, quantitative phase stitching experiments were carried out using simulation models, a USAF 1951 resolution target, a female ascaris transection, a frog cleavage stage slice and an ovarian slice. Meanwhile, the phase stitching results were compared with the direct phase image stitching algorithm (DPIS). DPIS directly uses the SIFT algorithm and cosine similarity to screen feature matching point pairs between the two FOV phase images after performing ZPF-CFBS aberration compensation on the unwrapped phase images, and calculates the corresponding affine transformation matrix. The FOV2 phase image is linearly transformed, then registered with the FOV1 phase image, and finally the weighted fusion algorithm is adopted to remove the seams.

To test the stitching accuracy under different overlap rates, in addition to DPIS, two commonly used feature extraction algorithms were also compared. Following the phase image stitching strategy guided by the reconstructed intensity images, only the combination of the Canny operator, cross-correlation algorithm and SIFT algorithm in the intensity image feature extraction module is replaced by: (1) the phase correlation method (overlap area location) and Harris corner detection (feature extraction) [11], referred to as PCHCD; (2) the speeded-up robust features algorithm [36] (applied to the entire image), referred to as SURF; all other steps are the same as in the proposed method.

All these algorithms were executed in the environment of MATLAB 2020a with Intel Core i5-12400F CPU at 2.50 GHz and 64.0 GB RAM.

3.1 Simulation results

To simulate the double FOV multiplexed hologram with the coherent noise and various aberrations in practical applications, the simulation principle and setting parameters are as follows:

  • a) The ideal FOV1 and FOV2 phase images (φs1 and φs2) of the resolution target are simulated, in which the heights of the element structure and the flat background are 1 rad and 0 rad, respectively. Figures 4(a1) and 4(a2) are the ideal phase images with the double FOV overlap rate set to 30%.
  • b) The phase aberrations (φa1 and φa2) generated by ZPF are added to the FOV1 and FOV2 phase images to simulate the optical aberrations in double FOV multiplexed DHM. The Zernike coefficients and aberration types set in FOV1 and FOV2 are listed in Table 1, and the generated phase aberration distributions are shown in Figs. 4(b1) and 4(b2).
  • c) The object beam OFi carrying different FOV information incident on the CMOS can be approximately estimated as
    $${O_{Fi}} = {A_i} \cdot \exp [{j({{\varphi_{si}} + {\varphi_a}_i} )+ jk({\cos {\alpha_i} \cdot x + \cos {\beta_i} \cdot y} )} ]$$
    where i denotes the FOV number; A denotes the sample amplitude; k is the wave number 2π/λ, and λ is the laser wavelength; α and β represent the incident angles of the object beam in the x and y directions, respectively. Considering that OF1 and OF2 are the orthogonally polarized object beams, the double FOV multiplexed hologram Hm can be approximately estimated as
    $${H_m} = {|{{O_{F1}} + R} |^2} + {|{{O_{F2}} + R} |^2} + N$$
    where N denotes the coherent noise and the instrumental noise during image acquisition procedure.
  • d) In the simulation, the incident angles of the object beams α1, β1, α2, and β2 are set to 3.5°, 1.5°, 3.5°, and −1.5°, respectively. The laser wavelength is 632.8 nm, the multiplexed hologram size is 1000 × 1000 pixels, and the pixel size is 3.45 × 3.45 µm. The random Gaussian white noise with the mean 3.5 and the standard deviation 1 is added to simulate the noise N [37].
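The simulation steps a)–d) can be sketched as below. This is a simplified version of Eqs. (5) and (6): the reference is taken as an on-axis unit-amplitude plane wave, the small tilt angles enter through their transverse direction cosines (written here as sines of the tilt angles), and the `tilts` parameter name is ours; parameter values follow the text.

```python
import numpy as np

def multiplexed_hologram(phi1, phi2, wavelength=632.8e-9, pixel=3.45e-6,
                         tilts=((3.5, 1.5), (3.5, -1.5)), noise_std=1.0):
    """Simulate a double FOV multiplexed hologram per Eqs. (5)-(6): two
    tilted object beams carrying phases phi1/phi2 each interfere with a
    plane reference; since O1 and O2 are orthogonally polarized, the two
    interference intensities simply add."""
    M, N = phi1.shape
    yy, xx = np.mgrid[:M, :N]
    x, y = xx * pixel, yy * pixel
    k = 2 * np.pi / wavelength                      # wave number
    R = 1.0                                         # unit-amplitude reference
    H = np.zeros((M, N))
    for phi, (ax, by) in zip((phi1, phi2), tilts):
        # off-axis carrier from the small incidence angles (direction cosines)
        carrier = k * (np.sin(np.radians(ax)) * x + np.sin(np.radians(by)) * y)
        O = np.exp(1j * (phi + carrier))
        H += np.abs(O + R) ** 2
    rng = np.random.default_rng(0)
    return H + rng.normal(3.5, noise_std, (M, N))   # noise term N of Eq. (6)
```

At these parameters the carrier is about 2.1 rad/pixel, below the Nyquist limit of π rad/pixel per fringe period, so the off-axis fringes are properly sampled.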


Table 1. Zernike Polynomials and Coefficients in Simulation

Figures 4(c) and 4(d) are the simulated double FOV multiplexed hologram and the corresponding spectrum. The reconstructed intensity images and the wrapped phase images after spatial filtering are shown in Figs. 2(a1), 2(a2), 4(e1) and 4(e2), respectively. After stitching by the proposed method, the sizes of the intensity image (Fig. 2(f)) and 3D phase distribution (Fig. 2(g)) are expanded from the original 1000 × 1000 pixels to 1000 × 1692 pixels, indicating that the imaging FOV is laterally increased by 1.69 times.

Figures 5(a1)−(a4) are 2D phase stitching results of the simulation model obtained by the proposed method, DPIS, PCHCD and SURF, respectively. Compared with the severely deformed stitching results of DPIS and PCHCD, the proposed method and SURF achieve better stitching quality. DPIS is sensitive to the phase noise, and feature point mismatch leads to large calculation error of the affine transformation matrix. In PCHCD, the phase correlation method is used to locate the overlap areas, which requires the overlap rate to be above 50%. And the Harris corner detection operator may miss the feature points in the areas with less texture or uniform texture, resulting in considerable image registration error and stitching error.


Fig. 5. Phase stitching results of simulation model. (a1)−(a4) Phase stitching images of the proposed method, DPIS, PCHCD, and SURF, respectively; (b1)−(b4) 3D distributions of local phase stitching regions (200 × 200 pixels) marked by the white rectangles in 5(a1)−(a4); (c1) and (c2) sample phase curves extracted along lines 1 and 2 from the ideal sample, the proposed method (a1), and SURF (a4), respectively.


The local phase stitching regions (200 × 200 pixels) marked by the white rectangles in Figs. 5(a1)−(a4) are enlarged and drawn into the 3D distributions, as shown in Figs. 5(b1)−(b4). The stitched phase image of SURF has a slight misalignment at the seam, which can be seen more clearly in Fig. 5(c1).

Figures 5(c1) and 5(c2) are the sample phase curves extracted along lines 1 and 2 from the stitched phase images of the ideal sample, the proposed method and SURF, respectively. The sample phase curves (yellow) of the proposed method are almost consistent with the ideal curves (red). However, due to stitching misalignment, the sample phase curve (blue) of SURF in Fig. 5(c1) has two unexpected bumps on the flat background area. Compared to SIFT employed in our method, SURF detects fewer feature points and is not robust to large rotations, resulting in more misalignments using SURF than SIFT [38].

To further test the robustness and quantitative phase stitching accuracy of the four methods, double FOV models with overlap rates of 50% and 70% were added to the simulation test. Figure 6 shows the statistical results of root mean square error (RMSE), mean absolute error (MAE), and structural similarity (SSIM) between the phase stitching results obtained by the four methods and the ideal phase images, together with the time consumption. Compared with the other three methods, the proposed method has the smallest RMSE and MAE and the largest SSIM under all three overlap rates. In terms of stitching efficiency, we use the SIFT algorithm, which offers better performance at the cost of longer computation time, to extract the feature points, ensuring accurate image registration and high-quality phase stitching.
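The evaluation metrics can be computed as in this sketch. RMSE and MAE are standard; the SSIM shown is a single-window global SSIM with the usual constants, which may differ from the windowed implementation used in the paper.

```python
import numpy as np

def stitch_metrics(result, ideal):
    """RMSE, MAE and a global SSIM between a stitched phase image and the
    ideal one (single-window SSIM with the standard C1/C2 constants)."""
    err = result - ideal
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    L = ideal.max() - ideal.min() or 1.0             # dynamic range
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = result.mean(), ideal.mean()
    vx, vy = result.var(), ideal.var()
    cov = np.mean((result - mx) * (ideal - my))
    ssim = ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
    return rmse, mae, ssim
```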


Fig. 6. Quantitative evaluation of simulated phase stitching results. (a) RMSE; (b) MAE; (c) SSIM; and (d) time consumption.


The above simulation results show that the proposed method can realize quantitative phase image stitching with different overlap rates, multiple aberration components and the coherent noise, and accurately expand the imaging FOV of DHM.

3.2 Experiment results

Figures 7(a1) and 7(a2) are the FOV1 and FOV2 intensity images reconstructed from the experimental hologram (Fig. 1(d)) of the USAF 1951 resolution target, which have an overlap rate of 58% and a lateral resolution of 4.39 µm (Group 6-Element 6). The overlap areas located by the Canny operator and the cross-correlation algorithm are marked with red dashed rectangles. Figures 7(b1) and 7(b2) are the results of the phase stitching preprocessing on the FOV1 and FOV2 unwrapped phase images. Affected by the coherent noise and the phase recovery error, the lateral resolution of the phase images is 4.92 µm (Group 6-Element 5), which is close to the theoretical resolution (5.19 µm) of the DHM device. The sizes of Figs. 7(a1), 7(a2), 7(b1) and 7(b2) are all 900 × 900 pixels. Figures 7(c1), 7(c2) and 7(d1) are the intensity stitching image, the 3D phase stitching distribution, and the phase stitching image of the USAF 1951 resolution target, all with sizes of 900 × 1275 pixels. The imaging FOV is laterally expanded by 1.42 times. Figures 7(d2)−(d4) are the phase stitching images of the USAF 1951 resolution target obtained by DPIS, PCHCD and SURF, respectively. The registration error generated by DPIS results in transformation distortion of the FOV2 phase image and an obvious stitching seam, as shown by the white and red rectangles in Fig. 7(d2).


Fig. 7. Stitching experiment results of USAF 1951 resolution target with overlap rate of 58%. (a1) and (a2) FOV1 and FOV2 intensity images reconstructed from 1(d); (b1) and (b2) phase stitching preprocessing results of FOV1 and FOV2; (c1) intensity stitching image; (c2) 3D phase stitching distribution; (d1) phase stitching image of the proposed method; (d2) phase stitching image of DPIS; (d3) phase stitching image of PCHCD; (d4) phase stitching image of SURF.


The positioning error in the overlap areas generated by PCHCD leads to shortening of the non-overlap area in the FOV2 phase image after stitching, as shown by the red rectangle in Fig. 7(d3). In the red rectangle in Fig. 7(d4), some structural misalignments and a visible seam appear in the phase stitching image of SURF. In contrast, the phase stitching image of the proposed method exhibits the 3D sample structure with a large FOV, without resolution limitation, regional distortion, structural misalignment, loss of detail, or obvious seams.

Figure 8(a) is the 2D tissue structure of a female ascaris transection observed through an optical microscope (VHX-6000; 200X). Figures 8(b1) and 8(b2) are the FOV1 and FOV2 intensity images of the female ascaris transection, with the overlap rate of 30%. And Figs. 8(c1) and 8(c2) are the 3D phase distributions of FOV1 and FOV2 after phase stitching preprocessing. The sizes of Figs. 8(b1), 8(b2), 8(c1) and 8(c2) are all 520 × 520 pixels. Figures 8(d1) and 8(d2) are the intensity stitching image and the 3D phase stitching distribution of the female ascaris transection obtained by the proposed method, with the sizes expanded to 520 × 880 pixels. Figures 8(e1)−(e4) are the phase stitching images of the female ascaris transection obtained by DPIS, PCHCD, SURF, and the proposed method, respectively.


Fig. 8. Stitching experiment results of female ascaris transection with overlap rate of 30%. (a) 2D tissue structure observed by optical microscope (VHX-6000; 200X); (b1) and (b2) FOV1 and FOV2 intensity images; (c1) and (c2) 3D phase distributions of FOV1 and FOV2 after phase stitching preprocessing; (d1) and (d2) intensity stitching image and 3D phase stitching distribution obtained by the proposed method; (e1) phase stitching image of DPIS; (e2) phase stitching image of PCHCD; (e3) phase stitching image of SURF; (e4) phase stitching image of the proposed method.


Taking Fig. 8(a) as a reference, the proposed method can restore and stitch the tissue structure of the female ascaris transection with high quality, while the phase stitching results of DPIS, PCHCD and SURF all have varying degrees of stitching misalignment, regional deformation and registration errors, as shown by the red rectangles in Figs. 8(e1)−(e4).

To quantitatively evaluate the experimental phase stitching accuracy of the USAF 1951 resolution target and the female ascaris transection, the overlap areas in the linearly transformed FOV1/FOV2 phase images are selected. Figure 9 shows the statistical results of RMSE, MAE and SSIM before and after phase stitching in these areas. It can be seen that the RMSEs and MAEs obtained by the proposed method are the smallest, and the SSIMs are as high as 99.5%.


Fig. 9. Quantitative evaluation of experimental phase stitching results. (a) RMSE; (b) MAE; and (c) SSIM.


Figure 10 shows the stitching experiment results of the frog cleavage stage slice and the ovarian slice, with FOV overlap rates of 44% and 45%, respectively. After phase stitching by the proposed method, the phase image of the frog cleavage stage slice is expanded from the original 520 × 520 pixels to 520 × 809 pixels, and the phase image of the ovarian slice from the original 430 × 430 pixels to 430 × 654 pixels. Compared with Figs. 10(a1) and 10(a2) captured by the optical microscope, our method accurately expands the imaging FOV while obtaining 3D phase distributions close to the tissue structures of the frog cleavage stage slice and the ovarian slice.


Fig. 10. Stitching experiment results of frog cleavage stage slice and ovarian slice obtained by the proposed method. (a1) and (a2) 2D tissue structures observed by optical microscope (VHX-6000; 200X); (b1)−(b4) FOV1 and FOV2 intensity images; (c1)−(c4) 3D phase distributions of FOV1 and FOV2 after phase stitching preprocessing; (d1) and (d2) intensity stitching images; (f1) and (f2) 3D phase stitching distributions.


Finally, multiple simulation tests were conducted on two-FOV stitching with an overlap rate of 13%, four-FOV stitching with an overlap rate of 25%, and six-FOV stitching with overlap rates of 18% and 25%. Through phase image stitching of six FOVs, the imaging FOV can be accurately expanded to 4.5 times, and the SSIM remains above 98%. Specific experimental results and quantitative evaluation data are provided in Supplement 1.

Table 2 summarizes all the stitching experimental data obtained by the proposed method. Simulation and experimental results demonstrate that the proposed method is capable of quantitative phase image stitching for different samples with various overlap rates and image sizes. Even at a low overlap rate of 13%, the imaging FOV of DHM is accurately expanded to 1.87 times.
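The FOV expansion figures above follow directly from the overlap geometry: for two equal FOVs related by a pure translation along one axis, the stitched width is 2W minus the overlap, so the expansion factor is 2 minus the overlap rate. A quick consistency check under that simplifying assumption:

```python
def expansion_factor(overlap_rate):
    """FOV expansion from stitching two equal FOVs along one axis."""
    return 2.0 - overlap_rate

def stitched_width(width_px, overlap_rate):
    """Stitched width in pixels: 2W minus the (rounded) overlap."""
    return 2 * width_px - round(width_px * overlap_rate)
```

`expansion_factor(0.13)` reproduces the reported 1.87× expansion at a 13% overlap rate; `stitched_width(520, 0.44)` gives 811 pixels, consistent with the reported 520 × 809 result for the frog cleavage stage slice once the rounding of the 44% overlap rate is accounted for.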


Table 2. Stitching experimental data obtained by the proposed method

4. Conclusion

In summary, we propose a quantitative phase image stitching method guided by reconstructed intensity images in one-shot double FOV multiplexed DHM for 3D reconstruction and phase measurement of biological tissues over a large FOV. This method has three significant advantages: (1) the double FOV multiplexed DHM system improves the temporal resolution of DHM and avoids the image registration errors caused by uneven illumination intensity; (2) the Canny operator combined with the cross-correlation algorithm adaptively and accurately locates the overlap areas between the double FOVs, without interference from DHM optical aberrations, quantitative phase characteristics, or phase recovery errors; (3) the phase stitching preprocessing module performs ZPF-CFBS aberration compensation and baseline unification, improving the phase recovery accuracy and the correlation between the phase overlap areas.
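The edge-guided overlap location described above can be sketched as follows. For brevity, a gradient-magnitude threshold stands in for the Canny operator, and the circular cross-correlation is evaluated through FFTs; all inputs are hypothetical:

```python
import numpy as np

def edge_map(img, thresh=0.2):
    """Gradient-magnitude edge detector: a simple stand-in for the
    Canny operator used in the paper."""
    gy, gx = np.gradient(np.asarray(img, dtype=float))
    mag = np.hypot(gx, gy)
    return (mag > thresh * mag.max()).astype(float)

def locate_shift(e1, e2):
    """Integer translation of e2 relative to e1 that maximizes their
    circular cross-correlation, evaluated through FFTs."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(e1)) * np.fft.fft2(e2)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = e1.shape
    if dy > h // 2:   # map wrap-around indices to signed shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

Correlating the edge maps rather than the raw phase images is what decouples the registration from aberrations and phase recovery errors: a feature shifted by `np.roll` is recovered exactly by `locate_shift`.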

The performance and effectiveness of the proposed method are mainly limited by factors such as the overlap rate between FOVs, the hologram SNR, the sample texture features, and the weighted fusion method. Simulation and experimental results show that if the overlap rate between FOVs is larger than 13% and the hologram SNR is in the range of 9 to 29 dB, the proposed method can accurately stitch the intensity and phase images of multiple FOVs, and the SSIM remains above 96%. When the overlap rate is less than 13%, sufficient sample texture features cannot be extracted; when the coherent noise level is too high and the hologram SNR falls below 9 dB, texture feature extraction becomes error-prone. Either condition leads to image stitching failure.

Experimental results show that even when the phase images contain coherent noise, complex aberrations, a low overlap rate, and a large size, the proposed method achieves high-quality phase stitching without stitching misalignment, regional deformation, obvious seams, or loss of detail. By incorporating additional FOV images, the proposed method can further expand the imaging FOV. Therefore, our method holds significant application potential for biological tissue observation and microstructure measurement in DHM systems.

Funding

National Natural Science Foundation of China (52035015); Natural Science Foundation of Zhejiang Province (LQ23E050020); Zhejiang Sci-Tech University (21022309-Y).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. B. Javidi, A. Carnicer, A. Anand, et al., “Roadmap on digital holography,” Opt. Express 29(22), 35078–35118 (2021). [CrossRef]  

2. T. O’Connor, A. Anand, B. Andemariam, et al., “Deep learning-based cell identification and disease diagnosis using spatio-temporal cellular dynamics in compact digital holographic microscopy,” Biomed. Opt. Express 11(8), 4491–4508 (2020). [CrossRef]  

3. M. Baczewska, P. Stępień, M. Mazur, et al., “Method to analyze effects of low-level laser therapy on biological cells with a digital holographic microscope,” Appl. Opt. 61(5), B297–B306 (2022). [CrossRef]  

4. L. Huang, J. Tang, L. Yan, et al., “Wrapped phase aberration compensation using deep learning in digital holographic microscopy,” Appl. Phys. Lett. 123(14), 141109 (2023). [CrossRef]  

5. W. Qu, C. O. Choo, Y. Yu, et al., “Microlens characterization by digital holographic microscopy with physical spherical phase compensation,” Appl. Opt. 49(33), 6448–6454 (2010). [CrossRef]  

6. N. Patel, V. Trivedi, S. Mahajan, et al., “Wavefront division digital holographic microscopy,” Biomed. Opt. Express 9(6), 2779–2784 (2018). [CrossRef]  

7. V. Micó, J. Zheng, J. Garcia, et al., “Resolution enhancement in quantitative phase microscopy,” Adv. Opt. Photonics 11(1), 135–214 (2019). [CrossRef]  

8. D. Chen, J. Peng, S. Valyukh, et al., “Measurement of high numerical aperture cylindrical surface with iterative stitching algorithm,” Appl. Sci. 8(11), 2092 (2018). [CrossRef]  

9. C. Dai, Y. Yu, G. Chen, et al., “Study of the holographic phase stitching technique,” Proc. SPIE 7000, 70001T (2008). [CrossRef]  

10. Y. Wen, W. Qu, H. Cheng, et al., “Further investigation on the phase stitching and system errors in digital holography,” Appl. Opt. 54(2), 266–276 (2015). [CrossRef]  

11. Z. Xie, T. Guo, W. Liu, et al., “Phase splicing method based on multi-algorithm fusion in holography,” Chin. J. Laser 48(7), 0709001 (2021). [CrossRef]  

12. J. Long, P. Cai, S. Pan, et al., “Phase stitching based multi-CCDs deformation measurement in digital speckle pattern interferometry,” Acta Photonica Sinica 51(4), 0412003 (2022).

13. X. Lai, S. Xiao, Y. Ge, et al., “Digital holographic phase imaging with aberrations totally compensated,” Biomed. Opt. Express 10(1), 283–292 (2019). [CrossRef]  

14. B. Kemper and G. von Bally, “Digital holographic microscopy for live cell applications and technical inspection,” Appl. Opt. 47(4), A52–A61 (2008). [CrossRef]  

15. X. Lai, S. Xiao, C. Xu, et al., “Aberration-free digital holographic phase imaging using the derivative-based principal component analysis,” J. Biomed. Opt. 26(4), 046501 (2021). [CrossRef]  

16. S. Liu, Q. Lian, and Z. Xu, “Phase aberration compensation for digital holographic microscopy based on double fitting and background segmentation,” Opt. Lasers Eng. 115, 238–242 (2019). [CrossRef]  

17. J. Min, B. Yao, S. Ketelhut, et al., “Simple and fast spectral domain algorithm for quantitative phase imaging of living cells with digital holographic microscopy,” Opt. Lett. 42(2), 227–230 (2017). [CrossRef]  

18. S. Ma, Q. Liu, Y. Yu, et al., “Quantitative phase imaging in digital holographic microscopy based on image inpainting using a two-stage generative adversarial network,” Opt. Express 29(16), 24928–24946 (2021). [CrossRef]  

19. W. Xiao, L. Xin, R. Cao, et al., “Sensing morphogenesis of bone cells under microfluidic shear stress by holographic microscopy and automatic aberration compensation with deep learning,” Lab Chip 21(7), 1385–1394 (2021). [CrossRef]  

20. P. Stępień, D. Korbuszewski, and M. M. Kujawińska, “Digital holographic microscopy with extended field of view using tool for generic image stitching,” ETRI J. 41(1), 73–83 (2019). [CrossRef]  

21. J. Qian, S. Feng, T. Tao, et al., “Deep-learning-enabled geometric constraints and phase unwrapping for single-shot absolute 3D shape measurement,” APL Photonics 5(4), 046105 (2020). [CrossRef]  

22. P. Stępień, W. Krauze, and M. Kujawińska, “Preprocessing methods for quantitative phase image stitching,” Biomed. Opt. Express 13(1), 1–13 (2022). [CrossRef]  

23. S. Feng, C. Zuo, L. Zhang, et al., “Generalized framework for non-sinusoidal fringe analysis using deep learning,” Photonics Res. 9(6), 1084–1098 (2021). [CrossRef]  

24. C. Zuo, J. Qian, S. Feng, et al., “Deep learning in optical metrology: A review,” Light: Sci. Appl. 11(1), 39 (2022). [CrossRef]  

25. M. Fu, H. Liang, C. Zhu, et al., “Image stitching techniques applied to plane or 3-D models: a review,” IEEE Sens. J. 23(8), 8060–8079 (2023). [CrossRef]  

26. L. Nie, C. Lin, K. Liao, et al., “Unsupervised deep image stitching: Reconstructing stitched features to images,” IEEE Trans. Image Process. 30, 6184–6197 (2021). [CrossRef]  

27. P. Girshovitz and N. T. Shaked, “Doubling the field of view in off-axis low-coherence interferometric imaging,” Light: Sci. Appl. 3(3), e151 (2014). [CrossRef]  

28. Z. Huang and L. Cao, “High bandwidth-utilization digital holographic multiplexing: an approach using Kramers–Kronig relations,” Adv. Photonics Res. 3(2), 2100273 (2022). [CrossRef]  

29. J. Zhang, L. Huang, B. Chen, et al., “Accurate extraction of the + 1 term spectrum with spurious spectrum elimination in off-axis digital holography,” Opt. Express 30(15), 28142–28157 (2022). [CrossRef]  

30. A. L. Pilchak, A. R. Shiveley, P. A. Shade, et al., “Using cross-correlation for automated stitching of two-dimensional multi-tile electron backscatter diffraction data,” J. Microsc. 248(2), 172–186 (2012). [CrossRef]  

31. F. Wang, P. Tu, C. Wu, et al., “Multi-image mosaic with SIFT and vision measurement for microscale structures processed by femtosecond laser,” Opt. Lasers Eng. 100, 124–130 (2018). [CrossRef]  

32. A. S. A. AL-Jumaili, H. K. Tayyeh, and A. Alsadoon, “AlexNet convolutional neural network architecture with cosine and hamming similarity/distance measures for fingerprint biometric matching,” Baghdad Sci. J. 20(6), 2559–2567 (2023). [CrossRef]  

33. H. Xia, S. Montresor, R. Guo, et al., “Phase calibration unwrapping algorithm for phase data corrupted by strong decorrelation speckle noise,” Opt. Express 24(25), 28713–28730 (2016). [CrossRef]  

34. L. Huang, L. Yan, B. Chen, et al., “Phase aberration compensation of digital holographic microscopy with curve fitting preprocessing and automatic background segmentation for microstructure testing,” Opt. Commun. 462, 125311 (2020). [CrossRef]  

35. H. Gao, Z. Huang, H. Yang, et al., “Research on improved multi-channel image stitching technology based on fast algorithms,” Electronics 12(7), 1700 (2023). [CrossRef]  

36. W. Zhang, X. Li, J. Yu, et al., “Remote sensing image mosaic technology based on SURF algorithm in agriculture,” EURASIP J. Image Video Process. 2018(1), 85 (2018). [CrossRef]  

37. P. Cheremkhin, N. Evtikhiev, V. Krasnov, et al., “Shot noise and fixed-pattern noise effects on digital hologram reconstruction,” Opt. Lasers Eng. 139, 106461 (2021). [CrossRef]  

38. Z. Wang and Z. Yang, “Review on image-stitching techniques,” Multimedia Syst. 26(4), 413–430 (2020). [CrossRef]  

Supplementary Material (1)

Supplement 1: Simulation experiments on multiple hologram SNRs, minimum overlap rate, and multi-FOV stitching




Figures (10)

Fig. 1.
Fig. 1. Principle of one-shot double FOV multiplexed DHM. (a) Optical setup. (b) Experimental device. (c) Schematic diagram of double FOV multiplexing. (d) Double FOV multiplexed hologram of USAF 1951 resolution target. (e) Hologram spectrum. NDF1/NDF2, neutral density filter; HWP, half wave plate; BE, beam expander; BS1/BS2, beam splitter; M1/M2, mirror; MO, microscope objective; TL, tube lens; L1/L2, lens; PBS, polarization beam splitter; QWP1/QWP2, quarter-wave plate; RR, retro-reflector; CMOS, CMOS camera.
Fig. 2.
Fig. 2. Framework of quantitative phase image stitching method.
Fig. 3.
Fig. 3. Overlap area location results of Canny operator combined with cross-correlation algorithm. (a1) and (a2) Edge binary images of 2(a1) and 2(a2) obtained by the Canny operator. (b1) and (b2) 0-1 positioning masks of FOV1 and FOV2 obtained by the cross-correlation algorithm.
Fig. 4.
Fig. 4. Simulation model. (a1) and (a2) Ideal FOV1 and FOV2 phase images; (b1) and (b2) phase aberration distributions of FOV1 and FOV2 generated by ZPF; (c) simulated double FOV multiplexed hologram; (d) hologram spectrum; (e1) and (e2) wrapped phase images of FOV1 and FOV2.
Fig. 5.
Fig. 5. Phase stitching results of simulation model. (a1)−(a4) Phase stitching images of the proposed method, DPIS, PCHCD, and SURF, respectively; (b1)−(b4) 3D distributions of local phase stitching regions (200 × 200 pixels) marked by the white rectangles in 5(a1)−(a4); (c1) and (c2) sample phase curves extracted along lines 1 and 2 from the ideal sample, the proposed method (a1), and SURF (a4), respectively.
Fig. 6.
Fig. 6. Quantitative evaluation of simulated phase stitching results. (a) RMSE; (b) MAE; (c) SSIM; and (d) time consumption.
Fig. 7.
Fig. 7. Stitching experiment results of USAF 1951 resolution target with overlap rate of 58%. (a1) and (a2) FOV1 and FOV2 intensity images reconstructed from 1(d); (b1) and (b2) phase stitching preprocessing results of FOV1 and FOV2; (c1) intensity stitching image; (c2) 3D phase stitching distribution; (d1) phase stitching image of the proposed method; (d2) phase stitching image of DPIS; (d3) phase stitching image of PCHCD; (d4) phase stitching image of SURF.
Fig. 8.
Fig. 8. Stitching experiment results of female ascaris transection with overlap rate of 30%. (a) 2D tissue structure observed by optical microscope (VHX-6000; 200X); (b1) and (b2) FOV1 and FOV2 intensity images; (c1) and (c2) 3D phase distributions of FOV1 and FOV2 after phase stitching preprocessing; (d1) and (d2) intensity stitching image and 3D phase stitching distribution obtained by the proposed method; (e1) phase stitching image of DPIS; (e2) phase stitching image of PCHCD; (e3) phase stitching image of SURF; (e4) phase stitching image of the proposed method.
Fig. 9.
Fig. 9. Quantitative evaluation of experimental phase stitching results. (a) RMSE; (b) MAE; and (c) SSIM.
Fig. 10.
Fig. 10. Stitching experiment results of frog cleavage stage slice and ovarian slice obtained by the proposed method. (a1) and (a2) 2D tissue structures observed by optical microscope (VHX-6000; 200X); (b1)−(b4) FOV1 and FOV2 intensity images; (c1)−(c4) 3D phase distributions of FOV1 and FOV2 after phase stitching preprocessing; (d1) and (d2) intensity stitching images; (f1) and (f2) 3D phase stitching distributions.

Tables (2)

Table 1. Zernike Polynomials and Coefficients in Simulation

Table 2. Stitching experimental data obtained by the proposed method

Equations (6)


$$F = \mathrm{FFT}\left\{|O_1+O_2+R|^2\right\} = \mathrm{FFT}\left\{|O_1|^2+|O_2|^2+|R|^2\right\} + \mathrm{FFT}\{O_1R^*\} + \mathrm{FFT}\{O_1^*R\} + \mathrm{FFT}\{O_2R^*\} + \mathrm{FFT}\{O_2^*R\} \tag{1}$$

$$C(u,v) = \frac{\sum_{i=1}^{M}\sum_{j=1}^{N}\left[I_1(x_i,y_j)-\bar{I}_1(x_i,y_j)\right]\left[I_2(x_i+u,y_j+v)-\bar{I}_2(x_i+u,y_j+v)\right]}{\sqrt{\sum_{i=1}^{M}\sum_{j=1}^{N}\left\{\left[I_1(x_i,y_j)-\bar{I}_1(x_i,y_j)\right]\left[I_2(x_i+u,y_j+v)-\bar{I}_2(x_i+u,y_j+v)\right]\right\}^2}} \tag{2}$$

$$\begin{bmatrix} X_2 \\ Y_2 \\ 1 \end{bmatrix} = \begin{bmatrix} a & b & c \\ d & e & f \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_1 \\ Y_1 \\ 1 \end{bmatrix} \tag{3}$$

$$\varphi_a(x,y) = \sum_{j=0}^{k-1} a_j Z_j(x,y) \tag{4}$$

$$O_{Fi} = A_i \exp\left[\,j\left(\varphi_{si}+\varphi_{ai}\right) + jk\left(\cos\alpha_i\,x + \cos\beta_i\,y\right)\right] \tag{5}$$

$$H_m = |O_{F1}+R|^2 + |O_{F2}+R|^2 + N \tag{6}$$
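Equations (5) and (6) can be exercised numerically to confirm the frequency-division multiplexing underlying the one-shot double-FOV recording: each object wave's illumination tilt places its +1 order at a distinct position in the hologram spectrum, as in Fig. 1(e). A minimal sketch with hypothetical parameters (unit amplitudes, no aberration, no noise):

```python
import numpy as np

N = 128
y, x = np.mgrid[0:N, 0:N] / N          # normalized image coordinates

# Hypothetical sample phases phi_s1, phi_s2; A_i = 1, phi_ai = 0, N = 0.
phi1 = 2.0 * np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.02)
phi2 = 1.5 * np.exp(-((x - 0.4) ** 2 + (y - 0.6) ** 2) / 0.03)
k1, k2 = (0, 30), (30, 0)              # carrier frequencies (cycles/frame)

O1 = np.exp(1j * (phi1 + 2 * np.pi * (k1[0] * x + k1[1] * y)))  # Eq. (5)
O2 = np.exp(1j * (phi2 + 2 * np.pi * (k2[0] * x + k2[1] * y)))
R = np.ones((N, N))                    # unit-amplitude plane reference

H = np.abs(O1 + R) ** 2 + np.abs(O2 + R) ** 2                   # Eq. (6)

# The two +1 orders land at separable spectral positions, so each FOV
# can be filtered and reconstructed independently.
S = np.fft.fftshift(np.fft.fft2(H))
c = N // 2
peak1 = np.abs(S[c + k1[1], c + k1[0]])   # +1 order of FOV1
peak2 = np.abs(S[c + k2[1], c + k2[0]])   # +1 order of FOV2
```

Both carrier peaks stand well above the spectral background while remaining below the DC term, mirroring the spectrum layout of the multiplexed hologram.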