Abstract

We present a precise phase aberration compensation method for digital holographic microscopy (DHM) based on hologram inpainting with a two-stage generative adversarial network (GAN). In the proposed method, the interference fringes of the sample area in the hologram are first removed by background segmentation via edge detection and morphological image processing. The vacant area is then inpainted with fringes generated by a deep learning algorithm, yielding a sample-free reference hologram that contains the total aberration of the system. The phase aberrations can be removed by subtracting the unwrapped phase of the inpainted sample-free hologram from that of the raw sample hologram, without any complex spectrum centering procedure, prior knowledge of the system, or manual intervention. With full and proper training of the two-stage GAN, our approach robustly produces a clean phase map, overcoming the drawbacks of multiple iterations, noise interference, or limited field of view that affect recent methods based on self-extension, Zernike polynomial fitting (ZPF), or geometrical transformations. The validity of the proposed procedure is confirmed by measuring the surface of a preprocessed silicon wafer with a Michelson-interferometer digital holographic inspection platform. The experimental results demonstrate the feasibility and accuracy of the presented method. Additionally, this work paves the way for further applications of GANs in DHM.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

DHM is a non-invasive, interferometric quantitative phase imaging (QPI) technique for reconstructing phase maps. It is widely used in areas such as micro-nano structure characterization [1,2], microfluidics [3,4], laser evaluation and laser optics [5,6], and micro-electro-mechanical systems (MEMS) [7,8]. Although widely used off-axis DHM can prevent overlap between the virtual and real images, aberrations are inevitably introduced, consisting mainly of the tilt phase produced by the off-axis angle and the spherical phase curvature induced by the microscope objectives (MO). These aberrations distort the numerical reconstruction of the phase information and prevent accurate measurement, so they must be removed to obtain the phase caused by the sample alone.

To compensate these aberrations, many methods, which can generally be categorized as physical or numerical, have been proposed in recent years. Physical techniques [9–20] are typically realized by means of an electrically tunable lens [9], common-path DHM configurations [10–14], telecentric configurations [15–18], or the use of the same objective lens [19,20]. Nevertheless, precise alignment of the optical elements is tedious and challenging in practice, and more expensive and complex components are required. Numerical methods, by comparison, can perform the compensation without additional devices or a second hologram. To date, numerical methods [21–36] have been implemented during the digital reconstruction by applying a digital phase mask containing the distortion of the system, which can be generated either by numerical fitting with functions and polynomials [21–33] or from a synthetic sample-free reference hologram [34–36]. Among the numerical fitting procedures, spectral analysis has been applied to obtain the parameters of the phase aberration model with a parabolic function or Zernike polynomial fitting (ZPF) [21–23], but the sampling interval of the frequency domain after the fast Fourier transform (FFT) can degrade the accuracy. Principal component analysis (PCA) has also been applied to obtain the phase aberration map using parabolic fitting [24]; however, it does not compensate all the distorted regions of the phase distribution. Among the other fitting-based methods, least-squares fitting [25–29] and ZPF [30–33] are conventionally employed. However, least-squares fitting methods either require manual selection of a known flat area [25–27] or are restricted to thin samples whose phase is negligible compared with the aberrations, so that the whole field can be treated as phase aberration [28,29]. Fitting the background after removing the sample by deep learning [30,31] or traditional image processing [32,33] becomes more practicable when the sample is relatively thick. However, these methods [30–33] still rely on traditional procedures such as ZPF, whose accuracy can decline because of their sensitivity to phase variations. To further increase the accuracy, many researchers have begun to use a synthetic sample-free hologram (i.e., a hologram containing all the aberrations but excluding the sample phase) as a reference to obtain precise distortion parameters for the digital phase mask. The sample-free hologram is mainly obtained by wavefront folding [34], self-extension [35], or geometrical transformations [36]. The wavefront-folding method [34] can compensate almost all phase aberrations by self-referencing a single sample hologram; however, it is often difficult to select sample-free reference areas because the sample background is not always flat. The self-extension method [35] can remove all distortion of the system by filling in the mutilated hologram, but it cannot fill large vacant areas. The approach based on geometrical transformations [36] can also restore the true phase information from a partial sample-free reference hologram generated by reflection, rotation, and transposition of the original hologram, but the field of view is limited to locally small regions of the observed area. Therefore, a robust sample-free reference hologram-based method that removes all aberrations over the full field of view is still lacking.

Here, we propose to combine background segmentation (BS) with image inpainting to compensate the total phase aberrations in DHM without complex spectrum-centering procedures, prior knowledge of the setup, or manual intervention. BS first removes the sample fringes from the hologram via edge detection and morphological image processing. The vacant area is then inpainted by a deep learning algorithm based on a two-stage GAN. Total aberration compensation of the system is finally achieved through the training of the proposed network. The experimental results indicate that our method has a powerful denoising ability and that its accuracy is higher than that of the numerical ZPF and BS + ZPF methods.

2. Material and methods

2.1 Experimental system and method

Figure 1(a) depicts the schematic diagram of the experimental system, a Michelson interferometer for off-axis digital holographic microscopy. Off-axis digital holography requires illuminating the sample with coherent light. A laser beam with a wavelength of λ = 632.8 nm is divided into two beams by a beam splitter (BSL): one beam, the reference wave R(x, y), is expanded by a 10× MO (NA = 0.25) and reflected by the mirror, while the other beam, the object wave, is expanded by an identical MO. The object wave O(x, y) is then focused onto the sample, reflected from it [Fig. 1(b)], and interferes with the reference wave at the exit of the interferometer. The two interfering beams are collected by the focusing lens and recorded by a CCD camera (DMK 23G445, 1280 (H) × 960 (V) pixels, pixel size 3.75 µm × 3.75 µm). The sample used in our experiment is a monocrystalline silicon wafer whose surface has been etched into many different shapes. The wafer was first soaked in a 7:3 solution of hydrogen peroxide and sulfuric acid, rinsed with deionized water, and dried with a nitrogen gun. The cleaned wafer was then placed in HMDS vapor and cooled. After that, a 1.6-µm-thick AZ5214 photoresist was spin-coated on one side of the wafer and soft-baked for 1.5 minutes. After mask alignment, the wafer was patterned, exposed for 6.5 s ($10^{-2}$ J/cm2), and developed for 90 s. Finally, the samples were etched by deep reactive ion etching (DRIE) using STS equipment. To improve the sample's reflectivity, a thin aluminum film was deposited on the surface of the etched sample. The etched samples with different shapes are shown in the dashed boxes in Fig. 1(b).

Fig. 1. Experimental system: (a) experimental setup; (b) experimental sample.

The interference fringes (the raw sample hologram), which encode the amplitude and phase information of the object's optical wavefront, are recorded by the CCD. The complex amplitude of the object wave is described by:

$$O({x,y} )= {A_o}({x,y} )\exp [{i{\varphi_o}({x,y} )} ], $$
where x and y represent the spatial coordinates in the holographic recording plane; ${A_o}({x,y} )$ is the amplitude distribution of the object wave; and ${\varphi _o}({x,y} )$ is its phase distribution.
$$R({x,y} )= {A_R}({x,y} )\exp [{i{\varphi_R}({x,y} )} ], $$
is the complex amplitude of the reference wave with real amplitude ${A_R}({x,y} )$ and phase ${\varphi _R}({x,y} )$.

The interference of the two waves on the surface of the recording device forms a hologram, which takes the form of a pattern of fringes. The intensity of the raw hologram $H({x,y} )$ can be expressed as:

$$H({x,y} )= {|{O + R} |^2} = {|O |^2} + {|R |^2} + R{O^\ast } + {R^\ast }O, $$
where $R{O^\ast }$ and ${R^\ast }O$ are the interference terms, with ${R^\ast }$ and ${O^\ast }$ denoting the complex conjugates of the two waves. The virtual image is obtained from the +1-order spectrum of the Fourier transform of $H({x,y} )$ by the angular spectrum algorithm [37]:
$$ {H_{ + 1\textrm{FP}}}({x,y} )= {R^\ast }O = |R||O|\exp [{i\varphi (x,y)} ]\exp [{iT(x,y)} ]\exp [{iP(x,y)} ]\exp [{iOT(x,y)} ]. $$

The unwrapped phase obtained from ${H_{ + 1\textrm{FP}}}({x,y} )$ can be expressed as:

$${\Phi _1}(x,y) = \varphi (x,y) + A(x,y), $$
where $\varphi (x,y)$ and $A(x,y)$ denote the sample phase and the total phase aberrations, respectively. Here, $A(x,y)$ includes the tilt aberration $T(x,y)$, the parabolic aberration $P(x,y)$, and other aberrations $OT(x,y)$.
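For concreteness, a minimal NumPy sketch of this +1-order filtering and angular spectrum reconstruction is given below; the crop position, crop size, and propagation distance are illustrative assumptions rather than values from our setup, and the hologram is assumed square.

```python
import numpy as np

def reconstruct_plus1(H, center, half, wavelength, pitch, z):
    """Crop the +1-order spectrum of a square off-axis hologram H and
    propagate it with the angular spectrum kernel; returns the wrapped phase."""
    N = H.shape[0]
    F = np.fft.fftshift(np.fft.fft2(H))
    cy, cx = center                                     # +1-order carrier position
    crop = F[cy - half:cy + half, cx - half:cx + half]  # R*O spectrum, recentered
    df = 1.0 / (N * pitch)                              # frequency spacing of the grid
    f = (np.arange(2 * half) - half) * df               # baseband frequencies of the crop
    FX, FY = np.meshgrid(f, f)
    arg = np.maximum(1.0 / wavelength**2 - FX**2 - FY**2, 0.0)
    kernel = np.exp(2j * np.pi * z * np.sqrt(arg))      # angular spectrum transfer function
    field = np.fft.ifft2(np.fft.ifftshift(crop * kernel))
    return np.angle(field)                              # wrapped phase of R*O
```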

Under the modulation of the mirror, the curvature of the reference wave is kept as close as possible to that of the object wave. During the experiment, however, the curvatures of the two waves cannot be guaranteed to be entirely consistent, and a small angle is introduced between the object and reference waves by tilting the mirror in off-axis holography, so a subsequent numerical compensation process is required.

After wavefront recording, the phase and amplitude information of the measured sample is stored in the form of a digital hologram (H) acquired in a single shot. In this experiment, the first task is the detection and separation of the sample areas of H. The hologram H was first reconstructed and unwrapped, which yields the unwrapped phase map (${\Phi _1}$). To facilitate the gradient calculation for edge detection, ${\Phi _1}$ was fitted by ZPF to ensure sufficient contrast between the sample and background areas, and the gradient of the fitted map was detected using the Canny operator. After a series of morphological processing steps on the gradient map, the binary mask (M) was obtained. M was then applied to H, which gives a mutilated hologram with the sample removed $\left (\mathop H\limits^{1 - M} = H \odot (1 - M)\right )$. The crucial task is to intelligently patch the vacant areas (the original sample areas) with reference fringes, which, in our method, was performed by image inpainting based on the two-stage GAN. In this way, a sample-free reference hologram ($H_0$ in Fig. 2) containing all aberrations of the system is obtained. $H_0$ is reconstructed and unwrapped to obtain ${\Phi _0}$, which can be expressed as:

$${\Phi _0}(x,y) = A(x,y) = T(x,y) + P(x,y) + OT(x,y). $$

The true phase information of the sample ($\varphi$ in Fig. 2) is given by:

$$\varphi (x,y) = {\Phi _1}(x,y) - {\Phi _0}(x,y). $$
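In code, Eq. (7) is a single subtraction of two unwrapped phase maps. The sketch below uses scikit-image's unwrapper as a stand-in for the branch-cut algorithm [50] applied in Section 3.2; `wrapped_sample` and `wrapped_reference` would come from a reconstruction routine such as the one sketched above.

```python
from skimage.restoration import unwrap_phase

def compensate(wrapped_sample, wrapped_reference):
    """Eq. (7): subtract the unwrapped sample-free phase (Phi_0) from the
    unwrapped raw-hologram phase (Phi_1) to isolate the sample phase."""
    phi1 = unwrap_phase(wrapped_sample)      # Phi_1: sample phase + aberrations
    phi0 = unwrap_phase(wrapped_reference)   # Phi_0: aberrations only
    return phi1 - phi0
```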

The flow diagram of the algorithm is schematically illustrated in Fig. 2.

Fig. 2. Flow diagram for the aberration correction algorithm of our method (BS + Inpainting). BS: background segmentation; ⊙: the Hadamard product; GAN: generative adversarial network.

The BS procedure, based on edge detection and basic morphology in Fig. 2, is explained in detail in Fig. 3. The hologram H [Fig. 3(a)] was first reconstructed and unwrapped, yielding the unwrapped phase map [Fig. 3(b)]. Sufficient contrast between the sample and background areas must be ensured to facilitate the gradient calculation for edge detection; therefore, the unwrapped phase map was first fitted by ZPF [Fig. 3(c)], and the gradient of the fitted map [Fig. 3(d)] was then detected using the Canny operator. After a series of morphological processing operations [Figs. 3(d)–3(g)] on the gradient map, the initial mask M′ [Fig. 3(g)] was obtained (the sample region set to 1, shown in white; the background region set to 0, shown in black). To fully obliterate the sample fringes, the sample area in M′ was further expanded slightly by smoothing and re-binarizing around the sample's edges, resulting in the final mask M [Fig. 3(h)]. This step is necessary because it guarantees the removal of a relatively large area enclosing the sample region and eliminates possible errors associated with improper masking. For example, as shown in Figs. 3(i)–3(l), masking the hologram with M′ was compared with masking it with M: the residual sample fringes in Fig. 3(k), caused by the improper masking of M′, are eliminated entirely by the masking of M [Fig. 3(l)].
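A minimal sketch of this segmentation pipeline with scikit-image and SciPy, following the steps listed in Fig. 3; the Canny sigma, erosion radius, smoothing width, and re-binarization threshold are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import binary_fill_holes
from skimage.feature import canny
from skimage.filters import gaussian
from skimage.morphology import binary_erosion, disk
from skimage.segmentation import clear_border

def background_segmentation(phi_fitted):
    """Build the binary mask M from the ZPF-fitted unwrapped phase map."""
    edges = canny(phi_fitted, sigma=1.0)          # step 3: gradient + binarize
    filled = binary_fill_holes(edges)             # step 4: fill internal gaps
    cleaned = clear_border(filled)                # step 5: drop border-touching objects
    m_prime = binary_erosion(cleaned, disk(2))    # step 6: erode -> initial mask M'
    smooth = gaussian(m_prime.astype(float), sigma=5.0)
    return smooth > 0.1                           # step 7: smooth + re-binarize -> M

# The mutilated hologram is then the Hadamard product H * (1 - M).
```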

Fig. 3. The detailed steps of BS procedure. ① Reconstructing + unwrapping; ② ZPF fitting; ③ Calculating gradient and binarizing; ④ Filling the internal gap; ⑤ Deleting the connected objects on the boundary of image; ⑥ Eroding image; ⑦ Smoothing and re-binarizing. ⊙: the Hadamard product.

2.2 Workflow of the framework: (I) dataset acquisition, and (II) training process

2.2.1 (I) Dataset acquisition

Figure 4 shows the preparation and processing of the dataset. To train the network's ability to repair sample-free holograms, the dataset for inpainting the mutilated hologram consists of two parts: (1) masks of various shapes, and (2) sample-free holograms with varying background phase aberrations. The sample-free holograms were obtained experimentally by performing single-shot DHM on the sample background area and serve as the ground truth of the network; the masks are used to produce the mutilated holograms that serve as the network input. The two parts of the dataset were produced as follows.

MO1 and MO2 in the experimental DHM configuration [Fig. 4(a)] were purposely rotated, and the mirror of the system was tilted to different angles, to create 316 sample-free holograms with varying phase aberrations and fringes. The sample-free holograms were then augmented by rotation (90°, 180°, and 270°) and flipping (horizontal and vertical), enhancing part (2) of the dataset to 1896 sample-free hologram images, as shown in Fig. 4(b). To make the network more robust, we used 948 regular masks and 948 irregular masks, the latter drawn from a dataset combining 50 million human-drawn strokes [38], as part (1) of the dataset. The 948 regular masks were derived from the background-segmentation masks of 158 holograms of different samples using the same rotation (90°, 180°, and 270°) and flipping (horizontal and vertical) augmentation. All collected holograms were cropped to a standard size of 256 × 256 pixels.
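As a sketch, the sixfold augmentation (original, three rotations, two flips) can be written in NumPy; file I/O and cropping are omitted:

```python
import numpy as np

def augment_sixfold(img):
    """Return the six variants used for augmentation: the original image,
    rotations by 90/180/270 degrees, and horizontal/vertical flips."""
    return [img,
            np.rot90(img, 1), np.rot90(img, 2), np.rot90(img, 3),
            np.fliplr(img), np.flipud(img)]

# 316 captured sample-free holograms x 6 variants = 1896 training images
```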

2.2.2 (II) Training process: (1) data preprocessing and network architecture, and (2) training

Figure 5(b) presents the architecture of our two-stage GAN for the hologram inpainting task. The work was inspired by EdgeConnect [39], which uses an edge map as a prior to better fill in the missing regions; this makes it well suited to repairing fringe patterns such as the holograms of our experimental system, which contain many distinct edges.

(1) Data preprocessing and network architecture: The network for filling the mutilated hologram without sample areas has two stages: (1) an edge generator and (2) an image completion network. Let $G_1$ and $D_1$ be the generator and discriminator of Stage 1 (the edge generator), and $G_2$ and $D_2$ those of Stage 2 (the image completion network). Both stages follow an adversarial model [40], each comprising a generator/discriminator pair. Concretely, each generator is composed sequentially of an encoder, 8 residual blocks [41], and a decoder. The encoder performs two down-samplings while the decoder executes two up-samplings to return the image to its original size. In the residual layers, dilated convolutions with a dilation factor of 2 replace regular convolutions, yielding a receptive field of 205 at the last residual block. The discriminators employ the 70 × 70 PatchGAN architecture [42,43], which classifies overlapping 70 × 70 image patches as real or fake. Instance normalization [44] is applied across all layers of the network.
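A condensed PyTorch sketch of one generator and the PatchGAN discriminator follows; the channel widths and kernel sizes are illustrative assumptions (the exact settings follow EdgeConnect [39]):

```python
import torch.nn as nn

class ResBlock(nn.Module):
    """Residual block with dilation-2 convolutions, as in both generators."""
    def __init__(self, ch=256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=2, dilation=2),
            nn.InstanceNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=2, dilation=2),
            nn.InstanceNorm2d(ch))

    def forward(self, x):
        return x + self.body(x)

class Generator(nn.Module):
    """Encoder (2 down-samplings) -> 8 dilated residual blocks -> decoder."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        def down(i, o):
            return nn.Sequential(nn.Conv2d(i, o, 4, stride=2, padding=1),
                                 nn.InstanceNorm2d(o), nn.ReLU(inplace=True))
        def up(i, o):
            return nn.Sequential(nn.ConvTranspose2d(i, o, 4, stride=2, padding=1),
                                 nn.InstanceNorm2d(o), nn.ReLU(inplace=True))
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 7, padding=3), nn.ReLU(inplace=True),
            down(64, 128), down(128, 256),
            *[ResBlock(256) for _ in range(8)],
            up(256, 128), up(128, 64),
            nn.Conv2d(64, out_ch, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        return self.net(x)

class PatchDiscriminator(nn.Module):
    """70 x 70 PatchGAN: outputs a grid of real/fake scores, one per patch."""
    def __init__(self, in_ch):
        super().__init__()
        layers, c = [], in_ch
        for o, s in [(64, 2), (128, 2), (256, 2), (512, 1)]:
            layers += [nn.Conv2d(c, o, 4, stride=s, padding=1),
                       nn.LeakyReLU(0.2, inplace=True)]
            c = o
        layers += [nn.Conv2d(c, 1, 4, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)
```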

Figure 5(a) shows the preprocessing of training dataset 1 (ground-truth sample-free holograms, denoted ${H_{gt}}$) and training dataset 2 (masks, denoted M). The grayscale map and the edge map of ${H_{gt}}$ are represented by ${H_{\textrm{gray}}}$ and ${E_{gt}}$, respectively. Both ${H_{\textrm{gray}}}$ and ${E_{gt}}$ are processed with M to obtain the mutilated grayscale map $\left (\mathop {H_{\textrm{gray}}}\limits^{1 - M} = (1 - M) \odot {H_{\textrm{gray}}}\right )$ and the mutilated edge map $\left (\mathop {E_{gt}}\limits^{1 - M} = (1 - M) \odot {E_{gt}}\right )$. Here, ⊙ denotes the Hadamard product. M, $\mathop {H_{\textrm{gray}}}\limits^{1 - M}$, and $\mathop {E_{gt}}\limits^{1 - M}$ are the inputs of Stage 1. The edge map generated by $G_1$ in Stage 1 is defined as:

$${E_{{G_1}}} = {G_1}\left (M,\mathop {H_{\textrm{gray}}}\limits^{1 - M},\mathop {E_{gt}}\limits^{1 - M} \right ).$$

The inputs of the discriminator $D_1$ are ${E_{{G_1}}}$ and ${E_{gt}}$, conditioned on ${H_{\textrm{gray}}}$; $D_1$ predicts whether the generated edge map ${E_{{G_1}}}$ is real. The ground truth of Stage 1 is label 1 (${E_{gt}}$). The Stage 1 network is trained with an objective composed of a feature-matching loss [45] and adversarial loss 1:

$$\mathop {\min }\limits_{{G_1}} \mathop {\max }\limits_{{D_1}} {L_{{G_1}}} = \mathop {\min }\limits_{{G_1}} \left (\mathop {\max }\limits_{{D_1}} ({L_{\textrm{adv},1}}) + 10{L_{FM}}\right ). $$

The adversarial loss 1 is defined as

$${L_{\textrm{adv},1}} = {{\mathbb{E}}_{({E_{gt}},{H_{\textrm{gray}}})}}[{\log {D_1}({E_{gt}},{H_{\textrm{gray}}})} ]+ {{\mathbb{E}}_{{H_{\textrm{gray}}}}}\log [{1 - {D_1}({E_{{G_1}}},{H_{\textrm{gray}}})} ]. $$

where ${\mathbb{E}}$ denotes the expected value over the corresponding distribution. The feature-matching loss ${L_{FM}}$ is expressed as

$${L_{FM}} = {\mathbb{E}}\left[ {\sum\limits_{i = 1}^L {\frac{1}{{{N_i}}}} {{||{{D_1}^{(i)}({E_{gt}}) - {D_1}^{(i)}({E_{{G_1}}})} ||}_1}} \right], $$
where ${D_1}^{(i)}$ is the activation of the $i$th layer of the discriminator, ${N_i}$ is the number of elements in the $i$th activation layer, and L is the final convolution layer of the discriminator.
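A sketch of this feature-matching term, assuming the discriminator has been modified to also return its intermediate activations (lists `feats_real` and `feats_fake`, a hypothetical interface):

```python
import torch.nn.functional as F

def feature_matching_loss(feats_real, feats_fake):
    """L_FM of Eq. (11): per-layer mean L1 distance between discriminator
    activations for ground-truth and generated edge maps, summed over layers."""
    return sum(F.l1_loss(fake, real.detach())
               for real, fake in zip(feats_real, feats_fake))
```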

The inputs of Stage 2 are the mutilated holograms $\left (\mathop {H_{gt}}\limits^{1 - M} = (1 - M) \odot {H_{gt}}\right )$, conditioned on the filled edge maps $\left ({E_{\textrm{filled}}} = \mathop {E_{gt}}\limits^{1 - M} + {E_{{G_1}}} \odot M\right )$, where ${E_{\textrm{filled}}}$ consists of the edges generated by $G_1$ in the missing areas (${E_{{G_1}}} \odot M$) and the background area of the ground-truth edge map $\left (\mathop {E_{gt}}\limits^{1 - M}\right )$. The Stage 2 network returns a predicted hologram image (${H_{{G_2}}}$) with the corrupted areas filled in, which has the same dimensions and channels as the input image:

$${H_{{G_2}}} = {G_2}\left (\mathop {H_{gt}}\limits^{1 - M},{E_{\textrm{filled}}}\right). $$

The ground truth of Stage 2 is label 2 (${H_{gt}}$). Stage 2 is trained against a joint loss that consists of an adversarial loss, an ${\ell _1}$ loss, a style loss, and a perceptual loss. Appropriate scaling is guaranteed by normalizing the ${\ell _1}$ loss by the mask size. Adversarial loss 2 is defined analogously to adversarial loss 1 in Eq. (10):

$${L_{\textrm{adv},2}} = {{\mathbb{E}}_{({H_{gt}},{E_{\textrm{filled}}})}}[{\log {D_2}({H_{gt}},{E_{\textrm{filled}}})} ]+ {{\mathbb{E}}_{{E_{\textrm{filled}}}}}\log [{1 - {D_2}({H_{{G_2}}},{E_{\textrm{filled}}})} ]. $$

Perceptual loss is represented as

$${L_{\textrm{perc}}} = {\mathbb{E}}\left[ {\sum\limits_i {\frac{1}{{{N_i}}}{{\left\| {{\phi_i}({H_{gt}}) - {\phi_i}({H_{{G_2}}})} \right\|}_1}} } \right], $$
where ${\phi _i}$ denotes the activation maps of layers relu1_1, relu2_1, relu3_1, relu4_1, and relu5_1 of a VGG-19 network pretrained on the ImageNet dataset [46]. For feature maps of size ${C_j} \times {H_j} \times {W_j}$, the style loss is calculated by
$${L_{\textrm{style}}} = {{\mathbb{E}}_j}\left[ {{{\left|\left|{G_j^\phi \left (\mathop {H_{G_2} }\limits^{1 - M}\right ) - G_j^\phi \left(\mathop {H_{gt}}\limits^{1 - M}\right )} \right|\right|}_1}} \right], $$
where $G_j^\phi $ is a ${C_j}{ \times }{C_j}$ Gram matrix built from activation maps ${\phi _j}$. Our joint loss function is
$${L_{{G_2}}} = {L_{{\ell _1}}} + 0.1{L_{\textrm{adv},2}} + 0.1{L_{\textrm{perc}}} + 250{L_{\textrm{style}}}. $$
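A sketch of this joint objective with the weights of Eq. (16); `vgg_feats` (pretrained VGG-19 activations) and `gram` (Gram matrix of a feature map) are assumed helper functions, not part of the paper's code:

```python
import torch
import torch.nn.functional as F

def stage2_loss(h_pred, h_gt, mask, d_fake_logits, vgg_feats, gram):
    """Eq. (16): L1 (normalized by mask size) + 0.1*adversarial
    + 0.1*perceptual + 250*style."""
    l1 = F.l1_loss(h_pred, h_gt, reduction='sum') / mask.sum().clamp(min=1)
    adv = F.binary_cross_entropy_with_logits(          # non-saturating generator term
        d_fake_logits, torch.ones_like(d_fake_logits))
    feats_p, feats_t = vgg_feats(h_pred), vgg_feats(h_gt)
    perc = sum(F.l1_loss(p, t) for p, t in zip(feats_p, feats_t))
    style = sum(F.l1_loss(gram(p), gram(t)) for p, t in zip(feats_p, feats_t))
    return l1 + 0.1 * adv + 0.1 * perc + 250 * style
```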

(2) Training: The training is divided into three steps: (1) Step 1, training the edge model; (2) Step 2, training the inpainting model; and (3) Step 3, training the joint model. In Steps 1–2, the generators $G_1$ and $G_2$ were trained separately using the edge maps (${E_{gt}}$) obtained by the Canny detector (σ = 1) at a learning rate of $10^{-4}$, where σ is the standard deviation of the Gaussian smoothing filter and controls the sensitivity of the edge detector. When an image contains little edge information, a large value is usually assigned to σ; for example, σ = 2 is used for inpainting face datasets [39]. Conversely, when the image is full of edge features, as a hologram is, σ should be smaller. In this experiment, after many trials and optimization, an optimal value of σ = 1 was chosen. In Step 3, the networks were first fine-tuned by removing $D_1$, followed by end-to-end training of $G_1$ and $G_2$ at a learning rate of $10^{-5}$. Note that the input prior of $G_2$ in Step 2 is the Canny edge map (label 1, ${E_{gt}}$), whereas in Step 3 it is the generated edge map (${E_{{G_1}}}$); Step 3 can therefore be regarded as a fine-tuning of Step 2. A batch size of 6 was used to improve the training speed. As 0.5 million iterations were run in each training step and each epoch contains 316 iterations of training data, about 1582 epochs were trained in each step. The network was updated using the Adam optimizer [47] with ${\beta _1} = 0$ and ${\beta _2} = 0.9$. We implemented the network in a virtual environment created with Anaconda on Ubuntu 16.04, configured with Python 3.6, PyTorch 1.0.0, Cudatoolkit 9.0, and TorchVision 0.2.1, on a host with an NVIDIA GeForce RTX 2070 SUPER GPU, an Intel Core i5 9400F CPU (2.90 GHz), 16 GB of RAM, and a 650 W power supply. Figures 6(a)–6(d) show the precision of $G_1$ in Step 1, the mean absolute error (MAE) of $G_2$ in Step 2, and the generator and discriminator losses in Step 3, respectively. The MAE in Step 2 is calculated as:

$$\textrm{MAE} = \frac{1}{{H \times W}}\sum\limits_{i = 1}^H {} \sum\limits_{j = 1}^W {} |{{X_{\textrm{ground-truth}}}(i,j) - {X_{\textrm{predicted}}}(i,j)} |, $$
where ${X_{\textrm{ground-truth}}}(i,j)$, ${X_{\textrm{predicted}}}(i,j)$ represent the ground-truth and prediction results of the hologram, respectively, and W, H are the width and height of the hologram, respectively.
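For reference, a sketch of the edge-map extraction and optimizer configuration described above; scikit-image's Canny stands in for the edge detector, and `Generator` refers to the earlier architecture sketch:

```python
import torch
from skimage.feature import canny

def edge_map(gray):
    """Ground-truth edge map E_gt: Canny with sigma = 1, suited to edge-rich
    holograms (sigma = 2 is more typical for face datasets [39])."""
    return canny(gray, sigma=1.0).astype('float32')

g1 = Generator(in_ch=3, out_ch=1)   # inputs: M, masked H_gray, masked E_gt
opt_g1 = torch.optim.Adam(g1.parameters(), lr=1e-4, betas=(0.0, 0.9))
# Step 3: remove D1, rebuild the optimizers with lr=1e-5, train G1 + G2 end to end.
```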

As can be seen from Fig. 6, the precision of $G_1$ approaches 0.9, indicating that $G_1$ learns the edge information of the hologram well. The MAE of $G_2$ starts at a small value of 0.05 and decreases quickly during the first 100 epochs of training, which suggests that the Canny edge map (label 1) improves the learning ability of $G_2$ for the inpainting task. The generator and discriminator of a GAN have opposing objectives, that is, they compete against each other; accordingly, the generator and discriminator losses behave oppositely in Figs. 6(c) and 6(d), which indicates that the training proceeds without anomaly.

Fig. 4. Dataset acquisition of the experiment: (a) the experimental system and sample; (b) the process of dataset acquisition used to train the inpainting network model. Device adjustment procedure: tilt the mirror in (a) to different angles and rotate MO1 and MO2 in (a); data enhancement procedure: rotation (90°, 180°, and 270°) and flipping (horizontal and vertical). MO: microscope objective.

Fig. 5. Data preprocessing and network architecture: (a) data preprocessing; (b) network architecture. The inputs of $G_1$ are the mask (M), the mutilated grayscale hologram $\left (\mathop {H_{\textrm{gray}}}\limits^{1 - M}\right )$, and the mutilated edge map $\left (\mathop {E_{gt}}\limits^{1 - M}\right )$, which are used to predict the intact edge map. The predicted intact edge map (${E_{{G_1}}}$) and the mutilated hologram $\left (\mathop {H_{gt}}\limits^{1 - M} \right )$ are passed to $G_2$ to execute the inpainting task. The ground truths of Stage 1 and Stage 2 are label 1 (${E_{gt}}$) and label 2 (${H_{gt}}$), respectively. GAN: generative adversarial network.

Fig. 6. The training process: (a) precision of Step 1 (the edge model), (b) MAE of Step 2 (the inpainting model), (c) loss of generator 2 of Step 3 (the joint model) and (d) loss of discriminator 2 of Step 3 (the joint model). MAE: mean absolute error.

3. Results and discussion

3.1 Inpainting results of the trained model

The performance of the trained two-stage network model is evaluated both qualitatively and quantitatively. Figure 7 displays examples of images generated by our trained model; the original holograms, input mutilated holograms, generated edges, and predicted inpainting results are shown in rows (a)–(d). The dividing line between the vacant and existing areas in Fig. 7(c) was added for visualization purposes. Comparing rows (a) and (d) in Fig. 7 shows that our model produces realistic holograms whether the shape of the missing region is regular or not, and the structure of the fringes remains intact. It is worth noting that the second and sixth columns of Fig. 7 are not present in the training dataset, which means that our trained model generalizes. This excellent inpainting, reproducing fine details in the filled regions, benefits mainly from the specific two-stage architecture of the GAN: Stage 1 contains an edge generator that predicts the hologram texture of the vacant sample area as a prior for the filling task in Stage 2; Stage 2 then only needs to learn the gray-intensity distribution, without having to preserve the image structure, as the edge information is already present. This strategy of simplifying a complicated problem by breaking it into simpler sub-problems makes the two-stage GAN well suited to repairing fringe patterns such as the edge-rich holograms of our experimental system.

Fig. 7. The qualitative evaluation of inpainting results: (a) original hologram, (b) input mutilated hologram, (c) generated edges, (d) inpainting results without any post-processing.

The quality of our inpainting results is also assessed quantitatively using the following metrics: (1) MAE, (2) the structural similarity index (SSIM) [48], (3) the peak signal-to-noise ratio (PSNR), and (4) the feature similarity index (FSIM) [49]. MAE, PSNR, and SSIM are the most widely used objective measures of how faithfully the missing regions are reproduced. In addition, FSIM achieves much higher consistency with subjective evaluations than other state-of-the-art image quality assessment (IQA) metrics [49]. The expression for MAE was given in Eq. (17). PSNR is calculated as:

$$\textrm{PSNR} = 10{\log _{10}}\left (\frac{{{{({{2^n} - 1} )}^2}}}{{\textrm{MSE}}}\right ), $$
where n (= 8 in this evaluation) is the bit depth per pixel; the SSIM index is calculated with a window size of 8. The MSE in Eq. (18) is expressed as follows:
$$\textrm{MSE} = \frac{1}{{H \times W}}\sum\limits_{i = 1}^H {} \sum\limits_{j = 1}^W {} {({{X_{\textrm{ground-truth}}}(i,j) - {X_{\textrm{predicted}}}(i,j)} )^2}, $$
where ${X_{\textrm{ground-truth}}}(i,j)$ and ${X_{\textrm{predicted}}}(i,j)$ represent the ground truth and the prediction of the hologram, respectively, and W and H are the width and height of the hologram. The SSIM between our inpainting results (${H_{{G_2}}}$) and the original hologram (${H_{gt}}$), computed with a standard 11 × 11 window, is:
$$\textrm{SSIM} = \frac{{(2{\mu _{{H_{{G_2}}}}}{\mu _{{H_{gt}}}} + {c_1})(2{\sigma _{{H_{{G_2}}}{H_{gt}}}} + {c_2})}}{{(\mu _{{H_{{G_2}}}}^2 + \mu _{{H_{gt}}}^2 + {c_1})(\sigma _{{H_{{G_2}}}}^2 + \sigma _{{H_{gt}}}^2 + {c_2})}}, $$
where ${\mu _{{H_{{G_2}}}}}$ and ${\mu _{{H_{gt}}}}$ are the mean pixel values of ${H_{{G_2}}}$ and ${H_{gt}}$; $\sigma _{{H_{{G_2}}}}^2$ and $\sigma _{{H_{gt}}}^2$ are their variances; ${\sigma _{{H_{{G_2}}}{H_{gt}}}}$ is their covariance; ${c_1} = {(0.01L)^2}$ and ${c_2} = {(0.03L)^2}$ are constants that keep the fraction well defined; and L is the dynamic range of the pixel values. FSIM combines phase congruency (PC) and gradient magnitude (GM) to evaluate image quality. The FSIM value is expressed as follows [49]:
$$\textrm{FSIM} = \frac{{\sum\nolimits_{x \in \varOmega } {{S_L}(x) \cdot \textrm{P}{\textrm{C}_m}(x)} }}{{\sum\nolimits_{x \in \varOmega } {\textrm{P}{\textrm{C}_m}(x)} }}, $$
where ${S_L}(x)$ is the combined similarity of PC and GM, $\textrm{P}{\textrm{C}_m}(x)$ weights the importance of ${S_L}(x)$ in the overall similarity between the original images and the inpainting results, and x and Ω denote the position and the entire spatial domain of the image, respectively.
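A sketch of the MAE/PSNR/SSIM evaluation with scikit-image (FSIM has no scikit-image implementation, so a third-party routine would be needed); 8-bit images are assumed:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(pred, gt):
    """MAE, PSNR, and SSIM between a predicted hologram and its ground truth,
    both given as uint8 arrays (bit depth n = 8, so the peak value is 255)."""
    mae = np.mean(np.abs(pred.astype(float) - gt.astype(float)))
    psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
    ssim = structural_similarity(gt, pred, data_range=255, win_size=11)
    return mae, psnr, ssim
```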

Five groups of inpainting results, each containing 20 images, were tested against the corresponding ground-truth labels. To test the generalization of the network, each group includes 4 images from outside the training dataset. Table 1 shows the mean metric values of each group, indicating the degree of image distortion. The larger the PSNR, SSIM, and FSIM values and the smaller the MAE, the better our algorithm reproduces the filled regions. A PSNR value of 30–40 usually indicates good image quality, and the maximum value of SSIM and FSIM is 1 (FSIM = SSIM = 1 means the two pictures are identical). As can be seen from Table 1, the PSNR value of each group is above 30, and the SSIM and FSIM values of each group are close to the maximum of 1, which implies that our trained model predicts robust holograms (${H_{{G_2}}}$).

Table 1. Quantitative Evaluation Metrics

3.2 Compensation result of phase aberrations

Figure 8 illustrates the reconstruction process of our algorithm (BS + Inpainting) for numerical phase aberration compensation. Figure 8(a) shows the hologram of the sample used for our test, on which BS was executed, yielding the binary mask [Fig. 8(b)]. The mutilated hologram [Fig. 8(c)] was obtained by removing the sample areas of Fig. 8(a) using the binary mask [Fig. 8(b)]. Comparing Figs. 8(a) and 8(c) shows that our BS algorithm completely removes the fringes in the sample area, facilitating the subsequent image inpainting. Figure 8(d) displays the inpainting prediction of our trained network model with Figs. 8(b) and 8(c) as inputs. Figures 8(e) and 8(f) are the unwrapped phase maps of Figs. 8(a) and 8(d) after reconstruction and unwrapping, respectively; the angular spectrum algorithm and the branch-cut algorithm [50] were employed for phase reconstruction and phase unwrapping. The system distortion and the sample phase information are both contained in Fig. 8(e), whereas Fig. 8(f) contains the system distortion only. The distortion of the system can thus be removed by subtracting Fig. 8(f) from Fig. 8(e), revealing the accurate phase information of the sample.

Fig. 8. The results of each step in our algorithm: (a) the raw hologram containing phase aberrations and the sample phase; (b) the binary mask obtained by background segmentation; (c) the mutilated hologram with the sample areas removed by (b); (d) the prediction of our trained network with (b) and (c) as inputs; (e) the reconstruction and unwrapping result of (a), where the dotted outline encloses the sample phase; (f) the reconstruction and unwrapping result of (d).

Figure 9 compares the compensation results of the ZPF, BS + ZPF, and proposed methods. Numerical fitting procedures such as ZPF are often sensitive to noise and prone to introducing errors into the sample phase. As shown in Fig. 9(f), the ZPF method bends both the background and the sample. In the BS + ZPF method, the aberration phase is fitted by ZPF on the background of the unwrapped map after the sample has been removed by BS. This method is still sensitive to noise and does not compensate all the distorted regions of the phase distribution, leading to a rough background and inaccurate sample morphology, as shown in Fig. 9(d). Our inpainting-based algorithm does not use numerical fitting to accomplish the aberration compensation task, so this drawback is avoided. The purpose of DHM is generally to measure the phase of the sample and visualize it for further analysis without background noise, so a flat background phase map is essential for correct analysis. The 3D result in Fig. 9(b) shows that the compensated result of our algorithm has a flatter background, demonstrating its remarkable denoising effect. To evaluate the relative step height of the sample and the texture of the sample area, we compare the heights along the same line in the results of the three algorithms.

Fig. 9. Aberration compensation results of the experimental sample: (a) the result obtained by the proposed method; (b) the 3D rendering of (a); (c) the result obtained by the BS + ZPF; (d) the 3D rendering of (c); (e) the result obtained by the ZPF; (f) the 3D rendering of (e). The black lines mark the intersectional lines for cross-section profiling of the sample.

The same sample was also measured by confocal laser scanning microscopy (LEXT OLS4100; Olympus, Shinjuku-ku, Tokyo, Japan), whose accurate 3D measurement serves as the reference for our verification. Figure 10 shows a partial view of the sample under the LEXT OLS4100.

Fig. 10. Sample measurement by the 3D measuring laser microscope: (a) local top view of the sample; (b) 3D view of the area in the red box of (a). The blue line marks the corresponding cross-section for profiling.

Figure 11 plots the cross-sectional profiles of the sample obtained by the ZPF, BS + ZPF, and proposed methods and by the LEXT OLS4100. The result acquired by the proposed method is clearly more consistent with that of the LEXT in terms of microstructure height and has a very flat background region. Furthermore, the profiles measured by both the ZPF and BS + ZPF methods show a significant height deviation of about 30 nm. These experimental results indicate that, compared with the proposed method, traditional numerical fitting procedures are susceptible to noise, which leads to a non-flat background and under-estimated height measurements. The proposed algorithm, in remarkable agreement with the LEXT OLS4100, is more accurate and has a strong denoising ability, which is promising for quantitative phase imaging in digital holographic microscopy.

Fig. 11. Cross-sectional profiling of the sample: (a) phase profiles measured by the ZPF, the BS + ZPF, and the proposed method; (b) the phase profile measured by the LEXT OLS4100.

Toward the end of this section, a few additional points are provided to further clarify the above discussions:

Discussion 1: As shown in Fig. 10, the noise on top of the investigated sample has a texture highly similar to, but an amplitude larger than, the removed coherence-induced background noise. This phenomenon is probably due to limitations of the sample fabrication process. As mentioned before, the top surface of the sample structure and the background surface are fabricated with different microfabrication techniques and may therefore have their own surface characteristics. The GAN learns the background features well: given proper training, it can precisely predict the phase aberration of the system and the background noise (i.e., the coherence noise), because the training dataset is built from the background features. It should be kept in mind, however, that the specific manufacturing process of the sample's top surface may introduce its own roughness. Furthermore, owing to the coherent nature of the laser, the digital holograms of the sample top surface can be corrupted by coherent noise that differs from the background noise, so the algorithm may fail to give a proper noise prediction for the sample top surface. Therefore, special care must be taken for samples whose surface noise is not identical to the background, and extra coherent-noise suppression may be advisable for high-resolution 3D imaging of such samples.

Discussion 2: Comparing the sample step measurements obtained by our method and by the LEXT OLS4100 in Fig. 10, the sample height measured in DHM is lower than that of the OLS4100. This discrepancy is probably caused by the measurement uncertainty of the confocal laser scan. The aluminum film deposited on the sample may increase its transparency, resulting in relatively large uncertainty in the confocal laser scan measurement. On the other hand, the extra degree of transparency only slightly influences our DHM method, within the uncertainty of the coherence-induced noise, owing to the phase background subtraction technique. In addition, the vertical resolution (about 10 nm) and the step-measurement limitations of the confocal laser scan could also contribute to the measurement difference. Our DHM method thus demonstrates better performance than the confocal laser scan technique in measuring step-like samples.

Discussion 3: It is worth pointing out that the proposed method performs better in many application scenarios than the currently widely used physical methods in DHM, even though, at present, the method may not fully meet the regulations and requirements of standard non-contact detection techniques. For example, as a typical physical method, the double-exposure method demands not only tedious and laborious optical configuration but also a flat, blank region in proximity to the sample to remove the system's distortion. In measurements of densely structured samples, e.g., a dense pap smear sample [51], such physical methods may become inappropriate. In these situations, our deep learning-based approach has the potential to outperform traditional physical methods.

Discussion 4: The maximum object coverage [the area ratio of the object to the microscope's entire field of view (FOV)] permissible for valid inpainting by the proposed GAN has also been investigated. Our study reveals that FSIM and SSIM decrease steadily as the object occupancy increases. For the specific samples tested in this paper, when the object coverage exceeds 40%, the FSIM and SSIM of the inpainting results both drop below 0.9, and visibly poor inpainting is observed, indicating failure of the inpainting task and a consequent deterioration of the DHM measurement. Therefore, to ensure good inpainting quality and thus accurate 3D imaging, a threshold should be set on the object coverage in real DHM applications; a value of about 40% is recommended for the samples tested in this paper. This threshold may vary for different testing configurations: our experiments suggest that a relatively higher percentage of removed object area can be tolerated when the interference fringes of the hologram are more distinguishable or sparser, probably because distinguishable or sparse fringes facilitate the generation of the edge map in Stage 1 of the GAN, which in turn helps Stage 2 complete the inpainting task. Strict experimental conditions should therefore be maintained to ensure a high-quality hologram, which favors feature extraction by the proposed GAN.
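In practice, such a threshold reduces to a one-line check on the binary mask M; the 40% value is the one suggested by our tests and may vary with the configuration:

```python
def coverage_ok(mask, threshold=0.40):
    """Accept a hologram for inpainting only if the object coverage
    (fraction of the FOV occupied by the mask) is below the empirical limit."""
    return float(mask.mean()) <= threshold
```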

4. Conclusion

To summarize, we have presented and validated a novel method that combines background segmentation with inpainting (BS + Inpainting) to accurately compensate the phase aberrations in a DHM system. By inpainting the mutilated hologram without sample areas via a trained two-stage GAN, all system aberrations, including spherical and tilt aberrations, are obtained and compensated. The experimental results show that the trained model can repair a variety of mutilated holograms with multiple vacancies and fringe patterns, and that the proposed method outperforms the ZPF and BS + ZPF methods. Moreover, its results are nearly identical to those measured by the 3D measuring laser microscope. Our BS + Inpainting algorithm is thus a valid and promising candidate for accurate phase measurement and denoising in DHM systems.

Funding

National Natural Science Foundation of China (12072070, 51505076); Fundamental Research Funds for the Central Universities (140304010, 180304016); Natural Science Foundation of Liaoning Province (2015020105).

Acknowledgments

We would like to thank Jiasixuan Company in Suzhou, China for their assistance on specimen fabrication.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. G. Coppola, S. De Nicola, P. Ferraro, A. Finizio, S. Grilli, M. Iodice, C. Magro, and G. Pierattini, “Characterization of MEMS structures by microscopic digital holography,” in Mems/Moems: Advances in Photonic Communications, Sensing, Metrology, Packaging and Assembly, U. F. W. Behringer, B. Courtois, A. M. Khounsary, and D. G. Uttamchandani, eds. (SPIE, 2003), pp. 71–78.

2. P. Ferraro, G. Coppola, S. De Nicola, A. Finizio, S. Grilli, M. Iodice, C. Magro, and G. Pierattini, “Digital holography for characterization and testing of MEMS structures,” in IEEE/LEOS International Conference on Optical MEMs (IEEE, 2002), pp. 125–126.

3. F. Jolivet, F. Momey, L. Denis, L. Mees, N. Faure, N. Grosjean, F. Pinston, J.-L. Marie, and C. Fournier, “Regularized reconstruction of absorbing and phase objects from a single in-line hologram, application to fluid mechanics and micro-biology,” Opt. Express 26, 8923–8940 (2018).

4. V. P. Pandiyan and R. John, “Optofluidic bioimaging platform for quantitative phase imaging of lab on a chip devices using digital holographic microscopy,” Appl. Opt. 55, A54–A59 (2016).

5. J. Di, Y. Yu, Z. Wang, W. Qu, C. Y. Cheng, and J. Zhao, “Quantitative measurement of thermal lensing in diode-side-pumped Nd:YAG laser by use of digital holographic interferometry,” Opt. Express 24, 28185–28193 (2016).

6. S. Pan, J. Ma, R. Zhu, T. Ba, C. Zuo, F. Chen, J. Dou, C. Wei, and W. Zhou, “Real-time complex amplitude reconstruction method for beam quality M-2 factor measurement,” Opt. Express 25, 20142–20155 (2017).

7. T. Bourgade, J. Sun, Z. Wang, R. Elsa, and A. Asundi, “Compact lens-less digital holographic microscope for MEMS inspection and characterization,” JOVE (2016).

8. G. Coppola, P. Ferraro, M. Iodice, S. De Nicola, A. Finizio, and S. Grilli, “A digital holographic microscope for complete characterization of microelectromechanical systems,” Meas. Sci. Technol. 15, 529–539 (2004).

9. D. Deng, J. Peng, W. Qu, Y. Wu, X. Liu, W. He, and X. Peng, “Simple and flexible phase compensation for digital holographic microscopy with electrically tunable lens,” Appl. Opt. 56, 6007–6014 (2017).

10. W. Qu, Y. Yu, C. O. Choo, and A. Asundi, “Digital holographic microscopy with physical phase compensation,” Opt. Lett. 34, 1276–1278 (2009).

11. J. Jang, C. Y. Bae, J.-K. Park, and J. C. Ye, “Self-reference quantitative phase microscopy for microfluidic devices,” Opt. Lett. 35, 514–516 (2010).

12. N. T. Shaked, “Quantitative phase microscopy of biological samples using a portable interferometer,” Opt. Lett. 37, 2016–2018 (2012).

13. A. Singh, A. Anand, R. A. Leitgeb, and B. Javidi, “Lateral shearing digital holographic imaging of small biological specimens,” Opt. Express 20, 23617–23622 (2012).

14. B. Kemper, A. Vollmer, C. E. Rommel, J. Schnekenburger, and G. V. Bally, “Simplified approach for quantitative digital holographic phase contrast imaging of living cells,” J. Biomed. Opt. 16, 026014 (2011).

15. E. Sánchez-Ortiga, P. Ferraro, M. Martínez-Corral, G. Saavedra, and A. Doblas, “Digital holographic microscopy with pure-optical spherical phase compensation,” J. Opt. Soc. Am. A 28, 1410–1417 (2011).

16. A. Doblas, E. Sánchez-Ortiga, M. Martínez-Corral, G. Saavedra, and J. Garcia-Sucerquia, “Accurate single-shot quantitative phase imaging of biological specimens with telecentric digital holographic microscopy,” J. Biomed. Opt. 19, 046022 (2014).

17. C. Trujillo, R. Castaeda, P. Piedrahita-Quintero, and J. Garcia-Sucerquia, “Automatic full compensation of quantitative phase imaging in off-axis digital holographic microscopy,” Appl. Opt. 55, 10299 (2016).

18. R. Castañeda and J. Garcia-Sucerquia, “Single-shot 3D topography of reflective samples with digital holographic microscopy,” Appl. Opt. 57, A12 (2018).

19. W. Qu, C. O. Choo, V. R. Singh, Y. Yu, and A. Asundi, “Quasi-physical phase compensation in digital holographic microscopy,” J. Opt. Soc. Am. A 26, 2005–2011 (2009).

20. C. J. Mann, L. Yu, C. M. Lo, and M. K. Kim, “High-resolution quantitative phase-contrast microscopy by digital holography,” Opt. Express 13, 8693–8698 (2005).

21. H. Cui, D. Wang, Y. Wang, Z. Jie, and Y. Zhang, “Phase aberration compensation by spectrum centering in digital holographic microscopy,” Opt. Commun. 284, 4152–4155 (2011).

22. S. Liu, W. Xiao, and F. Pan, “Automatic compensation of phase aberrations in digital holographic microscopy for living cells investigation by using spectral energy analysis,” Opt. Laser Technol. 57, 169–174 (2014).

23. J. Min, B. Yao, S. Ketelhut, C. Engwer, and B. Greve, “Simple and fast spectral domain algorithm for quantitative phase imaging of living cells with digital holographic microscopy,” Opt. Lett. 42, 227–230 (2017).

24. C. Zuo, Q. Chen, W. Qu, and A. Asundi, “Phase aberration compensation in digital holographic microscopy based on principal component analysis,” Opt. Lett. 38, 1724–1726 (2013).

25. T. Colomb, E. Cuche, F. Charriere, J. Kuhn, N. Aspert, F. Montfort, P. Marquet, and C. Depeursinge, “Automatic procedure for aberration compensation in digital holographic microscopy and applications to specimen shape compensation,” Appl. Opt. 45, 851–863 (2006).

26. X. Lai, S. Xiao, Y. Ge, K. Wei, and K. Wu, “Digital holographic phase imaging with aberrations totally compensated,” Biomed. Opt. Express 10, 283–292 (2019).

27. T. Colomb, F. Montfort, J. Kühn, N. Aspert, and C. Depeursinge, “Numerical parametric lens for shifting, magnification, and complete aberration compensation in digital holographic microscopy,” J. Opt. Soc. Am. A 23, 3177–3190 (2006).

28. J. Di, J. Zhao, W. Sun, H. Jiang, and X. Yan, “Phase aberration compensation of digital holographic microscopy based on least squares surface fitting,” Opt. Commun. 282, 3873–3877 (2009).

29. L. Miccio, D. Alfieri, S. Grilli, P. Ferraro, A. Finizio, L. D. Petrocellis, and S. D. Nicola, “Direct full compensation of the aberrations in quantitative phase microscopy of thin objects by a single digital hologram,” Appl. Phys. Lett. 90, 041104 (2007).

30. N. Thanh, B. Vy, L. Van, C. B. Raub, L. C. Chang, and N. George, “Automatic phase aberration compensation for digital holographic microscopy based on deep learning background detection,” Opt. Express 25, 15043 (2017).

31. S. Ma, R. Fang, Y. Luo, Q. Liu, S. Wang, and X. Zhou, “Phase aberration compensation via deep learning in digital holographic microscopy,” Meas. Sci. Technol. 32, 105203 (2021).

32. L. Huang, L. Yan, B. Chen, Y. Zhou, and T. Yang, “Phase aberration compensation of digital holographic microscopy with curve fitting preprocessing and automatic background segmentation for microstructure testing,” Opt. Commun. 462, 125311 (2020).

33. S. Liu, Q. Lian, and Z. Xu, “Phase aberration compensation for digital holographic microscopy based on double fitting and background segmentation,” Opt. Lasers Eng. 115, 238–242 (2019).

34. G. Coppola, G. D. Caprio, M. Gioffré, R. Puglisi, and P. Ferraro, “Digital self-referencing quantitative phase microscopy by wavefront folding in holographic image reconstruction,” Opt. Lett. 35, 3390 (2010).

35. W. He, Z. Liu, Z. Yang, J. Dou, X. Liu, Y. Zhang, and Z. Liu, “Robust phase aberration compensation in digital holographic microscopy by self-extension of holograms,” Opt. Commun. 445, 69–75 (2019).

36. D. Deng, W. Qu, W. He, X. Liu, and X. Peng, “Phase aberration compensation for digital holographic microscopy based on geometrical transformations,” J. Opt. 21, 085702 (2019).

37. G. Nehmetallah and P. P. Banerjee, “Applications of digital and analog holography in three-dimensional imaging,” Adv. Opt. Photonics 4, 472–553 (2012).

38. K. Iskakov, “Quick draw irregular mask dataset,” (2020).

39. K. Nazeri, E. Ng, T. Joseph, F. Z. Qureshi, and M. Ebrahimi, “Edgeconnect: Generative image inpainting with adversarial edge learning,” arXiv preprint arXiv:1901.00212 (2019).

40. I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial networks,” arXiv preprint arXiv:1406.2661 (2014).

41. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016), pp. 770–778.

42. P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 1125–1134.

43. J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in Proceedings of the IEEE International Conference on Computer Vision (2017), pp. 2223–2232.

44. D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Improved texture networks: Maximizing quality and diversity in feed-forward stylization and texture synthesis,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 6924–6932.

45. T.-C. Wang, M.-Y. Liu, J.-Y. Zhu, A. Tao, J. Kautz, and B. Catanzaro, “High-resolution image synthesis and semantic manipulation with conditional gans,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 8798–8807.

46. O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, and M. Bernstein, “ImageNet large scale visual recognition challenge,” Int. J. Comput. Vis. 115, 211–252 (2015).

47. D. P. Kingma and J. Ba, “Adam: a method for stochastic optimization,” arXiv preprint arXiv:1412.6980 (2014).

48. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13, 600–612 (2004).

49. L. Zhang, L. Zhang, X. Mou, and D. Zhang, “FSIM: a feature similarity index for image quality assessment,” IEEE Trans. on Image Process. 20, 2378–2386 (2011).

50. M. Takeda, H. Ina, and S. Kobayashi, “Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry,” J. Opt. Soc. Am. 72, 156–160 (1982).

51. A. Greenbaum, U. Sikora, and A. Ozcan, “Field-portable wide-field microscopy of dense samples using multi-height pixel super-resolution based lensfree imaging,” Lab Chip 12, 1242 (2012).

References

  • View by:
  • |
  • |
  • |

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.



Figures (11)

Fig. 1. Experimental system: (a) experimental setup; (b) experimental sample.
Fig. 2. Flow diagram of the aberration-correction algorithm of our method (BS + Inpainting). BS: background segmentation; ⊙: Hadamard product; GAN: generative adversarial network.
Fig. 3. Detailed steps of the BS procedure (a Python sketch of steps ③–⑦ follows this figure list). ① Reconstructing + unwrapping; ② fitting with ZPF; ③ calculating the gradient and binarizing; ④ filling internal gaps; ⑤ deleting connected objects on the image boundary; ⑥ eroding the image; ⑦ smoothing and re-binarizing. ⊙: Hadamard product.
Fig. 4. Dataset acquisition for the experiment: (a) the experimental system and sample; (b) the process of dataset acquisition for training the inpainting network model. Device adjustment: the mirror in (a) is turned to different angles, and MO1 and MO2 in (a) are rotated; data augmentation: rotation (90°, 180°, and 270°) and flipping (horizontal and vertical). MO: microscopic objective.
Fig. 5. Data preprocessing and network architecture: (a) data preprocessing; (b) network architecture. The inputs of $G_1$ are the mask ($M$), the mutilated grayscale hologram $\overset{1-M}{H_{\mathrm{gray}}}$, and the mutilated edge map $\overset{1-M}{E_{gt}}$, which are used to predict an intact edge map. The predicted intact edge map ($E_{G_1}$) and the mutilated hologram $\overset{1-M}{H_{gt}}$ are passed to $G_2$ to perform the inpainting task. The ground truths of Stage 1 and Stage 2 are label 1 ($E_{gt}$) and label 2 ($H_{gt}$), respectively. GAN: generative adversarial network.
Fig. 6. Training process: (a) precision of Step 1 (the edge model); (b) MAE of Step 2 (the inpainting model); (c) loss of generator 2 in Step 3 (the joint model); (d) loss of discriminator 2 in Step 3 (the joint model). MAE: mean absolute error.
Fig. 7. Qualitative evaluation of the inpainting results: (a) original hologram; (b) input mutilated hologram; (c) generated edges; (d) inpainting results without any post-processing.
Fig. 8. Results of each step of our algorithm: (a) the raw hologram containing the phase aberrations and the sample phase; (b) the binary mask obtained by background segmentation; (c) the mutilated hologram with the sample areas removed by (b); (d) the prediction of our trained network from (b) and (c); (e) the reconstructed and unwrapped result of (a), in which the dotted line encloses the sample phase; (f) the reconstructed and unwrapped result of (d).
Fig. 9. Aberration-compensation results for the experimental sample: (a) the result obtained by the proposed method; (b) the 3D rendering of (a); (c) the result obtained by BS + ZPF; (d) the 3D rendering of (c); (e) the result obtained by ZPF; (f) the 3D rendering of (e). The black lines mark the intersection lines for cross-section profiling of the sample.
Fig. 10. Sample measurement with the 3D measuring laser microscope: (a) local top view of the sample; (b) 3D view of the area in the red box in (a). The blue line marks the corresponding cross-section for profiling.
Fig. 11. Cross-sectional profiles of the sample: (a) phase profiles measured by ZPF, BS + ZPF, and the proposed method; (b) the phase profile measured by the LEXT OLS4100.
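The BS steps of Fig. 3 map naturally onto standard morphological operations. Below is a minimal Python sketch of steps ③–⑦ using SciPy and scikit-image; it is not the authors' implementation, and the Otsu threshold, structuring-element radius, and smoothing width are illustrative assumptions. Step ⑥ is written as a dilation of the sample region, which is equivalent to eroding the background mask.

```python
import numpy as np
from scipy import ndimage
from skimage import filters, morphology, segmentation

def background_mask(phase_residual):
    """Binary background mask from the residual phase left after the ZPF fit
    (Fig. 3, steps 3-7); returns True on the background, False on the sample."""
    # Step 3: gradient magnitude of the residual phase, binarized with Otsu.
    grad = filters.sobel(phase_residual)
    sample = grad > filters.threshold_otsu(grad)
    # Step 4: fill internal gaps inside the sample blob.
    sample = ndimage.binary_fill_holes(sample)
    # Step 5: delete connected objects touching the image boundary.
    sample = segmentation.clear_border(sample)
    # Step 6: erode the background, i.e. dilate the sample region, so the
    # mask fully covers the sample rim (the disk radius is a guess).
    sample = morphology.binary_dilation(sample, morphology.disk(5))
    # Step 7: smooth and re-binarize.
    sample = ndimage.gaussian_filter(sample.astype(float), sigma=2) > 0.5
    return ~sample

# Toy usage: a synthetic residual phase with one central bump as the "sample".
y, x = np.mgrid[:256, :256]
residual = 2.0 * np.exp(-((x - 128) ** 2 + (y - 128) ** 2) / 800.0)
M = ~background_mask(residual)   # M = 1 on the sample area, as in Fig. 2
```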

Tables (1)


Table 1. Quantitative Evaluation Metrics

Equations (21)


$$O(x,y) = A_O(x,y)\exp[i\varphi_O(x,y)],$$
$$R(x,y) = A_R(x,y)\exp[i\varphi_R(x,y)],$$
$$H(x,y) = |O + R|^2 = |O|^2 + |R|^2 + R^{\ast}O + RO^{\ast},$$
$$H_{+1\mathrm{FP}}(x,y) = R^{\ast}O = |R||O|\exp[i\varphi(x,y)]\exp[iT(x,y)]\exp[iP(x,y)]\exp[iO_T(x,y)].$$
$$\Phi_1(x,y) = \varphi(x,y) + A(x,y),$$
$$\Phi_0(x,y) = A(x,y) = T(x,y) + P(x,y) + O_T(x,y).$$
$$\varphi(x,y) = \Phi_1(x,y) - \Phi_0(x,y).$$
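As a worked illustration of Eqs. (3)–(7), the sketch below synthesizes an off-axis hologram with and without a sample phase, extracts the +1-order phase by band-pass filtering around the carrier peak in the Fourier plane, and recovers the sample phase by subtracting the two unwrapped maps. This is a toy NumPy example under assumed parameters (carrier position, filter radius), not the experimental pipeline; in the proposed method the sample-free hologram H0 would be the inpainting network's output.

```python
import numpy as np
from skimage.restoration import unwrap_phase

def plus_one_phase(hologram, carrier, radius=40):
    """Wrapped phase of the +1 order: band-pass filter around the carrier
    peak of the centered spectrum (cf. Eq. (4))."""
    spec = np.fft.fftshift(np.fft.fft2(hologram))
    yy, xx = np.ogrid[:hologram.shape[0], :hologram.shape[1]]
    mask = ((yy - carrier[0]) ** 2 + (xx - carrier[1]) ** 2) <= radius ** 2
    return np.angle(np.fft.ifft2(np.fft.ifftshift(spec * mask)))

# Synthetic demo: a tilt carrier T(x, y) with and without a "sample" bump.
N = 256
y, x = np.mgrid[:N, :N]
tilt = 2 * np.pi * (40 * x + 30 * y) / N                        # off-axis tilt
bump = 2.0 * np.exp(-((x - 128) ** 2 + (y - 128) ** 2) / 400)   # sample phase
H1 = 1 + np.cos(tilt + bump)   # raw hologram: sample + aberration
H0 = 1 + np.cos(tilt)          # sample-free hologram (inpainting stand-in)

carrier = (N // 2 + 30, N // 2 + 40)               # +1-order peak after fftshift
Phi1 = unwrap_phase(plus_one_phase(H1, carrier))   # Eq. (5)
Phi0 = unwrap_phase(plus_one_phase(H0, carrier))   # Eq. (6)
sample_phase = Phi1 - Phi0                         # Eq. (7)
```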
$$E_{G_1} = G_1\!\left(M,\ \overset{1-M}{H_{\mathrm{gray}}},\ \overset{1-M}{E_{gt}}\right).$$
$$\min_{G_1}\max_{D_1} L_{G_1} = \min_{G_1}\left(\max_{D_1}(L_{\mathrm{adv},1}) + 10\,L_{FM}\right).$$
$$L_{\mathrm{adv},1} = \mathbb{E}_{(E_{gt},H_{\mathrm{gray}})}\!\left[\log D_1(E_{gt},H_{\mathrm{gray}})\right] + \mathbb{E}_{H_{\mathrm{gray}}}\log\!\left[1 - D_1(E_{G_1},H_{\mathrm{gray}})\right].$$
$$L_{FM} = \mathbb{E}\!\left[\sum_{i=1}^{L}\frac{1}{N_i}\left\|D_1^{(i)}(E_{gt}) - D_1^{(i)}(E_{G_1})\right\|_1\right],$$
$$H_{G_2} = G_2\!\left(\overset{1-M}{H_{gt}},\ E_{\mathrm{filled}}\right).$$
$$L_{\mathrm{adv},2} = \mathbb{E}_{(H_{gt},E_{\mathrm{filled}})}\!\left[\log D_2(H_{gt},E_{\mathrm{filled}})\right] + \mathbb{E}_{E_{\mathrm{filled}}}\log\!\left[1 - D_2(H_{G_2},E_{\mathrm{filled}})\right].$$
$$L_{\mathrm{perc}} = \mathbb{E}\!\left[\sum_{i}\frac{1}{N_i}\left\|\phi_i(H_{gt}) - \phi_i(H_{G_2})\right\|_1\right],$$
$$L_{\mathrm{style}} = \mathbb{E}_j\!\left[\left\|G_j^{\phi}\!\left(\overset{1-M}{H_{G_2}}\right) - G_j^{\phi}\!\left(\overset{1-M}{H_{gt}}\right)\right\|_1\right],$$
$$L_{G_2} = L_1 + 0.1\,L_{\mathrm{adv},2} + 0.1\,L_{\mathrm{perc}} + 250\,L_{\mathrm{style}}.$$
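Equations (9)–(16) follow the loss formulation of EdgeConnect (Nazeri et al., listed above). Below is a hedged PyTorch sketch of how a Stage-2 generator objective with the weights of Eq. (16) could be assembled; the feature maps feats_* are assumed to come from a pretrained network such as VGG-19, and the function names and toy tensors are illustrative rather than the authors' code.

```python
import torch
import torch.nn.functional as F

def gram(f):
    """Gram matrix of a feature map, for the style loss of Eq. (15)."""
    b, c, h, w = f.shape
    f = f.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def generator2_loss(h_pred, h_gt, d2_logits_fake, feats_pred, feats_gt):
    """Stage-2 generator objective with the weights of Eq. (16):
    L_G2 = L1 + 0.1 L_adv,2 + 0.1 L_perc + 250 L_style."""
    l1 = F.l1_loss(h_pred, h_gt)
    # Adversarial term: the generator drives D2 toward labeling fakes "real".
    adv = F.binary_cross_entropy_with_logits(
        d2_logits_fake, torch.ones_like(d2_logits_fake))
    # Perceptual loss (Eq. (14)): L1 distance between feature activations.
    perc = sum(F.l1_loss(fp, fg) for fp, fg in zip(feats_pred, feats_gt))
    # Style loss (Eq. (15)): L1 distance between Gram matrices.
    style = sum(F.l1_loss(gram(fp), gram(fg))
                for fp, fg in zip(feats_pred, feats_gt))
    return l1 + 0.1 * adv + 0.1 * perc + 250.0 * style

# Toy usage with random tensors standing in for network outputs.
h_pred, h_gt = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
logits = torch.randn(1, 1, 8, 8)                 # patch-discriminator output
feats_p = [torch.rand(1, 16, 32, 32) for _ in range(2)]
feats_g = [torch.rand(1, 16, 32, 32) for _ in range(2)]
loss = generator2_loss(h_pred, h_gt, logits, feats_p, feats_g)
```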
$$\mathrm{MAE} = \frac{1}{H \times W}\sum_{i=1}^{H}\sum_{j=1}^{W}\left|X_{\text{ground-truth}}(i,j) - X_{\text{predicted}}(i,j)\right|,$$
$$\mathrm{PSNR} = 10\log_{10}\!\left(\frac{(2^n - 1)^2}{\mathrm{MSE}}\right),$$
$$\mathrm{MSE} = \frac{1}{H \times W}\sum_{i=1}^{H}\sum_{j=1}^{W}\left(X_{\text{ground-truth}}(i,j) - X_{\text{predicted}}(i,j)\right)^2,$$
$$\mathrm{SSIM} = \frac{\left(2\mu_{H_{G_2}}\mu_{H_{gt}} + c_1\right)\left(2\sigma_{H_{G_2}H_{gt}} + c_2\right)}{\left(\mu_{H_{G_2}}^2 + \mu_{H_{gt}}^2 + c_1\right)\left(\sigma_{H_{G_2}}^2 + \sigma_{H_{gt}}^2 + c_2\right)},$$
$$\mathrm{FSIM} = \frac{\sum_{x\in\Omega} S_L(x)\,PC_m(x)}{\sum_{x\in\Omega} PC_m(x)},$$
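Equations (17)–(21) are standard full-reference image-quality metrics. A short NumPy sketch of MAE and MSE/PSNR, with SSIM computed via scikit-image, follows for reference; FSIM (Eq. (21)) requires a phase-congruency computation that scikit-image does not provide and is omitted here. The function names and toy data are illustrative assumptions.

```python
import numpy as np
from skimage.metrics import structural_similarity

def mae(gt, pred):
    """Mean absolute error over an H x W image (Eq. (17))."""
    return np.mean(np.abs(gt - pred))

def psnr(gt, pred, bit_depth=8):
    """Peak signal-to-noise ratio in dB (Eqs. (18)-(19))."""
    mse = np.mean((gt - pred) ** 2)
    return 10 * np.log10((2 ** bit_depth - 1) ** 2 / mse)

# Toy usage on an 8-bit image corrupted with mild Gaussian noise.
rng = np.random.default_rng(0)
gt = rng.integers(0, 256, (128, 128)).astype(float)
pred = np.clip(gt + rng.normal(0, 2, gt.shape), 0, 255)
print(mae(gt, pred), psnr(gt, pred),
      structural_similarity(gt, pred, data_range=255))
```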