
Speckle denoising based on deep learning via a conditional generative adversarial network in digital holographic interferometry

Open Access

Abstract

Speckle denoising can improve phase measurements in digital holographic interferometry, where speckle noise limits experimental accuracy. A deep-learning-based speckle denoising algorithm is developed using a conditional generative adversarial network. Two subnetworks, namely the discriminator and generator networks, which draw on the U-Net and DenseNet layer structures, are used to supervise network learning quality and to perform denoising, respectively. Datasets obtained from speckle simulations are shown to provide improved noise feature extraction. The loss function is designed with peak signal-to-noise ratio terms to improve efficiency and accuracy. The proposed method thus shows better performance than other denoising algorithms when processing experimental strain data from digital holography.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Digital holographic interferometry (DHI) is a coherent technology based on the principles of interference and diffraction [1–4]. It can record the intensity and phase information of 3D scenes simultaneously and has been widely applied to phase measurements [5–9]. In general, the phase data generated by holography combine the diffuse object beam from the illuminated object and the reference beam from the laser illumination source [3,4]. The holographic phase data usually take a mixed form, wrapped modulo 2π and speckled, and the noise is inevitably generated by reflection from the random roughness of the object surface [10]. As one of the common phenomena in DHI, speckle noise must be removed because it reduces the quality of the phase patterns and seriously degrades the holographic phase data analysis performance. Speckle denoising filtering algorithms are typically classified into three kinds [11], namely spatial-domain-based [12–14], transformation-domain-based [15], and deep-learning-based (DL-based) denoising algorithms [16–21].

Since the formal proposal of the DL concept in 2006 [22], it has demonstrated great potential and has been widely used in many optical image processing fields, such as object detection [23], segmentation [24], and denoising [25,26] in optical coherence tomography (OCT), as well as wavefront sensing [27,28]. In recent years, many DL-based methods using various convolutional neural network (CNN) architectures have achieved good results in interferometric image denoising, with the aim of addressing the difficulty of obtaining clean training images corresponding to speckle noise from experiments. A DL-based algorithm using the denoising CNN (DnCNN) architecture [29] was initially proposed to denoise Gaussian noise simulated in fringe patterns, trained with paired images [30,31]. However, in practical DHI applications, the principal factor limiting the post-processing accuracy of interference images is speckle noise, which is non-Gaussian [32,33]. Therefore, accurately designing the noise type of the training set is key to training reduction networks. One such method uses paired speckle noise datasets, for which several denoising algorithms have been proposed. Speckle noise sets are generally obtained by simulations and applied to network architectures based on the DnCNN [17–19] or the ResNet [20], which were originally designed for efficient Gaussian noise reduction. However, the non-Gaussian and nonstationary noise in holographic wrapped phase data renders a series of Gaussian noise reduction algorithms ineffective. Speckle noise reduction in the interference image denoising process is unlikely to be efficient if the DL algorithm was initially aimed at Gaussian noise reduction of noninterference images. Another approach is semisupervised training via a conditional generative adversarial network (GAN), which learns noise features through a second network rather than through accurately paired speckle noise datasets and manual labeling procedures.
In short, a discriminator network is used in place of a finely designed speckle noise training set to train another network, known as the generator network. A DL-based cycle-consistent algorithm via the cycle GAN has been proposed to reduce refractive index tomography noise under the premise of using unpaired images [34]. It is common for researchers to use GANs to handle the lack of clean, paired data in training sets [35–37]. Although such algorithms can omit the corresponding image labels when constructing training sets, they increase the training difficulty and may also reduce the accuracy of the training results. The noise reduction capability is thus weakened, or training may even fail on irrelevant and incomprehensible instances in semisupervised training.

This study proposes a new DHI speckle denoising algorithm based on DL, which combines the advantages of simulated speckle sets and the conditional GAN architecture. The proposed approach mainly improves the structure and loss function of the CNN model, making it better suited for denoising in high-speckle-noise environments. First, we use the simulated speckle algorithm to build training, validation, and testing sets containing pairs of clean and speckled images for actual network training. Second, the conditional GAN structure is added to the overall architectural design, in which one part is the generator network with DenseNet layers [38], used for the speckle noise reduction task in accordance with the traditional U-Net network [39], and the other is the discriminator network used to improve training accuracy. In practice, training the specially designed networks benefits both from the simulated sets and from the conditional GAN structure through its unique loss functions, such that manual calibration of the training set is complemented by a discriminator network integrated into the training process to improve the accuracy of the results. Finally, experiments comparing the proposed method with other algorithms demonstrate its improved feature extraction ability and good noise suppression in DHI imaging.

The remainder of this paper is structured as follows: Section 2 presents the principles of DHI, speckle noise simulations, proposed architecture, and loss functions. Section 3 explores the results and presents the noise reduction application for deformation field measurements. Section 4 presents the conclusions of this study.

2. Methods

Autonomously discovering the conversion relationship between the speckle noisy image domain X and clean image domain Y is the training goal of the proposed method. First, the simulated speckle noise method is used to establish the training, validation, and testing datasets. Second, the network architecture based on conditional GAN is designed by referring to the structural concepts of the U-Net and DenseNet. Third, a set of loss functions is specially designed for speckle noise reduction tasks and specified as the criterion of learning quality, including the loss function ${L_{PSNR}}$ that is improved by considering the peak signal-to-noise ratio (PSNR). In particular, to compensate for possible training errors, the loss value ${L_{cGAN}}$ obtained from the discriminator network is used as a loss function in training the generator network. Finally, quality metrics are used to analyze the denoised results quantitatively and compare the proposed method with various existing algorithms.

2.1 Speckle simulation for DHI image

The principle of the DHI system is shown in Fig. 1. The wave intensity $I(x,y)$ of the speckle pattern in the recording medium consists of the object wave $O(x,y) = {a_o}\textrm{exp} [j{\phi _o}(x,y)]$ and reference wave $R(x,y) = {a_r}\textrm{exp} [j{\phi _r}(x,y)]$, as described by the following equation:

$$\begin{aligned} I &= {|{R + O} |^2} = {|R |^2} + {|O |^2} + ({{R^ \ast }O + R{O^ \ast }} )\\ &= {a_r}^2 + {a_o}^2 + 2{a_r}{a_o}\cos ({{\phi_r} - {\phi_o}} ), \end{aligned}$$
where the wave $I(x,y)$ is dependent on the relationship between the object and reference waves. The first two components in the equation represent the amplitudes of the object and reference waves, which are low-frequency terms, and the final component includes the useful phase, which is a high-frequency term.
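The intensity formation in Eq. (1) can be checked numerically. The short sketch below uses an arbitrary example object phase and a tilted reference wave (both our own illustrative choices) and verifies that the expanded cosine form matches the direct squared-magnitude computation:

```python
import numpy as np

# Verify Eq. (1): |R + O|^2 equals the expanded cosine form.
# The object phase and reference tilt below are arbitrary examples.
x = np.linspace(-1.0, 1.0, 256)
X, Y = np.meshgrid(x, x)

a_o, a_r = 0.8, 1.0                    # wave amplitudes (hypothetical)
phi_o = 2 * np.pi * (X**2 + Y**2)      # example object phase
phi_r = 2 * np.pi * 40 * X             # tilted reference wave

O = a_o * np.exp(1j * phi_o)           # object wave
R = a_r * np.exp(1j * phi_r)           # reference wave

I = np.abs(R + O) ** 2
I_expanded = a_r**2 + a_o**2 + 2 * a_r * a_o * np.cos(phi_r - phi_o)
assert np.allclose(I, I_expanded)
```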

Fig. 1. Schematic of the DHI system. BS1 and BS2 represent beam splitters; M1, M2, and M3 represent mirrors; BE1 and BE2 represent beam expanders; L1 and L2 represent convex lenses; O represents the object wave; R represents the reference wave; CCD refers to the charge-coupled device.

The unique high-frequency characteristics of the phase can be obtained by frequency-domain filtering. The wave intensity $I(x,y)$ is first expressed in the frequency domain using the Fourier transform, as shown in the following equation:

$$F[I({x,y} )] = {G_o}({u,v} )+ G({u + \xi ,v + \eta } )+ {G^ \ast }({ - u - \xi , - v - \eta } ),$$
where $\xi $ and $\eta $ are coefficients related to the projection angle of the reference wave and wavelength of the beam used in the DHI system.

Second, a selective filter is designed to remove the low-frequency parts. Third, the inverse Fourier transform is performed on the remaining high-frequency part to obtain the phase term as ${F^{ - 1}}[G(u,v)]$. Finally, the phase of the object wave can be obtained by:

$$\phi(x,y) = \arctan\frac{\operatorname{Im}\left({F^{-1}}[G(u,v)]\right)}{\operatorname{Re}\left({F^{-1}}[G(u,v)]\right)}.$$
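The full filtering pipeline of Eqs. (2)–(3) — forward FFT, selection of one high-frequency order, carrier removal, inverse FFT, and arctangent — can be sketched as follows. The carrier frequency, mask radius, and test phase are hypothetical illustrative choices; depending on which order is selected, the recovered phase may carry the opposite sign.

```python
import numpy as np

def extract_phase(I, carrier, radius):
    """Recover the wrapped object phase from an off-axis hologram by
    Fourier filtering (illustrative sketch; parameters hypothetical)."""
    M, N = I.shape
    spectrum = np.fft.fftshift(np.fft.fft2(I))
    u = np.arange(N) - N // 2
    v = np.arange(M) - M // 2
    U, V = np.meshgrid(u, v)
    mask = (U - carrier) ** 2 + V ** 2 < radius ** 2   # selective filter
    G = spectrum * mask
    G = np.roll(G, -carrier, axis=1)                   # remove the carrier
    field = np.fft.ifft2(np.fft.ifftshift(G))
    return np.angle(field)         # arctan(Im/Re), wrapped in (-pi, pi]

# Demo on a synthetic hologram with a 64-cycle carrier
n = np.arange(256) / 256
X, Y = np.meshgrid(n, n)
phi_o = 2 * np.pi * (X ** 2 + Y ** 2)
I = np.abs(np.exp(1j * 2 * np.pi * 64 * X) + np.exp(1j * phi_o)) ** 2
phi = extract_phase(I, carrier=64, radius=30)
```

Selecting the order at $+$carrier demodulates the $R{O^\ast}$ term, so the sketch returns the sign-flipped object phase, wrapped in $(-\pi, \pi]$.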

The schematic of the digital holographic speckle pattern interferometry (DHSPI) system is shown in Fig. 1, where physical quantities such as the internal stress of the object and the deformation of the opaque object surface are measured via the changing phase. The principle is to obtain the phase difference by recording holograms before and after object deformation at different times and using numerical reconstruction. Assuming that the wave intensities at times ${t_1}$ and ${t_2}$ are ${I_1}(x,y)$ and ${I_2}(x,y)$, the measured phases are ${\phi _1}(x,y)$ and ${\phi _2}(x,y)$, respectively. The phase difference between the two time instances is $\Delta \phi (x,y) = {\phi _2}(x,y) - {\phi _1}(x,y)$ and is wrapped in the $( - \pi ,\pi ]$ interval; it can be decomposed using an exponential function and expressed by two fringe patterns in the form of a complex function with the Euler formula:

$$\textrm{exp} [{j\Delta \phi ({x,y} )} ]= \cos [{\Delta \phi ({x,y} )} ]+ j\sin [{\Delta \phi ({x,y} )} ].$$

Here, the real and imaginary parts can also be presented in the form of fringe patterns, with $\cos [\Delta \phi (x,y)]$ being the cosine fringe pattern, $\sin [\Delta \phi (x,y)]$ being the sine fringe pattern, and the values being in the range of $( - 1,1]$. As shown in Fig. 2(a), a wrapped phase image is obtained from DHSPI, and the two decomposed fringe patterns are shown in Figs. 2(b) and 2(c).

Fig. 2. (a) Phase wrapped in (−π, π] generated from DHSPI, (b) fringe pattern of the real part of (a), also known as the cosine part, (c) fringe pattern of the imaginary part of (a), also known as the sine part, in the range (−1, 1].

The denoising algorithm flow proposed in this work first decomposes the wrapped image into the cosine and sine fringe patterns, then denoises the individual fringe patterns, and finally recombines the fringe patterns into wrapped images through the inverse operation of Eq. (4). Therefore, realistic numerical simulations of the speckle dataset are required to train the DL model, including the holographic interference phase-wrapping data and corresponding speckle patterns. The simulation and speckle generation via the spectrum of the beam field of a 4-f optical system are described in Ref. [32]. The schematic, formula, and statistics of the simulation process are shown in Fig. 3.
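The decompose–denoise–recombine flow can be verified with a short round trip. With the denoising step left as an identity (a placeholder for any filter), recombining the cosine and sine fringe patterns recovers the wrapped phase exactly:

```python
import numpy as np

# Round trip of the proposed flow: split a wrapped phase map into its
# cosine and sine fringe patterns (Eq. 4), denoise each pattern (omitted
# here), then recombine with a four-quadrant arctangent.
rng = np.random.default_rng(0)
dphi = rng.uniform(-np.pi, np.pi, (64, 64))   # a wrapped phase map

cos_fringe = np.cos(dphi)   # real part of exp(j*dphi)
sin_fringe = np.sin(dphi)   # imaginary part of exp(j*dphi)

# ... a denoising filter would act on cos_fringe and sin_fringe here ...

dphi_back = np.arctan2(sin_fringe, cos_fringe)
assert np.allclose(dphi, dphi_back)
```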

The process can be described by the following equation:

$${O_s}({x,y} )= {F^{ - 1}}\{{d({{k_1},{k_2}} )\times F[{O({x,y} )} ]} \},$$
where $d({k_1},{k_2})$ is a low-pass filter used to simulate the aperture applied to the object beam, and F and ${F^{ - 1}}$ represent the forward and inverse Fourier transforms, respectively. In the simulation process, speckle fringe patterns with different speckle grain sizes are generated according to Eq. (5). Then, the phase before and after deformation is varied from $\varphi$ to $\varphi + \Delta \phi$, with values set in $( - \pi ,\pi ]$. Finally, the wrapped phase with speckle noise is obtained using Eq. (3). In this simulation, as shown in Fig. 3(a), the speckle grain size in the simulated phase data is controlled by adjusting the diaphragm value ${R_u} = 1/{N_s}{P_x}$, where ${N_s}$ and ${P_x}$ are the speckle size in pixels and the pixel pitch in the image plane, respectively [18]; as the number of pixels per speckle grain (i.e., the speckle size) increases, the SNR decreases [32], as shown in Fig. 3(b). The standard deviation of the speckles is not easily controlled, but the surface deformation amplitude and speckle grain size can be managed efficiently; therefore, various speckle sizes with different SNRs are used to establish the datasets, an approach widely adopted in related work [17,18]. This paper considers 2, 4, and 6 pixels per speckle grain to establish the training, validation, and testing datasets.
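A minimal sketch of the speckle generation in Eq. (5) is given below. The function name, the random-roughness model, and the aperture-radius formula in pixels are our own illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def add_speckle(phi, n_pix_per_speckle=4, seed=0):
    """Sketch of Eq. (5): speckle a phase map by low-pass filtering a
    random-rough object field with a circular aperture d(k1, k2)."""
    rng = np.random.default_rng(seed)
    M, N = phi.shape
    # Diffuse object wave: deterministic phase plus random surface roughness
    O = np.exp(1j * (phi + 2 * np.pi * rng.random((M, N))))
    # Circular low-pass filter d(k1, k2) modeling the aperture
    kx = np.fft.fftfreq(N) * N
    ky = np.fft.fftfreq(M) * M
    K1, K2 = np.meshgrid(kx, ky)
    R_u = N / (2 * n_pix_per_speckle)          # aperture radius in pixels
    d = (K1 ** 2 + K2 ** 2) <= R_u ** 2
    O_s = np.fft.ifft2(d * np.fft.fft2(O))     # Eq. (5)
    return np.angle(O_s)                       # speckled wrapped phase
```

In this sketch, a larger `n_pix_per_speckle` shrinks the aperture and enlarges the speckle grains, consistent with the SNR trend in Fig. 3(b).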

Fig. 3. (a) Scheme for generating speckled phase data with controlled speckle sizes by setting the number of pixels per speckle grain [32], (b) statistics of the SNR distributions of the three kinds of simulated speckle noise image datasets, each containing 1800 simulated images.

The simulated results show that the phase decorrelation noise is non-Gaussian and that the speckle noise follows the statistical law in [32]. Figure 4 shows the wrapped phase and fringe patterns obtained through the simulated data algorithm; Fig. 4(a) shows a noise-free image of 512×512 pixels, Figs. 4(b-d) show images with multiple levels of speckle noise generated using different aperture sizes, and Figs. 4(e, f) show locally enlarged views of these noisy patterns.

Fig. 4. Multi-aperture simulation of speckle images with different speckle grain sizes.

2.2 Architecture and objective

The algorithm proposed herein is based on the conditional GAN framework, which is composed of two mutually restricted networks, i.e., the generator and discriminator, as shown in Fig. 5. The generator network G minimizes the target in the training process, whereas the discriminator network D maximizes the target. The training goal of G is to generate clean data $\bar{y} = G(x)$ from speckle noise data x, while the training goal of D is to find the differences between the actual data y and generated clean data $\bar{y}$ so as to guide the training direction of G.

Fig. 5. Training a conditional GAN to map noisy data to clean data. The discriminator network D learns to classify between the fake {x, G(x)} and real {x, y} couples, while the generator network G learns to fool the discriminator D.

Generally speaking, the growth of the generator network in deep learning requires a group of loss functions to indicate the direction of training. Such a loss function can be a static spatial measure computed by a fixed process, such as ${L_1}$ and ${L_{PSNR}}$, or a dynamic criterion provided by a discriminator network that grows together with the generator, such as ${L_{cGAN}}$. Specifically, the generator and discriminator are trained simultaneously for this speckle noise reduction task, given a set of corresponding speckle noise and noise-free images as condition inputs, as shown in Fig. 5. The goal of the generator is to make the spatial distribution of the denoised G(x) the same as that of the target noise-free y; the purpose of the discriminator is to distinguish between G(x) and y given x and to feed the results back to the generator in the form of ${L_{cGAN}}$. When the discriminator can no longer differentiate between the real noise-free y and the denoised data G(x), a generator adequate for noise reduction has been achieved, and the discriminator network is discarded in the subsequent prediction task.

The architectures of the discriminator and generator networks for wrapped phase denoising obtained from the DHSPI are shown in Figs. 6 and 7, respectively.

Fig. 6. Architecture of the discriminator network D of the conditional GAN.

The discriminator network D consists of seven layers that are trained to evaluate and provide feedback on the training results so as to improve the training accuracy of G. The generator network G consists of 257 layers based on the U-Net and DenseNet concepts and is trained to generate clean images from the speckle noise images.

Specific to this speckle noise reduction task, as shown in Fig. 7, the proposed model takes as input the real and imaginary (cosine and sine) fringe patterns in the range $( - 1,1]$, converted from the wrapped data in $( - \pi ,\pi ]$; that is, one discontinuous image is converted into two continuous ones. The computation alternates between the linear DenseNet blocks and the nonlinear LeakyReLU units. Maxpool and Upsample layers are also used in the architecture, as scaling the image feature layers is beneficial for dealing with different speckle grain sizes. In addition, the following measures can improve the running speed but may affect the accuracy: shortening the DenseNet blocks [38], linearly mapping the input and output to $(0,1]$ in advance, and using ReLU units. Moreover, we recommend reversibly mapping the data from a nonlinear to a linear representation, as in the wrapped-to-fringe conversion in this task; reducing the task difficulty by increasing the data dimension is a scheme that can improve both running speed and accuracy.

Fig. 7. Architecture of the generator network G of the conditional GAN.

The loss function determines the direction of network training. The final target loss function for training the noise reduction network proposed here is composed of three parts, namely the conditional GAN loss function ${L_{cGAN}}$, PSNR loss function ${L_{PSNR}}$, and ${L_\textrm{1}}$ loss function, and can be expressed as follows:

$${G^\ast } = \arg \mathop {\min }\limits_G \left[ {\mathop {\max }\limits_D {L_{cGAN}}({G,D}) + \lambda {L_{PSNR}}(G) + \beta {L_1}(G)} \right],$$
where G is the generator network used in the conditional GAN to map from the speckle noise image domain X to the clean image domain $\bar{Y}$; this mapping is the primary goal of DL learning and exists in the form of weights of the neurons in the network. The discriminator network D is trained to grow together with G simultaneously; it is used to measure the quality of data generated by G in the training process and to provide real-time feedback to G to promote its growth. The dynamic quality evaluation coefficient ${L_{cGAN}}$ is self-growing compared with traditional static quality evaluation coefficients, such as the ${L_\textrm{1}}$ and ${L_\textrm{2}}$ distances. A previous study showed that the mixed use of the dynamic and standard static loss functions could improve the network training accuracy effectively [40,41].

The objective of the conditional GAN in Eq. (6) can be expressed as follows:

$${L_{cGAN}}({G,D} )= {\mathrm{\mathbb{E}}_{x,y}}[{\log D({x,y} )} ]+ {\mathrm{\mathbb{E}}_x}[{\log ({1 - D({x,G(x )} )} )} ],$$
where x is the input speckle noise image, and y is the target image. As one of the widely used noise quantization indexes, the PSNR is used to formulate the second part of the total loss function equation to maximize the effects of speckle noise reduction; it is closely related to the ${L_2} = {\mathrm{\mathbb{E}}_{x,y}}[||y - G(x)||_2^2/2]$ loss function. Its range can be adjusted through the maximum value of each input, i.e., the infinity-norm value, and it can be expressed as follows:
$${L_{PSNR}}(G )= {\mathrm{\mathbb{E}}_{x,y}}\left[ {{{\left( {10 \cdot {{\log }_{10}}\frac{{||y ||_\infty^2({MN} )}}{{||{y - G(x )} ||_2^2}}} \right)}^{ - 1}}} \right],$$
where $||y||_\infty ^2$ denotes the square of the maximum absolute value of y, $||\cdot||_2^2$ is the squared 2-norm, and $M,N$ are the numbers of rows and columns, respectively.

The last part of the total loss function is the ${L_1}$ loss function, which can be expressed as:

$${L_1}(G )= {\mathrm{\mathbb{E}}_{x,y}}[{{{||{y - G(x )} ||}_\textrm{1}}} ].$$
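The three loss terms of Eq. (6) can be sketched in plain NumPy as follows. In practice they would be implemented in a DL framework on batches, with `d_real` and `d_fake` coming from the live discriminator; all names here are illustrative:

```python
import numpy as np

def l1_loss(y, g_x):
    """Eq. (9): expected absolute error between target y and output G(x)."""
    return np.mean(np.abs(y - g_x))

def lpsnr_loss(y, g_x):
    """Eq. (8): reciprocal of the PSNR, so minimizing it maximizes PSNR."""
    M, N = y.shape
    psnr = 10 * np.log10(np.max(np.abs(y)) ** 2 * (M * N)
                         / np.sum((y - g_x) ** 2))
    return 1.0 / psnr

def lcgan_loss(d_real, d_fake):
    """Eq. (7): discriminator scores on real {x, y} and fake {x, G(x)}."""
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))
```

With the weights $\lambda = 1$ and $\beta = 1$ used later in Section 3.1, the generator would minimize `lpsnr_loss + l1_loss` plus its share of the adversarial term.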

In the traditional conditional GAN training process, the input x is often supplemented with a noise vector z to prevent overfitting of the network [41], expressed as $\bar{y} = G({x_{withoutNoise}},{z_{Noise}})$ instead of $\bar{y} = G({x_{withNoise}})$. Owing to the specificity of the task, the training set is superimposed with speckle noise, and other studies in this field show that the noise vector z does not noticeably affect the results, as the generator tends to ignore it [42]; hence, we train the network model by increasing the number of datasets containing speckle noise instead of adding the noise vector z, thereby ensuring the accuracy of the results while reducing the task difficulty. In the proposed method, the input thus changes from the domain $\{ {x_{withoutNoise}},{z_{Noise}}\}$ to the domain $\{ {x_{withNoise}}\}$, as shown in Eq. (7).

2.3 Quantitative appraisal metrics

Quality metrics are typically used to judge the qualities of different speckle noise reduction methods. The image quality evaluation indexes are divided into objective and subjective types based on whether the original noise-free phase pattern can be retrieved. The objective quality metrics can be calculated using the corresponding formulas from the differences between the noise-free image ${v_m}$ and the processed image ${w_m}$ of the dataset ${D_k}$, which includes k images of size $M \times N$. The generally recognized objective evaluation standards are used in this study, including the mean absolute error (MAE) given as ${\mathrm{\mathbb{E}}_{m \sim {D_k}}}[||{v_m} - {w_m}|{|_1}/(MN)]$, mean-squared error (MSE) given as ${\mathrm{\mathbb{E}}_{m \sim {D_k}}}[||{v_m} - {w_m}||_2^2/(MN)]$, signal-to-noise ratio (SNR) given by Eq. (10), PSNR given by Eq. (11), and structural similarity index (SSIM) given by Eq. (12):

$$SNR({v,w}) = {\mathrm{\mathbb{E}}_{m \sim {D_k}}}\left[ {10 \cdot {{\log }_{10}}\frac{{||{v_m} - {{\bar{v}}_m}||_2^2}}{{||{v_m} - {w_m}||_2^2}}} \right],$$
$$PSNR({v,w}) = {\mathrm{\mathbb{E}}_{m \sim {D_k}}}\left[ {10 \cdot {{\log }_{10}}\frac{{||{v_m}||_\infty^2({MN})}}{{||{v_m} - {w_m}||_2^2}}} \right],$$
$$SSIM({v,w}) = {\mathrm{\mathbb{E}}_{m \sim {D_k}}}\left[ {\frac{{({2{\mu_v}{\mu_w} + {c_1}})({2{\sigma_{vw}} + {c_2}})}}{{({\mu_v^2 + \mu_w^2 + {c_1}})({\sigma_v^2 + \sigma_w^2 + {c_2}})}}} \right].$$

Here, $\mu$, $\sigma^2$, and ${\sigma _{vw}}$ are the mean intensity, variance, and covariance of the images, respectively, and ${c_1}$, ${c_2}$ are two constants used to stabilize the SSIM equation [43,44]. These objective quality metrics are used to measure the quality of the training process and compare the noise reduction performances of the various noise reduction algorithms with the proposed method. In instances where noise-free data ${v_m}$ cannot be obtained directly for comparison with the processed data, subjective quality metrics are used; the phase images are further processed for visualization of the contours, and the efficiency of noise reduction is subjectively determined by observing the continuities of the fringe contours.
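The objective metrics above can be sketched per image pair as follows. Note that `ssim_global` evaluates Eq. (12) over the whole image, a simplification of the usual locally windowed SSIM [43,44], and the stabilizing constants are arbitrary small values:

```python
import numpy as np

def snr_db(v, w):
    """Eq. (10) for a single image pair, in dB."""
    return 10 * np.log10(np.sum((v - v.mean()) ** 2)
                         / np.sum((v - w) ** 2))

def psnr_db(v, w):
    """Eq. (11) for a single image pair, in dB."""
    M, N = v.shape
    return 10 * np.log10(np.max(np.abs(v)) ** 2 * (M * N)
                         / np.sum((v - w) ** 2))

def ssim_global(v, w, c1=1e-4, c2=9e-4):
    """Eq. (12) computed globally over the whole image (a simplification;
    SSIM is normally averaged over local windows)."""
    mu_v, mu_w = v.mean(), w.mean()
    cov = np.mean((v - mu_v) * (w - mu_w))
    return ((2 * mu_v * mu_w + c1) * (2 * cov + c2)) / \
           ((mu_v ** 2 + mu_w ** 2 + c1) * (v.var() + w.var() + c2))
```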

3. Results

3.1 Network training

As shown in Fig. 7, the real and imaginary parts of the wrapped phase images are applied to the input of the generator network G to obtain the denoised images at the output, and the discriminator network D guides the training direction via quality evaluations of the data generated by network G. The networks are trained and tested on the simulated speckle noise data, comprising 4000 training images, 1200 validation images, and 1200 testing images, each of size 256×256 pixels. Moreover, in the training set, the speckle size of 25% of the images was set to 2 pixels, that of another 25% was set to 4 pixels, and that of the remaining 50% was set to 6 pixels; the average SNR was 2.18 dB and varied over the range of −3.34 dB to 10.40 dB. The percentage distributions of the validation and testing sets were set slightly different from that of the training set to verify the generalization ability of the deep learning models, as shown in Fig. 8.

Fig. 8. SNR statistics of the training, validation, and testing sets.

In the neural network training process, the traditional DnCNN and U-Net were trained using the same dataset to enable comparison. The specific training parameters are presented in Table 1. A total of 200 epochs and 200,000 iterations with a batch size of 4 were performed. The DnCNN used a single ${L_2}$ loss function and the U-Net used a single ${L_1}$ loss function, whereas the proposed method used the loss function described in Section 2.2. Stochastic gradient descent was used for the minimization. The total network training times were about 35.9 h for the DnCNN, about 13.6 h for the U-Net, and approximately 30.6 h for the proposed method on an i9-10900kf CPU and NVIDIA RTX3090 GPU. Trained weights were then used for denoising; the computation time of the proposed method was similar to that of the U-Net and shorter than that of the DnCNN, as shown in Table 1, where 1200 images of size 256×256 pixels were tested with different computation batch sizes. Because the loss functions of the compared networks contain ${L_1}$, which is proportional to the MAE, we compare them against the number of iterations, as shown in Fig. 9. The performance of the proposed network designed for speckle noise reduction, on both the training and validation sets, was better than those of the traditional DnCNN and U-Net approaches. To compare the overall effects, the PSNR was used as the metric for quantitative appraisal.

Fig. 9. Evolutions of (a) PSNR and (b) MAE during the training process.


Table 1. Parameters for training the network models.


Table 2. Performance for various losses, evaluated on the validation set.

It is seen from Fig. 9(a) that in a given training epoch, the PSNR of the noise reduction results of the proposed algorithm on the training and validation datasets is better than those of the DnCNN and U-Net schemes, and Fig. 9(b) shows that the MAEs of the proposed algorithm are lower than those of the other two algorithms. It can thus be determined that, for the same training epochs, the proposed algorithm is improved in terms of both efficiency and accuracy. Moreover, as shown in Figs. 9(a) and 9(b), at epochs close to 200, the validation curve of the proposed method has converged while the training curve has not; early stopping is required to avoid overfitting, a criterion widely used by other peers [24,28]. The same is true for the U-Net model in the comparison. As shown in Table 2, averaging over the last 40 of the 200 epochs, different loss functions have different orientations for network training: ${L_{PSNR}}$ tends to improve the PSNR, while ${L_1}$ and ${L_{cGAN}}$ help improve the accuracy of the results; we use $\lambda = 1$ and $\beta = 1$ to train the proposed model for better precision and edge sharpness.

3.2 Comparison with similar algorithms

Representative speckle noise filtering methods are selected and compared with the noise reduction method proposed in this work. Speckle noise filtering methods can generally be divided into spatial-domain, transform-domain, and DL technologies. In this study, the block-matching and 3D filtering (BM3D) method [12] based on spatial-domain technology is used as the first algorithm; its basic idea derives from the observation that natural images contain many similar repetitive structures, which are collected and aggregated by image block matching and subsequently grouped for orthogonal transformation and filtering. The BM3D method reduces noise while fully retaining the structures and details of the images to obtain excellent SNR results. Moreover, optimization-based methods [13,14] have unique advantages in recovering dislocation structures beyond speckle denoising. Second, based on transform-domain technology, the two-dimensional windowed Fourier transform (WFT2F) method [15] is selected as another evaluation algorithm; this is a denoising algorithm specially designed for phase filtering of speckle and holographic data. Finally, the traditional DnCNN and U-Net models [29,39] are selected as the DL-based algorithms. Figure 10 shows a comparison of the results based on the PSNR, RMSE, and SSIM for the testing set described in Section 2.3. All three DL models were trained for 200 epochs.

Fig. 10. Statistics for the testing set comparisons: (a) rankings based on PSNR, (b) mean of the PSNR of the mix pixels, (c) rankings based on RMSE, (d) mean of the RMSE of the mix pixels, (e) rankings based on SSIM, and (f) mean of the SSIM of the mix pixels.

Figure 10(a) shows a comparison of the average phase PSNR values before and after noise reduction, and Fig. 10(b) shows the statistical comparison of the PSNRs of the wrapped phase images for the five examined methods. The data were based on the 1200 simulated speckle noise images in the testing set, including 400 images with a speckle radius of 2 pixels, 400 with a radius of 4 pixels, and 400 with a radius of 6 pixels. For all speckle noise control groups, although the WFT2F performs better than the DL-based DnCNN, the method proposed in this work and the U-Net are better than the traditional BM3D and WFT2F noise reduction methods; i.e., the potential of the DL-based methods exceeds that of the conventional methods. Figure 10(b) shows the median statistical results for all 1200 images (mixed pixels), indicating the superiority of the proposed method. It is worth noting that there are many abnormal PSNR values in the results of the DnCNN model in Fig. 10(b); in conjunction with Fig. 10(a), it can be determined that the DnCNN, whose structure was initially designed for Gaussian noise denoising, is inadequate for handling speckle noise. Combined with the trend of the loss function curves in Fig. 9, the underfitting of the DnCNN is reflected in the high errors of the training and testing curves, which indicates that the structural capacity of the DnCNN has reached its upper limit for speckle noise reduction. The number of trainable parameters in Table 1 also shows that the DnCNN method had the lowest potential among the three DL-based methods. Therefore, the DL-based models for Gaussian noise reduction require a series of modifications before they can be better applied to speckle noise reduction. It is further proven that the architecture of the proposed method can efficiently process the mappings for high and low speckle noise domains simultaneously.

Figure 10(c) shows the average RMSE of the phase errors for the five methods and the three speckle sizes, and Fig. 10(d) shows the median statistics of the noise reduction results over the 1200 testing images. Figure 10(e) compares the SSIM values of the five methods, measuring the structural similarity between the denoised and original images, and Fig. 10(f) shows the corresponding median statistics over all 1200 testing images.
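The three indexes can be computed directly from a ground-truth phase map v and a denoised map w. A minimal NumPy sketch, assuming the wrapped-phase dynamic range 2π as the peak value and, for brevity, a single global SSIM window instead of the usual sliding Gaussian window:

```python
import numpy as np

def psnr(v, w, data_range=2 * np.pi):
    """Peak signal-to-noise ratio in dB; data_range spans the wrapped interval (-pi, pi]."""
    mse = np.mean((v - w) ** 2)
    return 10 * np.log10(data_range**2 / mse)

def rmse(v, w):
    """Root-mean-square error between ground-truth and denoised phase maps."""
    return np.sqrt(np.mean((v - w) ** 2))

def global_ssim(v, w, data_range=2 * np.pi):
    """Single-window SSIM (Wang et al. [43]) computed over the whole image."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_v, mu_w = v.mean(), w.mean()
    var_v, var_w = v.var(), w.var()
    cov = np.mean((v - mu_v) * (w - mu_w))
    return ((2 * mu_v * mu_w + c1) * (2 * cov + c2)) / (
        (mu_v**2 + mu_w**2 + c1) * (var_v + var_w + c2))
```

Library implementations (e.g. scikit-image's `structural_similarity`) use local windows and would give slightly different SSIM values than this global sketch.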

From the results in Figs. 10(a)–10(f), the following observations can be made. First, with careful optimization of the network architecture and training process, the DL-based methods can outperform the traditional noise reduction methods. Second, if the complexity of the task exceeds the capacity of the neural network design, the overall performance of the DL-based method does not collapse; instead, the network balances its limited capacity against the general task requirements, distributing its strength according to the training data. For example, as shown in Figs. 10(c) and 10(e), after handling the high-noise domain mapping (6 pixels), the DnCNN model has no capacity left for the low-noise domain mappings (2 and 4 pixels); one reason is that the high-noise samples in the training set are twice as numerous as the low-noise samples. Finally, the statistical charts confirm the superiority of the proposed method.

Figure 11 compares specific phase images before and after noise reduction, and Table 3 gives a quantitative appraisal of the results. Owing to the particular applications of wrapped phase images, structural integrity is of great importance because it often determines the accuracy of subsequent measurements, which cannot be captured by a single technical index. We therefore extracted three wrapped phase images with a speckle radius of 6 pixels and different fringe densities for the noise reduction comparison, so that the effect of each method can be observed intuitively. The ground truth in Fig. 11 is the wrapped phase image without speckle noise, which is the target of the denoising methods. Figures 11(a), 11(c), and 11(e) show the comparisons before and after noise reduction, and Figs. 11(b), 11(d), and 11(f) show the cross-sectional profiles of these comparisons. The comparisons show that the results of the proposed method are closer to the ground truth than those obtained with the other algorithms.


Fig. 11. Noise reduction results for different phase densities with a speckle radius of 6 pixels. (a), (c), and (e) are the denoising results of each method on speckle noise images with various phase densities. (b), (d), and (f) are line profiles from (a), (c), and (e), respectively, comparing the effect of the denoising methods against the input (X, noisy) and ground truth (Y, noise-free) images.



Table 3. Quantitative appraisal of challenger algorithms.

3.3 Application of holographic measurements to determine strain

As an optical nondestructive testing technology, the DHI scheme shown in Fig. 1 provides full-field object detection. The phase of the object wavefield can be measured easily, from which the height information, surface morphology, deformation field, and the stress and strain of the object can be obtained. The equipment is briefly described as follows: the recording wavelength was 532 nm, and the distance d0 between the object and the charge-coupled device (CCD) recording plane was 1920 mm. The CCD camera was a GRAS-50S5M-C, and the measured object was an aluminum alloy disc with a fixed periphery, as shown in Fig. 12(a). The surface diameter, thickness, elastic modulus, and Poisson's ratio of the disc were 80 mm, 2.4 mm, 71 GPa, and 0.3, respectively. During the experiments, a point load was applied to the center of the disc with a rotating micrometer, which produced the out-of-plane and in-plane displacements of the object surface shown in Fig. 12(a). The CCD recorded digital holograms of the object before and after deformation. After holographic reconstruction, the phase difference before and after deformation was extracted; the strain field of the object was thus recorded and reconstructed as a phase-wrapping diagram with speckle noise, as shown in Fig. 12(b). Noise-free experimental data were then obtained by denoising with the proposed method, as shown in Fig. 12(d). For an intuitive comparison between the experimental and ideal data, the ideal displacement field ${d_z}$ under a concentrated force of 18 N was calculated with finite element software and the DHI simulation program [3,4], as shown in Fig. 12(c). Comparing the ideal data in Fig. 12(c) with the denoised experimental data in Fig. 12(d) reveals a difference: the ideal deformation strain field is a perfect circle, whereas the experimental one is not.
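The phase-difference step above can be sketched in a few lines of NumPy: the wrapped difference is the argument of the product of the "after" field with the conjugate of the "before" field. The Gaussian displacement bump below is a hypothetical stand-in for the real deformation, not the paper's measured data:

```python
import numpy as np

def wrapped_phase_difference(field_before, field_after):
    """Wrapped phase difference of two reconstructed complex object fields.

    np.angle wraps into (-pi, pi], i.e. the modulo-2*pi phase-wrapping
    diagram of Fig. 12(b) (plus speckle noise in a real measurement).
    """
    return np.angle(field_after * np.conj(field_before))

# hypothetical out-of-plane displacement field converted to a phase bump
y, x = np.mgrid[-64:64, -64:64]
bump = 8.0 * np.exp(-(x**2 + y**2) / (2 * 20.0**2))   # radians, Gaussian-shaped
O_before = np.exp(1j * np.zeros_like(bump))
O_after = np.exp(1j * bump)
dphi = wrapped_phase_difference(O_before, O_after)    # wrapped circular fringes
```

Because the bump exceeds 2π, `dphi` exhibits the concentric wrapped fringes characteristic of a centrally loaded disc.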


Fig. 12. Schematic diagram of deformation field measurement by DHI and simulation. (a) The measured object with the deformation field, (b) schematic diagram of DHI recording and reconstruction, (c) digital simulation of the deformation field and the ideal data, and (d) denoising of the speckle noise data obtained by experiment.


Figure 13 shows the phase-wrapping diagram including speckle noise, to which the above noise reduction methods were applied for comparison. The data were reconstructed at the correct distance of 1920 mm with a size of 1024×1024, as shown in Fig. 13(a); the enlarged images of the center region are shown in Fig. 13(b), and Fig. 13(c) shows the locally enlarged isometric diagrams of the various noise reduction results. For comparison, the data were also reconstructed at an erroneous distance of 2400 mm to obtain more speckle noise, as shown in Fig. 13(d); the enlarged images are shown in Fig. 13(e), and Fig. 13(f) depicts the locally enlarged isometric diagrams. The denoising effects can be judged subjectively by observing the continuity of the fringe contours. Because noise-free phase-wrapping diagrams could not be obtained directly in the actual experiments, we used the spatial frequency (SF) as a no-reference spatial quality metric to evaluate the denoising results of 100 experimental images, as shown in Table 4. By this metric, the proposed method produced the best results among all the methods.
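SF is typically defined from the RMS first differences of the image along rows and columns, SF = sqrt(RF² + CF²). A short sketch under that common definition (note that, being no-reference, SF alone cannot distinguish genuine noise suppression from over-smoothing):

```python
import numpy as np

def spatial_frequency(img):
    """No-reference spatial frequency: SF = sqrt(RF^2 + CF^2),
    with RF/CF the RMS first differences along rows/columns.
    (The classic definition averages over M*N pixels; averaging over
    the M*(N-1) difference terms, as here, is a close variant.)"""
    img = np.asarray(img, dtype=float)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # horizontal activity
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # vertical activity
    return np.sqrt(rf**2 + cf**2)
```

Residual speckle inflates the pixel-to-pixel differences, so comparing the SF of each method's output against that of the noisy input gives a reference-free ranking of the kind reported in Table 4.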


Fig. 13. Phase-wrapping diagrams caused by the strain change field recorded by DHI. (a) Noisy data obtained by reconstruction at the accurate distance of 1920 mm; (d) noisy data obtained by reconstruction at an erroneous distance of 2400 mm, which contains more speckle noise for comparison; (b) and (c) locally enlarged comparative images of the accurately reconstructed data denoised by the different algorithms; (e) and (f) the corresponding comparative images of the erroneously reconstructed data.



Table 4. SF of the noisy data and of the results of the five denoising methods.

4. Discussion and conclusion

This study presents a new DL-based algorithm to reduce the speckle noise of wrapped-phase images generated by DHI; using the conditional GAN architecture and well-designed loss functions, the algorithm performs better than traditional networks trained on the same training set. Comprehensive experiments verify the effectiveness and generality of the proposed method. The experimental results show that, compared with existing algorithms, the proposed method achieves better visual quality and higher objective indexes.

The proposed method is suitable for holographic wrapped images from different types of holographic equipment and other modalities. Because truly noiseless, clean data cannot be obtained directly from experiments, only simulated speckle noise sets were used to train the proposed model; the method therefore has some limitations. Although it achieves good generality on real data when trained with simulated datasets, we believe its performance can be further enhanced if real data are added to the training set, or if the network is improved to reduce its dependence on clean data during training. Therefore, our future research will explore noise-free training sets suitable for holographic wrapped data, reduce the dependence on clean training data by improving the network structure, and migrate existing algorithms from traditional fields to the DL-based domain. In addition, improving the matrix-operation efficiency of the DL model is another line of exploration for future work.

Funding

National Natural Science Foundation of China (11862008, 62165007).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. J. W. Goodman and R. W. Lawrence, “Digital image formation from electronically detected holograms,” Appl. Phys. Lett. 11(3), 77–79 (1967). [CrossRef]  

2. U. Schnars, “Direct phase determination in hologram interferometry with use of digitally recorded holograms,” J. Opt. Soc. Am. A 11(7), 2011–2015 (1994). [CrossRef]  

3. P. Picart and J. C. Li, Digital Holography (ISTE-Wiley, 2012).

4. P. Picart, New Techniques in Digital Holography, (ISTE-Wiley, 2015).

5. M. Karray, C. Poilane, M. Gargouri, and P. Picart, “Digital holographic non-destructive testing of laminate composite,” Opt. Eng. 55(9), 095105 (2016). [CrossRef]  

6. S. Zhang, Z. Xu, B. Chen, L. Yan, and J. Xie, “Sinusoidal phase modulating absolute distance measurement interferometer combining frequency-sweeping and multi-wavelength interferometry,” Opt. Express 26(7), 9273–9284 (2018). [CrossRef]  

7. L. Wang, Y. Wu, X. Wu, and K. Cen, “Measurement of dynamics of laser-induced cavitation around nanoparticle with high-speed digital holographic microscopy,” Exp. Therm. Fluid Sci. 121, 110266 (2021). [CrossRef]  

8. T. Kozacki, M. Mikula-Zdankowska, J. Martinez-Carranza, and M. S. Idicula, “Single-shot digital multiplexed holography for the measurement of deep shapes,” Opt. Express 29(14), 21965–21977 (2021). [CrossRef]  

9. L. Yan, J. Xie, B. Chen, Y. Lou, and S. Zhang, “Absolute distance measurement using laser interferometric wavelength leverage with a dynamic-sideband-locked synthetic wavelength generation,” Opt. Express 29(6), 8344–8357 (2021). [CrossRef]  

10. J. W. Goodman, Speckle Phenomena in Optics: Theory and Applications, (Roberts and Company Publishers, 2007).

11. S. Pradeep and P. Nirmaladevi, “A review on speckle noise reduction techniques in ultrasound medical images based on spatial domain, transform domain and CNN methods,” IOP Conf. Ser.: Mater. Sci. Eng. 1055(1), 012116 (2021). [CrossRef]  

12. D. Kostadin, F. Alessandro, K. Vladimir, and E. Karen, “Image denoising with block matching and 3D filtering,” Proc. SPIE 6064, 606414 (2006). [CrossRef]  

13. J. Pineda, J. L. Bacca, J. Meza, L. A. Romero, and A. G. Marrugo, “SPUD: Simultaneous Phase Unwrapping and Denoising Algorithm for Phase Imaging,” Appl. Opt. 59(13), D81–D88 (2020). [CrossRef]  

14. P. E. Alcaraz, R. S. Ketchum, and P.-A. Blanche, “Robust phase unwrapping algorithm based on enhanced denoising and fringe quality improvement routines,” OSA Continuum 4(2), 633–649 (2021). [CrossRef]  

15. K. Qian, “Windowed Fourier transform for fringe pattern analysis,” Appl. Opt. 43(13), 2695–2702 (2004). [CrossRef]  

16. Y. Song, Y. Zhu, and X. Du, “Dynamic residual dense network for image de-noising,” Sensors 19(17), 3809 (2019). [CrossRef]  

17. K. Yan, L. Chang, M. Andrianakis, V. Tornari, and Y. Yu, “Deep learning-based wrapped phase de-noising method for application in digital holographic speckle pattern interferometry,” Appl. Sci. 10(11), 4044 (2020). [CrossRef]  

18. S. Montresor, M. Tahon, A. Laurent, and P. Picart, “Computational de-noising based on deep learning for phase data in digital holographic interferometry,” APL Photonics 5(3), 030802 (2020). [CrossRef]  

19. M. Tahon, S. Montresor, and P. Picart, “Towards reduced CNNs for de-noising phase images corrupted with speckle noise,” Photonics 8(7), 255 (2021). [CrossRef]  

20. K. Yan, Y. Yu, T. Sun, A. Asundi, and Q. Kemao, “Wrapped phase denoising using convolutional neural networks,” Opt. Laser Eng. 128, 105999 (2020). [CrossRef]  

21. B. Qiu, Y. You, Z. Huang, X. Meng, Z. Jiang, C. Zhou, G. Liu, K. Yang, Q. Ren, and Y. Lu, “N2NSR-OCT: Simultaneous denoising and super-resolution in optical coherence tomography images using semisupervised deep learning,” J. Biophoton. 14, e202000282 (2021). [CrossRef]  

22. G. E. Hinton, S. Osindero, and Y. W. Teh, “A fast learning algorithm for deep belief nets,” Neural Comput. 18(7), 1527–1554 (2006). [CrossRef]  

23. M. Christopher, J. A. Proudfoot, C. Bowd, A. Belghith, M. H. Goldbaum, J. Rezapour, S. Moghimi, R. N. Weinreb, M. A. Fazio, C. A. Girkin, C. G. De Moraes, J. M. Liebmann, and L. M. Zangwill, “Deep learning models based on unsegmented OCT RNFL circle scans provide accurate detection of glaucoma and high resolution prediction of visual field damage,” Ophthalmology 61(7), 1439 (2020). [CrossRef]  

24. Q. Li, S. Li, Z. He, H. Guan, R. Chen, Y. Xu, T. Wang, S. Qi, J. Mei, and W. Wang, “DeepRetina: Layer segmentation of retina in OCT images using deep learning,” Transl. Vis. Sci. Technol. 9(2), 61 (2020). [CrossRef]  

25. Z. Dong, G. Liu, G. Ni, J. Jerwick, L. Duan, and C. Zhou, “Optical coherence tomography image denoising using a generative adversarial network with speckle modulation,” J. Biophotonics 13, e201960135 (2020). [CrossRef]  

26. Y. Zhou, K. Yu, M. Wang, Y. Ma, Y. Peng, Z. Chen, W. Zhu, F. Shi, and X. Chen, “Speckle Noise Reduction for OCT Images Based on Image Style Transfer and Conditional GAN,” IEEE J. Biomed. Health Inform. 26(1), 139–150 (2022). [CrossRef]  

27. K. Wang, M. Zhang, J. Tang, L. Wang, L. Hu, X. Wu, W. Li, J. Di, G. Liu, and J. Zhao, “Deep learning wavefront sensing and aberration correction in atmospheric turbulence,” PhotoniX 2(1), 8 (2021). [CrossRef]  

28. Y. He, Z. Liu, Y. Ning, J. Li, X. Xu, and Z. Jiang, “Deep learning wavefront sensing method for Shack-Hartmann sensors with sparse sub-apertures,” Opt. Express 29(11), 17669–17682 (2021). [CrossRef]  

29. K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian denoiser: Residual learning of deep CNN for image de-noising,” IEEE Trans. on Image Process. 26(7), 3142–3155 (2017). [CrossRef]  

30. K. Yan, Y. Yu, C. Huang, L. Sui, K. Qian, and A. Asundi, “Fringe pattern denoising based on deep learning,” Opt. Commun. 437, 148–152 (2019). [CrossRef]  

31. F. Hao, C. Tang, M. Xu, and Z. Lei, “Batch denoising of ESPI fringe patterns based on convolutional neural network,” Appl. Opt. 58(13), 3338–3346 (2019). [CrossRef]  

32. S. Montresor and P. Picart, “Quantitative appraisal for noise reduction in digital holographic phase imaging,” Opt. Express 24(13), 14322–14343 (2016). [CrossRef]  

33. S. Montresor and P. Picart, “On the assessment of de-noising algorithms in digital holographic interferometry and related approaches,” Appl. Phys. B 128(3), 59 (2022). [CrossRef]  

34. G. Choi, D. Ryu, Y. Jo, Y. S. Kim, W. Park, H. S. Min, and Y. Park, “Cycle-consistent deep learning approach to coherent noise reduction in optical diffraction tomography,” Opt. Express 27(4), 4927–4943 (2019). [CrossRef]  

35. T. L. Bobrow, F. Mahmood, M. Inserni, and N. J. Durr, “DeepLSR: A deep learning approach for laser speckle reduction,” Biomed. Opt. Express 10(6), 2869–2882 (2019). [CrossRef]  

36. L. Bargsten and A. Schlaefer, “SpeckleGAN: A generative adversarial network with an adaptive speckle layer to augment limited training data for ultrasound image processing,” Int. J. Comput. Assist. Radiol. Surg. 15(9), 1427–1436 (2020). [CrossRef]  

37. M. Wang, W. Zhu, K. Yu, Z. Chen, F. Shi, Y. Zhou, Y. Ma, Y. Peng, D. Bao, S. Feng, L. Ye, D. Xiang, and X. Chen, “Semi-supervised capsule cGAN for speckle noise reduction in retinal OCT images,” IEEE Trans. Med. Imag. 40(4), 1168–1183 (2021). [CrossRef]  

38. G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 2261–2269.

39. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015 (2015), pp. 234–241.

40. D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros, “Context encoders: Feature learning by inpainting,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 2536–2544.

41. P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 5967–5976.

42. H. Lan, A. W. Toga, and F. Sepehrband, “Three-dimensional self-attention conditional GAN with spectral normalization for multimodal neuroimaging synthesis,” Magn. Reson. Med. 86(3), 1718–1733 (2021). [CrossRef]  

43. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004). [CrossRef]  

44. P.-W. Hsieh, P.-C. Shao, and S.-Y. Yang, “A regularization model with adaptive diffusivity for variational image denoising,” Sig. Process. 149, 214–228 (2018). [CrossRef]  




Figures (13)

Fig. 1.
Fig. 1. Schematic of the DHI system. BS1 and BS2 represent beam splitters; M1, M2, and M3 represent mirrors; BE1 and BE2 represent beam expanders; L1 and L2 represent convex lenses; O represents the object wave; R represents the reference wave; CCD refers to the charge-coupled device.
Fig. 2.
Fig. 2. (a) phase wrapped at (-π, π] generated from DHSPI, (b) fringe pattern of the real part of a), which is also known as the cosine part, (c) fringe patterns of the imaginary part of a), which is also known as the sine part in the range of (−1, 1].
Fig. 3.
Fig. 3. (a) Scheme for generating speckled phase data with controlled speckle sizes by setting the number of pixels per speckle grain [32], (b) statistics of SNR distribution of three kinds of simulated speckle noise images datasets, each of which has 1800 simulated images.
Fig. 4.
Fig. 4. Multi-aperture simulation of speckle images with different speckle grain sizes.
Fig. 5.
Fig. 5. Training a conditional GAN to map noisy to clean data. The discriminator network D learns to classify between the fake {x, G(x)} and real {x, y} couples, while the generator network G learns to fool the discriminator D.
Fig. 6.
Fig. 6. Architecture of the discriminator network D of the conditional GAN.
Fig. 7.
Fig. 7. Architecture of the generator network G of the conditional GAN.
Fig. 8.
Fig. 8. SNR statistics of the training, validation, and testing sets.
Fig. 9.
Fig. 9. Evolutions of (a) PSNR and (b) MAE during the training process.
Fig. 10.
Fig. 10. Statistics for the testing set comparisons: (a) rankings based on PSNR, (b) mean of the PSNR of the mix pixels, (c) rankings based on RMSE, (d) mean of the RMSE of the mix pixels, (e) rankings based on SSIM, and (f) mean of the SSIM of the mix pixels.
Fig. 11.
Fig. 11. Noise reduction results for different phase densities with a speckle radius of 6 pixels. (a), (c), and (e) are the denoising results of each method on speckle noise images with various phase densities. (b), (d), and (f) are line profiles from (a), (c), and (e), respectively, comparing the effect of the denoising methods against the input (X, noisy) and ground truth (Y, noise-free) images.
Fig. 12.
Fig. 12. Schematic diagram of deformation field measurement by DHI and simulation. (a) The measured object with the deformation field, (b) schematic diagram of DHI recording and reconstruction, (c) digital simulation of the deformation field and the ideal data, and (d) denoising of the speckle noise data obtained by experiment.
Fig. 13.
Fig. 13. Phase-wrapping diagrams caused by the strain change field recorded by DHI. (a) Noisy data obtained by reconstruction at the accurate distance of 1920 mm; (d) noisy data obtained by reconstruction at an erroneous distance of 2400 mm, which contains more speckle noise for comparison; (b) and (c) locally enlarged comparative images of the accurately reconstructed data denoised by the different algorithms; (e) and (f) the corresponding comparative images of the erroneously reconstructed data.

Tables (4)

Table 1. Parameters for training the network models.

Table 2. Performance for various losses, evaluated on the validation set.

Table 3. Quantitative appraisal of challenger algorithms.

Table 4. SF of the noisy data and of the results of the five denoising methods.

Equations (12)


$I = |R + O|^2 = |R|^2 + |O|^2 + (R^{*}O + RO^{*}) = a_r^2 + a_o^2 + 2 a_r a_o \cos(\phi_r - \phi_o),$

$\mathcal{F}[I(x,y)] = G_o(u,v) + G(u + \xi, v + \eta) + G^{*}(u - \xi, v - \eta),$

$\phi(x,y) = \arctan \frac{\operatorname{Im}\left\{\mathcal{F}^{-1}[G(u,v)]\right\}}{\operatorname{Re}\left\{\mathcal{F}^{-1}[G(u,v)]\right\}}.$

$\exp[j\,\Delta\phi(x,y)] = \cos[\Delta\phi(x,y)] + j\sin[\Delta\phi(x,y)].$

$O_s(x,y) = \mathcal{F}^{-1}\left\{ d(k_1,k_2) \times \mathcal{F}[O(x,y)] \right\},$

$G^{*} = \arg\min_{G}\left[\max_{D} L_{cGAN}(G,D) + \lambda L_{PSNR}(G) + \beta L_{1}(G)\right],$

$L_{cGAN}(G,D) = \mathbb{E}_{x,y}[\log D(x,y)] + \mathbb{E}_{x}[\log(1 - D(x,G(x)))],$

$L_{PSNR}(G) = \mathbb{E}_{x,y}\left[\left(10\log_{10}\frac{\|y\|^2 (MN)}{\|y - G(x)\|_2^2}\right)^{-1}\right],$

$L_{1}(G) = \mathbb{E}_{x,y}\left[\|y - G(x)\|_1\right].$

$SNR(v,w) = \mathbb{E}_{m \in D_k}\left[10\log_{10}\frac{\|v_m - \bar{v}_m\|_2^2}{\|v_m - w_m\|_2^2}\right],$

$PSNR(v,w) = \mathbb{E}_{m \in D_k}\left[10\log_{10}\frac{\|v_m\|^2 (MN)}{\|v_m - w_m\|_2^2}\right],$

$SSIM(v,w) = \mathbb{E}_{m \in D_k}\left[\frac{(2\mu_v\mu_w + c_1)(2\sigma_{vw} + c_2)}{(\mu_v^2 + \mu_w^2 + c_1)(\sigma_v^2 + \sigma_w^2 + c_2)}\right].$