Optica Publishing Group

Ghost imaging with Bayesian denoising method

Open Access

Abstract

We propose a Bayesian denoising method to improve the quality of ghost imaging. The proposed method achieved the highest PSNR and SSIM in both binary and gray-scale targets with fewer measurements. Experimentally, it obtained a reconstructed image of a USAF target where the PSNR and SSIM of the image were up to 12.80 dB and 0.77, respectively, whereas those of traditional ghost images were 7.24 dB and 0.28 with 3000 measurements. Furthermore, it was robust against additive Gaussian noise. Thus, this method could make the ghost imaging technique more feasible as a practical application.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Ghost imaging (GI) is an imaging technique that utilizes the intensity correlation of two (or more) light beams. In a conventional imaging system, an image is acquired by converting the energy of the incident light into an electrical signal via a spatially resolving photo-sensor such as a charge-coupled device (CCD) camera or a complementary metal-oxide-semiconductor (CMOS) camera. Unlike existing imaging techniques, GI, being based on the correlation of two beams, makes it possible to acquire an image indirectly by separating the imaging and detection processes. The first experimental GI was implemented with entangled photon pairs in 1995 [1]. Since then, various GI systems have been proposed depending on the light source, such as pseudo-thermal light generated by a rotating ground glass (RGG) [2,3], computational light patterns using a spatial light modulator (SLM) or a digital micro-mirror device (DMD) [4,5], and radiation sources including X-rays [6,7], neutrons [8], ions and electrons [9,10]. GI can be categorized into two systems: quantum GI, which uses quantum optics such as entangled photon pairs, and classical GI, which is based on the spatial correlation of the two beams. Quantum GI has the benefit of resistance against jamming, but it is tricky to develop a verification system for entangled photons, and producing entangled photons still suffers from low efficiency. Computational GI, a kind of classical GI, replaces the camera with an SLM or DMD, which generates a beam with a programmed pattern and makes it possible to acquire an image with only a single-pixel photo-sensor [11]. Though this system has structural simplicity, it strongly depends on expensive devices, is more vulnerable to noise than conventional array detectors, and is not suitable for remote sensing [12,13]. Classical GI using pseudo-thermal light is widely used due to its convenient configuration, which consists of only a laser and a slowly rotating diffuser.

GI has some advantages over conventional imaging techniques owing to its robustness against noise from a scattering medium and its high signal-to-noise ratio (SNR) in photon-limited environments. However, it requires a large amount of correlated data to reconstruct an image with a high SNR, so it takes a long time to obtain a high-quality image. Various approaches to make up for this weakness have been intensively studied: modifying the mutual-correlation equation, converting the correlation representation into an optimization of an inverse-matrix solution, or coupling GI with deep learning. Differential GI (DGI) and normalized GI (NGI) are representative methods based on modified correlation equations that reduce the noise from fluctuations of the total light intensity [14,15]. In addition, uniformly weighted GI (UWGI) and uniformly weighted differential GI (UWDGI) have been proposed to correct noise caused by a non-uniform intensity of the beam pattern [16]. Although these achieved better-quality ghost images, they had limitations for practical application. On the other hand, the problem of GI can be considered as an inverse problem Y = AX, where A is the spatially recorded beam pattern, Y is the corresponding measurements of a single-pixel detector and X is the spatial information of an unknown object. Compressive sensing can be exploited to optimize the solution of this inverse problem, which is called compressive GI (CGI) [17,18]. This solution performed better in the quality of the reconstructed ghost image than the previous correlation-based methods with the same number of measurements, but it was rather vulnerable to external noise. Recently, deep learning based GI has been intensively studied and has shown respectable performance with regard to the SNR of the image, achieving high-SNR images in highly scattering conditions [19,20]. However, owing to the inherent characteristics of supervised learning, GI with deep learning is effective only on objects similar to those it was trained on.

In the GI approaches above, both the beam patterns and the corresponding correlated signals of the other photo-sensor are independently accumulated over iterative measurements. Although the correlated data obtained at each measurement step do not directly reveal image information, the data set clearly contains information on the target image that we cannot intuitively recognize. The motivation of the proposed method stems from the intention to feed such latent information from the previous measurement data into the next reconstruction step as prior information, eventually making the unknown object visible more quickly. One means of realizing this concept is a process known as Bayesian inference.

In this paper, we demonstrate a Bayesian denoising method for GI that enhances image quality with less data. Through a Bayesian inference framework, we inferred the original noiseless image from a noisy image reconstructed from a set of measured correlation data. We used a Markov random field to mathematically model the posterior probability of the denoised image and chose an Ising model to lessen the computational complexity of the statistical inference, expressing the posterior of the denoised image as a mask. The estimated mask was then filtered by a Gaussian kernel and projected onto the noisy image. The denoised image from the previous step was exploited as prior information for computing the next posterior of the denoised image hidden in the noisy image reconstructed from the subsequent data. The proposed method was validated with simulated and experimental GI on objects of various shapes. Image quality was quantitatively evaluated using the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). The proposed method showed outstanding performance on both metrics compared to existing GI approaches.

2. Methods

2.1 Bayesian image denoising process using Markov random field

Bayesian inference, a method of statistical reasoning based on Bayes’ theorem expressed as Eq. (1), is an important technique to infer the probability of a hypothesis for an unknown parameter X when a parameter Y is obtained from measurement.

$$P(X|Y) = \frac{{P(X,Y)}}{{\int {P(X)P(Y|X)dX} }} = \frac{{P(Y|X)P(X)}}{{P(Y)}}$$
where P(Y|X) and P(X), which result from factorizing the joint distribution P(X,Y) by the chain rule, denote the likelihood function and the prior distribution, respectively, and P(Y) is the normalizing constant, obtained by integrating the joint probability over all possible latent variables X.

Here X is designated as the latent image, i.e., the original image, while Y is a measured image containing noise. Generally, a natural image has similar intensity between adjacent pixels except at some edges, according to the distribution of gradients in neighboring pixels [21,22]. However, it can be stained by various noises that change the intensity of the original image. The Markov random field (MRF) is suitable for probabilistically modeling such intrinsic characteristics of an image. The MRF is a graphical model of a joint probability distribution. It is composed of an undirected graph G = (N, ɛ), where N is a set of nodes representing random variables in the graph and ɛ denotes the set of edges connecting the nodes. A noteworthy feature of the MRF is that it makes a multi-variable joint probability distribution approximately computable using conditional independence based on the Markov property [23]. Namely, in the MRF, the probability of a node depends only on the values of the adjacent nodes connected to it by an edge. For example, let an image $X = \{{{x_1} \cdots {x_n}} \}$ consist of n pixels, each pixel considered as a node, i.e., a random variable. If the number of nodes n is 10000 and the variables take a grayscale value in the range 0 to L-1 (generally L=256), the computational complexity of calculating the joint probability distribution $P({{x_1} \cdots {x_n}} )$ is $L^{10000}$, making it an NP-hard problem. Applying the MRF to this joint distribution significantly simplifies it to a linear complexity of $256 \times 10000$.

Figure 1 shows a schematic of a hidden Markov random field (HMRF) for an image denoising model. The HMRF is a Markov model consisting of two elements, a hidden state and an observed state, and is often used to infer the hidden state based on the observed results. Let $X = \{ {x_{11}} \cdots {x_{hw}}\}$ and $Y = \{ {y_{11}} \cdots {y_{hw}}\}$ be n-dimensional vectors corresponding to the original image and the noisy image, respectively, where the total number of pixels is $n = h \times w$. Under the structure of the HMRF, the hidden state of a node (pixel) in the original image can be described by the conditional probability $P(X = \{ {x_{11}} \cdots {x_{hw}}\} |Y)$, which corresponds to the posterior of Bayes' theorem, Eq. (1). Using the MRF property, one pixel of the original image ${x_{ij}}$ depends only on its neighboring nodes $N({{x_{ij}}} )$ and the corresponding observed node ${y_{ij}}$, while it is conditionally independent of the rest of the nodes $R({{x_{ij}}} )$ in X. Accordingly, the probabilistic model for image denoising can be expressed by the decomposition in the following Eq. (2).

$$\begin{aligned} P(X = \{ {x_{11}} \cdots {x_{hw}}\} |y) &= \prod\limits_{i = 1}^h {\prod\limits_{j = 1}^w {P({x_{ij}} \bot R({x_{ij}})|N({x_{ij}}),{y_{ij}})} } = \frac{1}{{Z(y)}}\prod\limits_{c \in C} {{\psi _c}({x_c},{y_c})} \\ &= \frac{1}{{Z(y)}}\exp [ - \alpha \sum\limits_{i = 1}^h {\sum\limits_{j = 1}^w {\phi ({x_{ij}},{y_{ij}})} } - \beta \sum\limits_{x_{ij}^{\prime} \in N({x_{ij}})} {\psi ({x_{ij}},x_{ij}^{\prime})} ] \end{aligned}$$
where $Z(y) = \sum\nolimits_{{x_{ij}}} {\exp [ - \alpha \sum\nolimits_{i = 1}^h {\sum\nolimits_{j = 1}^w {\phi ({x_{ij}},{y_{ij}})} } - \beta \sum\nolimits_{x_{ij}^{\prime} \in N({x_{ij}})} {\psi ({x_{ij}},x_{ij}^{\prime})} ]}$, h is the number of pixels in height, w is the number of pixels in width, α and β are user-determined parameters called hyperparameters, and C is a clique, that is, a subset of fully connected nodes, also called a Markov blanket.

Fig. 1. Schematic of the hidden Markov random field for an image denoising model.

The factorization is in terms of local potential functions defined over cliques. ${\psi _c}({x_c},{y_c})$ is a potential function of the maximal clique and can be divided into two terms: $\phi ({x_{ij}},{y_{ij}})$ and $\psi ({x_{ij}},x_{ij}^{\prime})$. The first term, $\phi ({x_{ij}},{y_{ij}})$, denotes the dependency between the intensity of a node in the original image X and that of the corresponding node in the noisy image Y, and is called the data cost function (or unary term). $\psi ({x_{ij}},x_{ij}^{\prime})$ is a smoothness function (or binary term) representing the similarity between a pixel ${x_{ij}}$ in the original image and its neighboring pixels $x_{ij}^{\prime}$. These two terms can be regarded as the likelihood function and the prior distribution in Eq. (1), thereby embodying a mathematical model that describes a Bayesian denoising framework in terms of the nature of an ideal image.

Although the locally conditional probability of the MRF substantially reduces the computational complexity, the posterior distribution is still too complex to solve. In particular, the range of latent pixel values (L=256 in a gray-scale image) strongly influences the computational complexity. Hence, we treat the image as a binary image, inspired by the Ising model, which simply describes the magnetic properties of a material with spins of only -1 or 1. By defining the data cost function as $\phi ({x_{ij}},{y_{ij}}) = {x_{ij}}{y_{ij}}$ and the smoothness function as $\psi ({x_{ij}},x_{ij}^{\prime}) = {x_{ij}}x_{ij}^{\prime}$, Eq. (2) can be expressed as Eq. (3).

$$\prod\limits_{i = 1}^h {\prod\limits_{j = 1}^w {P({x_{ij}}|x_{ij}^{mb})} } = \frac{1}{{Z(y)}}\exp [ - \alpha \sum\limits_{i = 1}^h {\sum\limits_{j = 1}^w {{x_{ij}}{y_{ij}}} } - \beta \sum\limits_{x_{ij}^{\prime} \in N({x_{ij}})} {{x_{ij}}x_{ij}^{\prime}} ]$$
where $x_{ij}^{mb}$ is a Markov blanket containing the locally dependent nodes of ${x_{ij}}$, e.g., $x_{22}^{mb} = \{ {x_{12}},{x_{21}},{x_{23}},{x_{32}},{y_{22}}\}$ in a pairwise HMRF model as shown in Fig. 1.

The probability that one pixel of the original image ${x_{ij}}$ has a white color is described as the following Eq. (4).

$$P({x_{ij}} = 1|x_{ij}^{mb}) = \frac{{\exp [\alpha {y_{ij}} + \beta \sum\limits_{x_{ij}^{\prime} \in N({x_{ij}})} {x_{ij}^{\prime}} ]}}{{\exp [\alpha {y_{ij}} + \beta \sum\limits_{x_{ij}^{\prime} \in N({x_{ij}})} {x_{ij}^{\prime}} ] + \exp [ - \alpha {y_{ij}} - \beta \sum\limits_{x_{ij}^{\prime} \in N({x_{ij}})} {x_{ij}^{\prime}} ]}} = \frac{1}{{1 + \exp ( - 2{w_{ij}})}}$$
where ${w_{ij}} = \alpha {y_{ij}} + \beta \sum\limits_{x_{ij}^{\prime} \in N({x_{ij}})} {x_{ij}^{\prime}}$.

Thus, the Ising model allows us to simplify the marginal probability.

2.2 Gibbs sampling

An exact calculation of the posterior distribution is intractable because computing its normalizing constant, a multi-dimensional integration, is too difficult. Fortunately, this impediment has been alleviated by the enormous development of computational techniques. Markov chain Monte Carlo (MCMC) is a common method to obtain an approximate posterior distribution. MCMC is a sampling method that draws latent values from a stationary distribution and then corrects these draws to better approximate the target distribution [24]. The Metropolis-Hastings (MH) algorithm is a representative MCMC method with an acceptance (or rejection) rule that converges the proposal distribution to the desired target posterior distribution. However, as the number of random variables to be estimated in the target distribution increases, the accuracy of the approximate distribution decreases. Moreover, the performance of MCMC is sensitive to the choice of proposal distribution. In 1984, the brothers S. Geman and D. Geman proposed the Gibbs sampling (or Gibbs sampler) method based on MCMC to approximate a multivariate probability distribution [25]. In Gibbs sampling, the sample drawn in each trial is always accepted because the proposal distribution is taken to be the full conditional distribution. Considering the peculiarity of the MRF in terms of locally conditional probability, this sampling method is particularly well suited [26]. The following pseudo-code presents the sampling process for calculating the approximate posterior for Bayesian image denoising. Every node in the noisy image is sampled in each iteration step. The posterior distribution described in Eq. (4) is then updated with the Monte Carlo method, which counts the total occurrences of the event that ${x_{ij}}$ has the white intensity 1 in the binary image.

[Pseudo-code of the Gibbs sampling process (image oe-29-24-39323-i001)]
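The sampling loop can be sketched in Python as follows. This is a minimal illustration, not the authors' implementation: it assumes a binarized input image with intensities in {-1, +1}, a 4-connected neighborhood, the default hyperparameters α = β = 0.5, and arbitrary iteration and burn-in counts.

```python
import numpy as np

def gibbs_denoise(y, alpha=0.5, beta=0.5, n_iter=50, burn_in=10, seed=0):
    """Approximate the posterior P(x_ij = 1 | Markov blanket) by Gibbs
    sampling on an Ising-model HMRF. y is a binarized noisy image in {-1, +1}."""
    rng = np.random.default_rng(seed)
    h, w = y.shape
    x = rng.choice([-1, 1], size=(h, w))   # initial random image X1
    counts = np.zeros((h, w))              # occurrences of the event x_ij == +1
    for it in range(n_iter):
        for i in range(h):
            for j in range(w):
                # sum over the 4-connected neighbours N(x_ij)
                nb = 0
                if i > 0:     nb += x[i - 1, j]
                if i < h - 1: nb += x[i + 1, j]
                if j > 0:     nb += x[i, j - 1]
                if j < w - 1: nb += x[i, j + 1]
                w_ij = alpha * y[i, j] + beta * nb
                p1 = 1.0 / (1.0 + np.exp(-2.0 * w_ij))   # Eq. (4)
                x[i, j] = 1 if rng.random() < p1 else -1
        if it >= burn_in:                  # discard early (non-stationary) draws
            counts += (x == 1)
    return counts / (n_iter - burn_in)     # Monte Carlo posterior estimate
```

Because each draw uses the full conditional distribution of Eq. (4), every sample is accepted, in contrast to the acceptance test of Metropolis-Hastings.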

2.3 Procedure of Bayesian denoising method for ghost imaging (BDGI)

Figure 2 shows the Bayesian denoising procedure for improving the quality of ghost images. After acquiring correlated data through a series of measurements, ghost images were reconstructed by conventional methods. This reconstruction was conducted on accumulated data sets whose sizes were determined by dividing the total number of measurements into a certain number of intervals. For example, we obtained 20 noisy images, Y={Y1 … Y20}, by accumulating 500 measurements at each interval from 0 to 10,000, where Yk (k = 1…20) denotes the noisy image reconstructed from the data accumulated up to the kth interval. An initial random image X1, used to infer the original image of the first noisy image, was generated, and the hyperparameters (α and β) were pre-determined. Both parameters were set to 0.5; the optimization of these parameters was not included in this study.

Fig. 2. Flowchart of Bayesian denoising method for ghost imaging.

Before performing Gibbs sampling based on the Ising model, we binarized the noisy image and then processed the initial random image. To binarize a noisy image, a global fixed thresholding approach known as the Otsu method [27] was used to find the threshold value that maximizes the separability of the two classes in the intensity histogram of the input image. There were three steps in the denoising module. First, the posterior probability that each pixel of the original image is white (intensity 1) was computed by Gibbs sampling with the full conditional probability described in Section 2.2. Then, for each pixel, if the posterior probability was higher than a certain threshold, the binary mask M was assigned an intensity of 1, and otherwise 0. The threshold, interpreted as the belief that the intensity of a pixel is 1, was set to 0.95. To soften artificially abrupt values between adjacent pixels, the mask was filtered by a Gaussian kernel. Finally, the denoised image was obtained by multiplying the Gaussian-filtered mask with the input noisy image. The previously denoised image was exploited as prior information in the subsequent denoising process, and the degree of influence of the prior on the later reconstruction can be controlled by the hyperparameter β in Eq. (4). The image acquired in each iteration was quantitatively evaluated against the ground-truth image and saved as an image file.
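A rough sketch of one denoising step is given below. It is an illustration of the three-step module under stated assumptions, not the authors' code: Otsu's threshold is re-implemented with numpy, the Gaussian kernel is fixed at width 5 and sigma 2.5 (the values used in Section 3.1), and the function names are ours.

```python
import numpy as np

def otsu_threshold(img):
    """Global threshold maximizing the between-class variance (Otsu's method)."""
    hist, edges = np.histogram(img.ravel(), bins=256)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                    # class-0 (below threshold) counts
    w1 = img.size - w0                      # class-1 counts
    csum = np.cumsum(hist * centers)
    mu0 = np.where(w0 > 0, csum / np.maximum(w0, 1), 0)
    mu1 = np.where(w1 > 0, (csum[-1] - csum) / np.maximum(w1, 1), 0)
    var_between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(var_between)]

def gaussian_kernel(size=5, sigma=2.5):
    """Normalized 2D Gaussian kernel (width 5, sigma 2.5 as in Section 3.1)."""
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def denoise_step(noisy, posterior, threshold=0.95):
    """One BDGI denoising step: posterior -> binary mask -> Gaussian
    smoothing -> projection onto the noisy image."""
    mask = (posterior > threshold).astype(float)  # belief that the pixel is white
    k = gaussian_kernel()
    padded = np.pad(mask, 2, mode="edge")         # pad = kernel radius
    smooth = np.zeros_like(mask)
    h, w = mask.shape
    for i in range(h):                            # plain 2D convolution
        for j in range(w):
            smooth[i, j] = np.sum(padded[i:i + 5, j:j + 5] * k)
    return noisy * smooth                         # denoised image
```

In the full procedure, `posterior` would come from the Gibbs sampling of Section 2.2 and the returned image would serve as the prior for the next interval's reconstruction.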

2.4 Experimental setup and reconstruction of ghost images

Figure 3 shows a simplified schematic of the pseudo-thermal light GI system. The pseudo-thermal light source consisted of a continuous wave laser (PicoQuant, LDH-D-TA-595) with a wavelength of 595 nm, a slowly rotating ground glass (RGG) and a scattering medium containing tap water. The coherent beam striking the rough surface of the ground glass was converted into an incoherent pattern, called a random speckle pattern, which is suitable for second-order intensity correlation. Though the RGG is effective at generating incoherent speckle patterns, a similar pattern can sometimes be reproduced after a whole tour of the disk [28]. The tap water was used as a scattering medium to avoid repeated patterns and generate authentically random speckle patterns. The speckle pattern was then collimated by an aperture and incident on a 50:50 beam splitter (BS), which divided the beam into two twin speckle beams that were spatially correlated. The distances Z1 and Z2 were set to 100 mm. The speckle beams from the BS traveled along two different paths, the reference arm and the object arm. In the reference arm, the reflected pattern IR(x,y) was recorded with spatial intensity resolution by a CCD camera (D1, AVT, Prosilica GT 1920). In the object arm, the beam pattern IO(x,y) illuminated an object (Thorlabs, USAF 1951 target) positioned at twice the focal length (f = 50 mm), and the transmitted pattern was collected by a bucket detector. We used an identical CCD camera as the bucket detector by summing the intensity distribution of the transmitted pattern. The pixel size of the CCD cameras used in the experiment was $4.54 \times 4.54\; \mu {m^2}$. The data from both cameras were acquired by a correlator developed in LabVIEW.

Fig. 3. Schematic of a pseudo-thermal light ghost imaging system.

Under the same experimental conditions, ghost images were reconstructed by conventional methods: correlation-based GI (TGI, DGI, NGI, UWGI, UWDGI) and compressed sensing-based GI (CGI). For the $i$th measurement of the correlated beam signal, the CCD camera D1 in the reference arm recorded a series of speckle patterns ${I_R}(t,\overrightarrow {{x_{obj}}} )$, and the intensity distribution of the transmitted pattern in the object arm was collected as follows:

$${I_B}(t) = \int {{I_B}(t,\overrightarrow {{x_{obj}}} )\,{{|{T(\overrightarrow {{x_{obj}}} )} |}^2}} \,d\overrightarrow {{x_{obj}}}$$
where ${|{T(\overrightarrow {{x_{obj}}} )} |^2}$ is the response function (transmission or reflection) of the light beam interacting with the object.

When the CCD camera (D1) and the object are positioned at the same distance from the aperture, ${I_R}(t,\overrightarrow {{x_{obj}}} )$ and ${I_B}(t,\overrightarrow {{x_{obj}}} )$ are identical at the same time. TGI can be conducted by Eq. (6), based on the spatial correlation between the two patterns in each path. It is the covariance of the vector ${I_R}(t,\overrightarrow {{x_{obj}}} )$ and the scalar value ${I_B}(t)$, where the ghost image accumulates the measured patterns weighted by the light intensity containing the object information.

$$\begin{aligned} &TGI(\overrightarrow {{x_{obj}}} ) = E[({I_R}(t,\overrightarrow {{x_{obj}}} ) - E[{I_R}(t,\overrightarrow {{x_{obj}}} )]) \times ({I_B}(t) - E[{I_B}(t)])]\\ &= \frac{1}{N}\sum\limits_{i = 1}^N {\{ [{I_R}(i,\overrightarrow {{x_{obj}}} ) - < {I_R}(i,\overrightarrow {{x_{obj}}} ) > ] \times [{I_B}(i) - < {I_B}(i) > ]\} } \end{aligned}$$

To improve the SNR of the ghost image, Eq. (6) can be modified as shown in Eqs. (7)–(10) to reduce the noise from fluctuations of the beam intensity or from non-uniformity of the beam distribution [14–16].

$$DGI(\overrightarrow x ) = \frac{1}{N}\sum\limits_{i = 1}^N {\{ [{I_R}(i,\overrightarrow {{x_{obj}}} ) - < {I_R}(i,\overrightarrow {{x_{obj}}} ) > ] \times [{I_B}(i) - \frac{{ < {I_B}(i) > R(i)}}{{ < R(i) > }}]\} }$$
$$NGI(\overrightarrow x ) = \frac{1}{N}\sum\limits_{i = 1}^N {\{ [{I_R}(i,\overrightarrow {{x_{obj}}} ) - < {I_R}(i,\overrightarrow {{x_{obj}}} ) > ] \times [\frac{{{I_B}(i)}}{{R(i)}} - \frac{{ < {I_B}(i) > }}{{ < R(i) > }}]\} }$$
$$UWGI(\overrightarrow x ) = \frac{1}{N}\sum\limits_{i = 1}^N {\{ [\frac{{{I_R}(i,\overrightarrow {{x_{obj}}} )}}{{ < {I_R}(i,\overrightarrow {{x_{obj}}} ) > }} - 1] \times [{I_B}(i) - < {I_B}(i) > ]\} }$$
$$UWDGI(\overrightarrow x ) = \frac{1}{N}\sum\limits_{i = 1}^N {\{ [\frac{{{I_R}(i,\overrightarrow {{x_{obj}}} )}}{{ < {I_R}(i,\overrightarrow {{x_{obj}}} ) > }} - 1] \times [{I_B}(i) - \frac{{ < {I_B}(i) > R(i)}}{{ < R(i) > }}]\} }$$
where $R(i )$ is the integral intensity of the CCD camera in the reference arm, and N is the number of measurements.
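For reference, the correlation-based reconstructions of Eqs. (6) and (7) can be written compactly with numpy. This is an illustrative sketch, not the authors' LabVIEW correlator; `patterns` plays the role of ${I_R}(i,\overrightarrow {{x_{obj}}} )$ and `bucket` that of ${I_B}(i)$.

```python
import numpy as np

def tgi(patterns, bucket):
    """Traditional GI, Eq. (6): covariance of the reference-arm speckle
    patterns with the bucket signal.
    patterns: (N, h, w) speckle patterns; bucket: (N,) bucket signals."""
    dI = patterns - patterns.mean(axis=0)        # I_R - <I_R>
    dB = bucket - bucket.mean()                  # I_B - <I_B>
    return np.tensordot(dB, dI, axes=1) / len(bucket)

def dgi(patterns, bucket):
    """Differential GI, Eq. (7): the bucket signal is corrected by the
    reference-arm integral intensity R(i) to suppress fluctuations of the
    total light intensity."""
    R = patterns.sum(axis=(1, 2))                # integral intensity R(i)
    dI = patterns - patterns.mean(axis=0)
    dB = bucket - bucket.mean() * R / R.mean()   # I_B - <I_B> R / <R>
    return np.tensordot(dB, dI, axes=1) / len(bucket)
```

As in Section 3.1, a simulated bucket signal for a transmissive object `T` can be generated by summing the illuminating intensities, `bucket = (patterns * T).sum(axis=(1, 2))`.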

On the other hand, the reconstruction of GI can be expressed as an inverse problem by converting the integral in Eq. (5) into the discrete domain as Y = AX, where Y is the integral intensity of the bucket detector (${I_B}(i )$), A contains the intensity distributions ${I_R}(i,\overrightarrow {{x_{obj}}} )$, and X is the unknown object information. To approximately solve this inverse problem for ghost image reconstruction, compressive sensing can be applied as in the following Eq. (11).

$$CGI(\overrightarrow x ) = \mathop {\arg \min }\limits_X \|{Y - AX} \|_2^2 \quad \textrm{s.t.} \quad {\|X \|_0} \le {T_0}$$
where ${\|S \|_p}$ is the ${l_p}$ norm of a matrix S, i.e., $\sqrt[p]{{\sum\nolimits_{i = 1}^N {|{s_i}{|^p}} }}$, $Y \in {\mathbb{R}^{N \times 1}}$, $A \in {\mathbb{R}^{N \times M}}$, $X \in {\mathbb{R}^{M \times 1}}$, N is the number of measurements, and M is the number of pixels.
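The paper does not specify which compressive sensing solver was used for Eq. (11). As one possible sketch (an assumption, not necessarily the authors' choice), the $l_0$-constrained problem can be approximated greedily with orthogonal matching pursuit:

```python
import numpy as np

def cgi_omp(A, Y, T0):
    """Greedy approximation of the l0-constrained problem in Eq. (11) via
    orthogonal matching pursuit: select at most T0 columns (atoms) of A
    that best explain the bucket measurements Y, then least-squares refit.
    A: (N, M) flattened speckle patterns; Y: (N,) bucket signals."""
    N, M = A.shape
    residual = Y.astype(float).copy()
    support = []
    for _ in range(T0):
        # atom most correlated with the current residual
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares fit on the current support
        coef, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        residual = Y - A[:, support] @ coef
    X = np.zeros(M)
    X[support] = coef
    return X
```

The sparsity bound `T0` corresponds to the constraint ${\|X\|_0} \le {T_0}$; in practice it would be chosen from the expected number of non-zero object pixels.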

3. Results

3.1 Performance of BDGI on a binary target

To investigate the effectiveness of BDGI, we simulated a pseudo-thermal GI system using the measured speckle patterns ${I_R}(i,\overrightarrow {{x_{obj}}} )$. A simulated bucket signal ${I_B}(i )$ was generated by summing all intensities of the beam distribution illuminating a transmissive object. The size of both the measured speckle pattern and the object was $100 \times 100$ pixels, and the ghost images of three different objects were reconstructed by the methods described in Eqs. (6)–(11). BDGI was applied to noisy images retrieved by the TGI method. The total amount of correlated data was 1000, which was sequentially used to acquire images by accumulating 100 measurements at each of K=10 steps for conducting BDGI. The hyperparameters (α and β) were identically set to 0.5. Furthermore, the Gaussian kernel width and sigma were set to 5 and 2.5, respectively.

Figures 4–6 show the ghost images reconstructed by the conventional methods, including the correlation-based methods (TGI, DGI, UWDGI), compressed ghost imaging (CGI) and BDGI. Three different objects were exploited to validate the performance of each reconstruction method: a double slit, the simplest vertical structure; the letters ‘GI’, including a curved structure; and the letters ‘KR’, presenting both a diagonal structure and a crooked line. For the double slit, the object was identified with a relatively small amount of data, even with the conventional methods. CGI clearly reconstructed the slits owing to their relatively simple structure, which made it easy to find the approximate solution of the optimization problem based on compressive sensing. Although CGI appears to have a better spatial resolution, the images reconstructed by BDGI were, to the naked eye, definitely clearer than the others using only half of the total data.

Fig. 4. Simulated ghost images depending on the number of measurements (object: double slit).

Fig. 5. Simulated ghost images depending on the number of measurements (object: ‘GI’ letter).

Fig. 6. Simulated ghost images depending on the number of measurements (object: ‘KR’ letter).

To quantitatively estimate image quality, the PSNR and SSIM were used in this study. The PSNR is an image quality metric representing the ratio between the maximum possible intensity in an image and the mean square error (MSE), the cumulative squared error between the ground-truth image and the noisy image. It is defined as the following Eq. (12).

$$PSNR = 10{\log _{10}}(\frac{{MA{X^2}}}{{MSE}})$$
where MAX is the maximum pixel value of the image, and MSE indicates the mean square error between the original image and the reconstructed image.

On the other hand, the PSNR has the limitation that it cannot fully capture the perceived quality of the obtained image. Wang et al. proposed the SSIM, a method for assessing image quality that describes changes in structural similarity between images [29]. The SSIM tracks human perception better than pixel-wise intensity differences and is described as:

$$SSIM(x,y) = \frac{{(2{\mu _x}{\mu _y} + {c_1})(2{\sigma _{xy}} + {c_2})}}{{(\mu _x^2 + \mu _y^2 + {c_1})(\sigma _x^2 + \sigma _y^2 + {c_2})}}$$
where x and y are the original image and the reconstructed image respectively, ${\mu _x}$ (${\mu _y}$) is the average intensity of the pixels in the corresponding image, $\sigma _x^2$ ($\sigma _y^2$) is the variance of the pixel intensity in the corresponding image, ${\sigma _{xy}}$ is the covariance of the pixel intensity between the two images, and ${C_1}$ and ${C_2}$ are regularization parameters, which can be expressed as ${C_1} = {({k_1}L)^2}$ and ${C_2} = {({k_2}L)^2}$, where generally ${k_1} = 0.01,{k_2} = 0.03$ by default and L is the dynamic range of the pixel values (${2^{bits\; per\; pixel}} - 1$).
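Both metrics can be computed directly from Eqs. (12) and (13). The sketch below uses a single global window for the SSIM rather than the usual sliding-window variant, which is an assumption made for brevity:

```python
import numpy as np

def psnr(ref, img, max_val=255.0):
    """Peak signal-to-noise ratio, Eq. (12)."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(x, y, L=255.0, k1=0.01, k2=0.03):
    """Single-window SSIM as in Eq. (13), computed over the whole image.
    C1 = (k1*L)^2 and C2 = (k2*L)^2 with the default k1, k2."""
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

For identical images the SSIM equals 1, and it decreases toward (or below) 0 as the structural agreement degrades.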

Figure 7 presents the PSNRs and SSIMs of the reconstructed ghost images for each object with different amounts of measurement data. For all methods, both image quality indexes improved as the number of measurements increased, but the growth of BDGI was the steepest. Although CGI clearly performed best among the conventional reconstruction methods for all objects, BDGI achieved superior PSNR and SSIM for all objects compared to the other methods.

Fig. 7. Performance of simulated GI system with several reconstruction methods with different numbers of measurements: PSNR (a-c) and SSIM (d-f) of corresponding images as a function of the number of measured data. See Data File 1 for underlying values.

In addition to the simulation study, the classical GI system described in Section 2.4 was built to validate the effectiveness of BDGI on experimentally retrieved ghost images. Ten thousand data items were measured for the reconstruction of an object comprising the letter “6” and a triple slit in the USAF target (Group #1, Element #6). The speckle patterns were $1.36 \times 1.36\; m{m^2}$ in size and consisted of $300 \times 300$ pixels. Using the same hyperparameters as in the simulated GI, noisy images were obtained by the TGI method, and the performance of BDGI was tested on K=20 data sets formed by sequentially accumulating 500 data items each.

Figure 8 shows ghost images acquired from experimentally measured data. Using only about two or three thousand data items, the shape in the BDGI image is distinct while that in the other images is obscured. Figure 9 shows the results of the quantitative analysis (PSNR & SSIM) for the experimental ghost images. The overall graphical appearance of the experimental results was similar to that of the simulation results. BDGI achieved better performance than the other methods, up to about 12.75 dB in PSNR and about 0.75 in SSIM. At 3000 measurements, BDGI reached 12.19 dB in PSNR and 0.70 in SSIM, whereas TGI reached 7.24 dB and 0.28, respectively. Meanwhile, the PSNR of CGI was significantly degraded compared with that in the simulated GI system, because the CGI method is somewhat sensitive to various noises, such as background noise, diffraction noise and electrical noise contributed by shot noise, thermal noise and dark current in photo-sensors [12].

Fig. 8. Experimental ghost images depending on the number of measurements (object: USAF target, Group #1, Element #6).

Fig. 9. Performance of the experimental GI system with several reconstruction methods with different numbers of measurements: PSNR (a) and SSIM (b) as a function of the amount of measured data. See Data File 2 for underlying values.

To exploit GI in practical applications, the computational time of the reconstruction process should be as short as possible. We estimated the reconstruction times of the existing methods and BDGI for different amounts of measured data. The aforementioned methods were implemented on an Intel Core i7-10700 CPU (@2.9 GHz) with 16 GB RAM. Figure 10 shows the computing times for experimentally reconstructing ghost images with the existing methods and BDGI. The five correlation-based methods (TGI, DGI, NGI, UWGI, UWDGI) had similar computing times, which increased linearly with the number of measurements, whereas that of CGI increased rapidly. The results confirmed that the slope of the linear trend line for CGI was larger than for the others, whose slopes were almost the same at about 0.0048. BDGI took about 2 seconds longer than the correlation-based methods to reconstruct denoised images at each number of measurements, but it still achieved an acceptable time cost with a similar slope of the linear trend. Namely, BDGI achieved a higher-quality ghost image even within a limited acquisition time.


Fig. 10. Computing time of each reconstruction method with a different number of measurements and a table including coefficients (slope and offset) of the linear trend line for each method. See Data File 2 for underlying values.

3.2 Sensitivity analysis of a mask threshold

In the previous BDGI results, the threshold used to create the binary mask was set to 0.95. The threshold is a user-specified parameter, called a hyperparameter, so its value needs to be chosen to filter out as many noisy elements as possible while admitting meaningful components into the mask. Considering that an initial ghost image reconstructed from a small amount of data contains many noisy elements, the value of 0.95 was chosen as a conservative setting. However, the influence of the threshold should be investigated closely because it is a key parameter of the denoising mask that strongly affects the performance of BDGI. Intuitively, the larger the threshold, the fewer pixels in the mask take the white value, 1, whereas the smaller the threshold, the more pixels do. If the threshold is excessively large, the resulting mask filters not only the noise in the background pixels but also normal pixels, which produces distortion at the edge of the object.

Figure 11 shows the ghost images reconstructed by TGI and BDGI with various threshold values at different numbers of measurements. BDGI with the smallest threshold (0.1) was vulnerable to noise caused by the small amount of sampling data, but it reconstructed an image with little distortion. In contrast, BDGI with a relatively large threshold (0.6–0.95) filtered the noise in the background but created distortion near the edge of the object. These results correspond to the intuitive analysis described above.


Fig. 11. Experimental ghost images depending on the number of measurements: TGI and BDGI with different threshold values.

To quantitatively validate the effect of the threshold, we estimated the performance of BDGI with various threshold values. The threshold was varied from 0.1 to 0.9 at 0.1 intervals, with 0.95 as a reference. Figures 12(a) and (c) present the PSNR and SSIM of the ghost images reconstructed by BDGI with the different threshold values as a function of the number of measurements. At 3000 measurements, BDGI with a 0.8 threshold achieved the maximum PSNR and SSIM of 12.81 dB and 0.77, respectively. To compare the performance of BDGI across threshold values more clearly, Figs. 12(b) and (d) respectively plot the PSNR and SSIM as a function of the threshold value at a given number of measurements. According to Fig. 12(b), the smaller the amount of measured data, the higher the PSNR of BDGI with a high threshold value. Conversely, as the number of measurements increased, BDGI with a small threshold value showed better performance. The SSIM showed a similar tendency, as shown in Fig. 12(d). To obtain high-quality ghost images with fewer measurements, the threshold value should be set between 0.6 and 0.8.


Fig. 12. Performance of the experimental BDGI with a different mask threshold: PSNR (a) and SSIM (c) as a function of the amount of measured data; PSNR (b) and SSIM (d) as a function of the threshold value at a certain measurement. See Data File 2 for underlying values.

3.3 Effectiveness of Gaussian filtering process

After forming a mask through the MRF-based Bayesian denoising process, we applied a Gaussian filter to the mask to close the artificial gaps between adjacent pixels. Here, we studied the effectiveness of this Gaussian filtering process and discuss its necessity. To validate the influence of the Gaussian filtering, we conducted the comparison analysis shown in Fig. 13. Under the same threshold values (0.2, 0.5, 0.8 and 0.95), the performance (PSNR and SSIM) of the experimental BDGI w/ or w/o the Gaussian-filtered mask was estimated as shown in Figs. 13(a) and (c). BDGI with the Gaussian kernel achieved a higher PSNR in all cases, and the SSIM showed similar trends (except that BDGI with a 0.95 threshold fluctuated owing to errors from its excessive value). Figures 13(b) and (d) show the differences in PSNR and SSIM, respectively, between BDGI w/ and w/o the Gaussian-filtered mask. As the threshold value increased, the differences in PSNR and SSIM decreased; likewise, the more measured data acquired, the smaller the difference. A large threshold value and a large amount of measured data have in common that both help reconstruct a better-quality image in a noisy environment. This implies that the Gaussian filter can reduce the noise that remains even after the MRF-based Bayesian denoising process when insufficient measured data are acquired and the threshold value is relatively small. Hence, BDGI with the Gaussian filtering process is more robust against noise and is effective in reconstructing high-quality ghost images in the noisy situations caused by a small number of measurements.
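The filtering step can be sketched as a plain NumPy convolution with a normalized Gaussian kernel (in practice a library routine such as `scipy.ndimage.gaussian_filter` serves the same purpose); the `gaussian_blur` helper, kernel size and toy mask are illustrative assumptions:

```python
import numpy as np

def gaussian_blur(mask, sigma=1.0, radius=2):
    """Blur a binary mask with a normalized 2-D Gaussian kernel, softening
    the hard 0/1 transitions between adjacent pixels (edge padding)."""
    ax = np.arange(-radius, radius + 1, dtype=float)
    k1d = np.exp(-ax ** 2 / (2.0 * sigma ** 2))
    kernel = np.outer(k1d, k1d)
    kernel /= kernel.sum()                      # weights sum to 1
    padded = np.pad(mask.astype(float), radius, mode="edge")
    h, w = mask.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 2 * radius + 1,
                                      j:j + 2 * radius + 1] * kernel)
    return out

mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1.0                            # toy binary mask
soft = gaussian_blur(mask, sigma=1.0)
print(soft.round(2))
```

The blurred mask replaces the abrupt 0/1 boundary with a smooth transition, which is what mitigates the residual noise when the threshold is small and the data are scarce.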


Fig. 13. Performance of the experimental BDGI w/ or w/o a Gaussian filtering process: PSNR (a) and SSIM (c) as a function of the amount of measured data with different mask thresholds; difference in PSNR (b) and SSIM (d) between w/ and w/o a Gaussian filtering process as a function of the threshold value at a certain measurement with different mask threshold values. See Data File 2 for underlying values.

3.4 Modified BDGI for a gray-scale target

The proposed method uses a denoising mask obtained by appropriately thresholding the posterior computed through the MRF-based Bayesian framework. The previous results proved that the mask can significantly improve the image quality of the reconstructed ghost images for binary targets. In many fields, however, more complex objects, e.g., gray-scale (generally 8-bit) targets, must be considered. Here, we demonstrate a modified BDGI method that uses the posterior itself as a gray-scale mask in the denoising process for a gray-scale ghost image. The posterior described in Eq. (4) indicates how close a pixel is to white as a probability in the range of zero to one. Instead of binarizing the posterior in a pre-processing step, we applied the posterior itself to the denoising module, taking advantage of this intrinsic meaning. Furthermore, to achieve the denoising effect, the posterior was accumulated at each iterative step, driving the intensity of each pixel quickly toward its latent value. To evaluate the performance of the modified BDGI, in which the accumulated posterior at each step was normalized and used as the mask, we reconstructed the well-known 'Cameraman' image at a 100×100 pixel size, using the ghost images reconstructed by DGI as input data. We considered two cases, a reconstructed ghost image w/o and w/ noise, as shown in Figs. 14 and 15, respectively. For the noisy case, white Gaussian noise with zero mean and a standard deviation equal to that of the bucket signal was added to the bucket intensity of each measurement. In contrast to the previous results, in which CGI achieved better spatial resolution for binary targets than the other methods, CGI showed the worst spatial resolution for the gray-scale target. The proposed method showed a remarkable denoising enhancement in both cases. In particular, for the noisy images in Fig. 15, the ghost image produced by BDGI shows a clearer outline distinguishing the man from the ground and the sky than the others. Figure 16 shows the quantitative analysis of the image quality of the ghost images obtained by the different methods. Figures 16(a) and (b) show the PSNR and SSIM for the target without noise. At 20000 measurements without noise, the PSNR and SSIM of BDGI were 17.75 dB and 0.87, while those of UWDGI, which had the highest values among the other methods, were 15.73 dB and 0.76. Similarly, in the noisy case, BDGI produced the best PSNR of 16.10 dB and SSIM of 0.71, as shown in Figs. 16(c) and (d).
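The accumulate-and-normalize masking described above can be sketched as follows; the random stand-ins for the posterior of Eq. (4) and for the DGI output are illustrative only, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(1)

# Accumulate the posterior over iterative steps instead of binarizing it.
accumulated = np.zeros((4, 4))
for step in range(10):
    posterior = rng.uniform(0.0, 1.0, size=(4, 4))  # stand-in for Eq. (4)
    accumulated += posterior

gray_mask = accumulated / accumulated.max()         # normalize to [0, 1]

ghost_image = rng.uniform(0.0, 255.0, size=(4, 4))  # stand-in DGI output
denoised = ghost_image * gray_mask                  # soft per-pixel weighting
print(gray_mask.round(2))
```

Because the mask remains continuous in [0, 1] rather than binary, intermediate gray levels of the target are preserved instead of being forced to black or white.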


Fig. 14. Simulated gray-scale ghost images depending on the number of measurements (object: Cameraman).


Fig. 15. Simulated gray-scale ghost images with noise depending on the number of measurements (object: Cameraman).


Fig. 16. Performance of the reconstruction methods with different numbers of measurements: PSNR (a) and SSIM (b) for gray-scale images without noise and PSNR (c) and SSIM (d) for gray-scale images with noise in Fig. 15. See Data File 3 for underlying values.

4. Discussion and conclusions

In this study, we explored a Bayesian denoising method for GI to improve the quality of the retrieved image. The Bayesian approach produced results distinctly different from the previously reported methods for enhancing the SNR of ghost images. Specifically, the proposed method, BDGI, finds a latent image in a noisy image, and the corresponding latent image is used as prior information at the next reconstruction step. Owing to this iterative Bayesian process, BDGI obtained the highest PSNR and SSIM for both binary and gray-scale targets with acceptable computational time, even though the hyperparameter values were not optimized. Moreover, BDGI was the most robust in a weak-correlation system with white Gaussian noise.

There have been several studies [30–32] using an iterative approach to improve the SNR of a ghost image. In [30], the illumination pattern was optimized to match the latent object in real time using a genetic algorithm. This approach resembles Bayesian inference in its feedback processing, where the algorithm interprets information from the measurement data and lets it influence the next illumination pattern. However, it is limited in that it is only effective in a computational GI system built on computing devices such as SLMs and DMDs. In contrast, the proposed method, based on Bayes' theorem itself, can be applied universally to any GI system, allowing it to easily be combined with other earlier methods.

Although our approach achieved enhancements in PSNR and SSIM for both binary and gray-scale targets, we acknowledge the limitation that the proposed method appears to degrade the spatial resolution. This limitation could be resolved by removing the noise due to fluctuations of the speckle distribution, i.e., the covariance between the intensities of neighboring pixels ${\sigma ^2}({x,x^{\prime}} )$, as described in previous studies [31,32]. Future work will compensate for such noise by adding a framework such as the Hessian matrix [33] at each iterative step, so that a more versatile noise-reduction approach can be achieved and the spatial resolution of the BDGI process can be improved.

Funding

Institute for Information and Communications Technology Promotion; Ministry of Science and ICT, South Korea (2019-0-00831).

Acknowledgments

This research was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data for the underlying values in all plots are available in Data Files 1–3.

References

1. T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko, “Optical imaging by means of two-photon quantum entanglement,” Phys. Rev. A 52(5), R3429–R3432 (1995). [CrossRef]  

2. R. S. Bennink, S. J. Bentley, and R. W. Boyd, ““Two-Photon” Coincidence Imaging with a Classical Source,” Phys. Rev. Lett. 89(11), 113601 (2002). [CrossRef]  

3. M. J. Padgett and R. W. Boyd, “An introduction to ghost imaging: Quantum and classical,” Phil. Trans. R. Soc. A. 375(2099), 20160233 (2017). [CrossRef]  

4. J. H. Shapiro, “Computational ghost imaging,” Phys. Rev. A 78(6), 061802 (2008). [CrossRef]  

5. X. Li, H. Hou, K. Liu, J. Lou, G. Wang, and T. Cai, “Circularly Polarized Transmissive Meta-Holograms with High Fidelity,” Adv. Photonics Res. 2(9), 2100076 (2021). [CrossRef]  

6. H. Yu, R. Lu, S. Han, H. Xie, G. Du, T. Xiao, and D. Zhu, “Fourier-Transform Ghost Imaging with Hard X Rays,” Phys. Rev. Lett. 117(11), 113901 (2016). [CrossRef]  

7. D. Pelliccia, A. Rack, M. Scheel, V. Cantelli, and D. M. Paganin, “Experimental X-Ray Ghost Imaging,” Phys. Rev. Lett. 117(11), 113902 (2016). [CrossRef]  

8. A. M. Kingston, G. R. Myers, D. Pelliccia, F. Salvemini, J. J. Bevitt, U. Garbe, and D. M. Paganin, “Neutron ghost imaging,” Phys. Rev. A 101(5), 053844 (2020). [CrossRef]  

9. S. Li, F. Cropp, K. Kabra, T. J. Lane, G. Wetzstein, P. Musumeci, and D. Ratner, “Electron Ghost Imaging,” Phys. Rev. Lett. 121(11), 114801 (2018). [CrossRef]  

10. A. Trimeche, C. Lopez, D. Comparat, and Y. J. Picard, “Ion and electron ghost imaging,” Phys. Rev. Res. 2(4), 043295 (2020). [CrossRef]  

11. K. Wu, H. Zhang, Y. Chen, Q. Luo, and K. Xu, “All-Silicon Microdisplay Using Efficient Hot-Carrier Electroluminescence in Standard 0.18µm CMOS Technology,” IEEE Electron Device Lett. 42(4), 541–544 (2021). [CrossRef]  

12. D. B. Phillips, R. He, Q. Chen, G. M. Gibson, and M. J. Padgett, “Non-diffractive computational ghost imaging,” Opt. Express 24(13), 14172 (2016). [CrossRef]  

13. B. Luo, P. Yin, L. Yin, G. Wu, and H. Guo, “Orthonormalization method in ghost imaging,” Opt. Express 26(18), 23093 (2018). [CrossRef]  

14. F. Ferri, D. Magatti, L. A. Lugiato, and A. Gatti, “Differential ghost imaging,” Phys. Rev. Lett. 104(25), 253603 (2010). [CrossRef]  

15. B. Sun, S. S. Welsh, M. P. Edgar, J. H. Shapiro, and M. J. Padgett, “Normalized ghost imaging,” Opt. Express 20(15), 16892 (2012). [CrossRef]  

16. H. Li, J. Shi, and G. Zeng, “Ghost imaging with nonuniform thermal light fields,” J. Opt. Soc. Am. A 30(9), 1854 (2013). [CrossRef]  

17. O. Katz, Y. Bromberg, and Y. Silberberg, “Compressive ghost imaging,” Appl. Phys. Lett. 95(13), 131110 (2009). [CrossRef]  

18. S. Han, H. Yu, X. Shen, H. Liu, W. Gong, and Z. Liu, “A review of ghost imaging via sparsity constraints,” Appl. Sci. 8(8), 1379 (2018). [CrossRef]  

19. F. Wang, H. Wang, H. Wang, G. Li, and G. Situ, “Learning from simulation: An end-to-end deep-learning approach for computational ghost imaging,” Opt. Express 27(18), 25560 (2019). [CrossRef]  

20. F. Li, M. Zhao, Z. Tian, F. Willomitzer, and O. Cossairt, “Compressive ghost imaging through scattering media with deep learning,” Opt. Express 28(12), 17395 (2020). [CrossRef]  

21. D. Gong, Y. Zhang, Q. Yan, and H. Li, “Learning scene-aware image priors with high-order Markov random fields,” Appl. Inform. 4(1), 12 (2017). [CrossRef]  

22. U. Schmidt, Q. Gao, and S. Roth, “A generative perspective on MRFs in low-level vision,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit.1751–1758 (2010).

23. K. Tanaka, “Statistical-mechanical approach to image processing,” J. Phys. A: Math. Gen. 35(37), R81–R150 (2002). [CrossRef]  

24. A. Gelman, J. B. Carlin, H. S. Stern, D. B. Dunson, A. Vehtari, and D. B. Rubin, Bayesian Data Analysis, 3rd ed. (Chapman and Hall/CRC, 2013).

25. S. Geman and D. Geman, “Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images,” IEEE Trans. Pattern Anal. Mach. Intell. PAMI-6(6), 721–741 (1984). [CrossRef]  

26. J. Stoehr, “A review on statistical inference methods for discrete Markov random fields,” (2017).

27. N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Trans. Syst., Man, Cybern. 9(1), 62–66 (1979). [CrossRef]  

28. A. Gatti, M. Bache, D. Magatti, E. Brambilla, F. Ferri, and L. A. Lugiato, “Coherent imaging with pseudo-thermal incoherent light,” J. Mod. Opt. 53(5-6), 739–760 (2006). [CrossRef]  

29. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004). [CrossRef]  

30. B. Liu, X. Shan, J. Zhu, C. Chen, Y. Liu, F. Wang, and D. McGloin, “Self-optimizing ghost imaging with a genetic algorithm,” in 2020 Conference on Lasers and Electro-Optics Pacific Rim (CLEO-PR) (2020).

31. W. Wang, Y. P. Wang, J. Li, X. Yang, and Y. Wu, “Iterative ghost imaging,” Opt. Lett. 39(17), 5150 (2014). [CrossRef]  

32. X.-R. Yao, W.-K. Yu, X.-F. Liu, L.-Z. Li, M.-F. Li, L.-A. Wu, and G.-J. Zhai, “Iterative denoising of ghost imaging,” Opt. Express 22(20), 24268 (2014). [CrossRef]  

33. C. Chang, D. Yang, G. Wu, B. Luo, and L. Yin, “Reducing motion blur in ghost imaging via the hessian matrix,” Appl. Sci. 11(1), 323 (2020). [CrossRef]  

Supplementary Material (3)

Data File 1: Data file for underlying values in Fig. 7
Data File 2: Data file for underlying values in Figs. 9–10 and 12–13
Data File 3: Data file for underlying values in Fig. 16



