
Feature ghost imaging for color identification


Abstract

On the basis of computational ghost imaging (CGI), we present a new imaging technique, feature ghost imaging (FGI), which can convert color information into distinguishable edge features in retrieved grayscale images. With the edge features extracted by different-order operators, FGI can obtain the shape and the color information of objects simultaneously in a single-round detection using one single-pixel detector. The feature distinction of rainbow colors is presented in numerical simulations, and FGI's practical performance is verified in experiments. Furnishing a new perspective on the imaging of colored objects, our FGI extends the function and the application fields of traditional CGI while sustaining the simplicity of the experimental setup.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Originating from ghost imaging (GI) [1–7], computational GI (CGI) is a novel indirect imaging technique that has attracted wide attention since 2008 [8]. Using only a spatial light modulator and a single-pixel detector, CGI retrieves object images based on the correlation between computational patterns and the corresponding light intensities [9–12]. Compared with traditional GI, it largely simplifies the experimental setup by replacing the reference beam with computational patterns [9]. Together with various efficient reconstruction algorithms [13–15], CGI provides great opportunities to achieve imaging in noisy environments [16–19].

Owing to its clear imaging principle and simple experimental setup, rich frontier research has developed since the birth of CGI [20–29], for example, anti-noise imaging [16,30–34], compressive sensing [13,35], stereoscopic imaging [36–38], and temporal-domain imaging [39–43]. Differential ghost imaging (DGI) greatly enhanced the signal-to-noise ratio (SNR) in the recovery of realistic complex objects [16]. Three-dimensional (3D) computational imaging made the imaging of stereo objects possible using merely several single-pixel detectors and one digital light projector, by capturing the objects' 2D images from different locations [36]. Having been extended to time objects [39], computational temporal ghost imaging even allowed the reconstruction of nonreproducible time objects with a single-shot spatially multiplexed measurement [40].

For the imaging of colored objects, some techniques based on CGI have been developed. Hyperspectral ghost imaging combines CGI with hyperspectral imaging to acquire the spatial and spectral information of an object using various frequency bands [44–47]. In addition to the imaging process through CGI, hyperspectral ghost imaging requires an extra spectral analysis, including multi-round projection and detection at different wavelengths [46], THz time-domain spectroscopy [44,45], or detection with a spectrometer [47]. Recently, ghost difference imaging (GDI) was presented to capture the differential-wavelength images or the differential-position images of objects in just a single-round acquisition, with higher SNR but no extra computation [48]. Furthermore, by carrying out convolution operations in the projection of computational patterns, computationally convolutional ghost imaging (CCGI) enabled the direct acquisition of objects' features without imaging first [49]. With specific operations applied to CGI patterns, is it possible to recognize more object information than grayscale edge features alone, e.g., the color information of an object?

In this work, we present a new technique, named feature ghost imaging (FGI), that converts the color information of objects into distinguishable edge features in grayscale through derivative operations of different orders, so that both the shape and the color distribution of the target objects are obtained simultaneously in a single-round detection, merely using CGI's experimental setup. Our method acquires color information from the edge features extracted by designed patterns, and hence differs in principle and scheme from hyperspectral ghost imaging, which requires extra components of spectral analysis for frequency detection. Bringing new insight into the imaging of colored objects, FGI transforms the color-recognition problem from traditional RGB primary-color detection to the recognition of distinct features in grayscale images. Combining RGB-channel imaging with convolutional pattern design, FGI obtains a great variety of information in a single-round detection and further increases the photon utilization, while conforming to the popular trend of taking advantage of computer algorithms.

2. Principle and analysis

2.1 Computational ghost imaging

CGI mainly includes a digital light projector illuminating the objects with computational patterns and a single-pixel photodetector that detects the intensity of the reflected light [36], as shown in Fig. 1. For the $i$-th detection, the intensity $I_i$ as a result of diffuse reflection of the pattern is given by the Frobenius inner product of the computational pattern $\boldsymbol P_i$ and the object $\boldsymbol X$ to be measured

$$I_i=\left\langle\boldsymbol P_i,\boldsymbol X\right\rangle,$$
where $\left\langle\cdot,\cdot \right\rangle$ denotes the Frobenius inner product. The computational pattern $\boldsymbol P_i$ can be chosen from the Hadamard patterns [50,51], the Fourier patterns [52], the Gao-Boole patterns [49], and random patterns [8,36]. Being binary, the Hadamard patterns not only allow $100\%$ reconstruction in theory but are also favored for their great noise robustness [50]. The Fourier patterns allow $100\%$ reconstruction as well, but they are more complex to implement experimentally because they are in grayscale [52]. The binary Gao-Boole patterns enable $100\%$ reconstruction with only half the number of measurements required by the Hadamard patterns, but lack good noise robustness [53]. Random patterns can be either binary or grayscale; however, they usually require more measurements in experiments than the first three kinds of patterns to achieve comparable reconstruction results [8,36]. One can choose the type of computational patterns according to the needs of the experiments.


Fig. 1. Experimental setup for FGI (the same as CGI). The light projector illuminates the object with computer-generated patterns. The reflected light from the object is detected, after a collecting lens, by a single-pixel photodetector. The detected intensities are converted and fed into the computer to retrieve the image.


In this paper, we choose the Hadamard patterns as the original computational patterns for their outstanding noise robustness. Since the negative values of the patterns cannot be projected directly, we project both complementary parts of each Hadamard pattern and take differential measurements [50]; hereafter $\boldsymbol P_i$ represents a Hadamard pattern. With the sequential intensities, the image $\boldsymbol X_r$ is retrieved by aggregating the correlation between the patterns and the corresponding intensities

$$\boldsymbol X_r=\sum_{i}I_i\boldsymbol P_i.$$
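To make the retrieval procedure of Eqs. (1)–(2) concrete, the following minimal Python sketch simulates differential Hadamard CGI on a toy object; the object, image size, and normalization are illustrative assumptions rather than the paper's experimental parameters.

```python
import numpy as np
from scipy.linalg import hadamard

n = 8                                    # image side length (a power of 2)
H = hadamard(n * n)                      # each row reshapes into a +/-1 pattern
X = np.zeros((n, n)); X[2:6, 2:6] = 1.0  # toy object

X_r = np.zeros((n, n))
for i in range(n * n):
    P = H[i].reshape(n, n).astype(float)
    # +/-1 values cannot be projected directly: split each pattern into its
    # two complementary binary parts and take a differential measurement [50]
    P_pos, P_neg = (P + 1) / 2, (1 - P) / 2
    I = np.sum(P_pos * X) - np.sum(P_neg * X)  # I_i = <P_i, X>, Eq. (1)
    X_r += I * P                               # accumulate Eq. (2)

X_r /= n * n                 # Hadamard normalization, since H H^T = N I
assert np.allclose(X_r, X)   # 100% reconstruction in theory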

2.2 Convolution for feature extracting

Since CGI is a linear system, convolution applies to it as well. The usual information of images, such as the gradient, can be measured directly by processing the patterns in advance with the corresponding convolution operators [49], instead of processing the retrieved images. Assume that the convolution operator is $\boldsymbol c$; the convolution of $\boldsymbol P_i$ and $\boldsymbol c$ generates a new pattern

$$(\boldsymbol P_i\ast \boldsymbol c)[s,t]=\sum_p\sum_q \boldsymbol P_i[s-p,t-q]\boldsymbol c[p,q],$$
which is projected onto the object. Eq. (3) defines each pixel value of the new pattern $(\boldsymbol P_i\ast \boldsymbol c)$, where $(s, t)$ denotes the pixel coordinates of the pattern and $(p, q)$ the coordinates of the matrix elements of the convolution kernel. Using the new patterns modifies the detected intensity in Eq. (1) to be
$$I_i=\left\langle\boldsymbol P_i\ast \boldsymbol c,\boldsymbol X\right\rangle.$$

By substitution of Eq. (3) into Eq. (4), the intensity can be given by an equivalent expression

$$\begin{aligned} &\left\langle\boldsymbol P_i\ast \boldsymbol c,\boldsymbol X\right\rangle\\ &=\sum_p\sum_q\sum_s\sum_t\boldsymbol P_i[s-p,t-q]\boldsymbol c[p,q]\boldsymbol X[s,t]\\ &=\sum_{p^{'}}\sum_{q^{'}}\sum_{s^{'}}\sum_{t^{'}}\boldsymbol P_i[s^{'},t^{'}]\boldsymbol c[p^{'},q^{'}]\boldsymbol X[s^{'}-p^{'},t^{'}-q^{'}]\\ &=\left\langle\boldsymbol P_i,\boldsymbol X\ast\boldsymbol c\right\rangle. \end{aligned}$$

This suggests that we can formally treat the convolution of the original object with the operator as the object itself, so that the retrieved image is $\boldsymbol X_r=\boldsymbol X\ast \boldsymbol c$, opening the possibility of acquiring the information of interest by choosing suitable convolution operators.
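The equivalence in Eq. (5) can be checked numerically, as in the sketch below. The pattern, object, and kernel are arbitrary assumptions; a symmetric (Laplacian-like) kernel and circular boundary conditions are used so the identity holds exactly. For an asymmetric kernel such as the Prewitt operator, convolution flips the kernel, which only changes the sign or orientation of the extracted edges.

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
P = rng.integers(0, 2, (16, 16)).astype(float)  # one binary pattern
X = rng.random((16, 16))                        # an arbitrary object
c = np.array([[0., 1., 0.],
              [1., -4., 1.],
              [0., 1., 0.]])                    # symmetric Laplacian-like kernel

circ = lambda a, k: convolve2d(a, k, mode="same", boundary="wrap")
lhs = np.sum(circ(P, c) * X)   # <P * c, X>
rhs = np.sum(P * circ(X, c))   # <P, X * c>
assert np.isclose(lhs, rhs)    # Eq. (5): measuring with a convolved pattern
                               # equals imaging the convolved object
```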

Here we work with the zero-order, second-order, and first-order derivatives to detect the features of objects, implemented by the following operators

$$\boldsymbol c_1=1,$$
$$\boldsymbol c_2=\left( \frac{\partial^{2} } {\partial x^{2}}+\frac{\partial^{2} } {\partial y^{2}} \right) \frac{1}{2\pi \sigma^{2}}e^{-\frac{x^{2}+y^{2}}{2\sigma^{2}}},$$
$$\boldsymbol c_3=\frac{\partial } {\partial x},$$
where $\boldsymbol c_1$ is a constant operator, $\boldsymbol c_2$ is the Laplacian of Gaussian (LoG) operator [54–56], and $\boldsymbol c_3$ is the Prewitt operator [57–60]. In Eq. (7), $\sigma$ stands for the standard deviation of the Gaussian in the LoG operator. As shown in Fig. 2, the constant operator extracts the original grayscale values of the ring, which are non-negative. The LoG operator extracts the inner and outer edges of the ring, which can be distinguished by their grayscale values or by retaining the positive and negative values respectively [55]. The Prewitt operator extracts the gradient information along the horizontal direction of the ring, i.e., the left and right edges [58]. The matrix expression of the convolution kernels for these three operators is given in Fig. 3.
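For readers who wish to reproduce these operators, one possible discretization is sketched below; the kernel sizes and the value of $\sigma$ are illustrative choices, not necessarily those used in the paper.

```python
import numpy as np

c1 = np.array([[1.0]])          # constant (zero-order) operator, Eq. (6)

def log_kernel(size=9, sigma=1.0):
    """Discrete Laplacian of Gaussian (LoG), Eq. (7)."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    r2 = x**2 + y**2
    k = (r2 - 2 * sigma**2) / (2 * np.pi * sigma**6) * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()         # zero-sum so flat regions give no response

c2 = log_kernel()

c3 = np.array([[-1., 0., 1.],
               [-1., 0., 1.],   # horizontal Prewitt kernel, a discrete
               [-1., 0., 1.]])  # d/dx as in Eq. (8)
```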


Fig. 2. Comparison of the three operators for feature extracting in numerical simulation ($64\times 64$ pixels). The constant operator extracts the original grayscale values of the ring (a1), which are non-negative. The LoG operator extracts the inner and outer edges (b1). The Prewitt operator extracts the left and right edges (c1). The details of the grayscale values and edges are shown by the absolute ((a2), (b2) and (c2)), positive ((a3), (b3) and (c3)) and negative ((a4), (b4) and (c4)) values of FGI images. The constant operator, the LoG operator, and the Prewitt operator will be applied to the original Hadamard patterns to form the R, G, and B components of the projected colored patterns, respectively.



Fig. 3. Construction of the projected colored pattern $\boldsymbol P_{ci}$ through linear superposition. $\boldsymbol P_i$ represents an original Hadamard pattern, $\boldsymbol c_1$, $\boldsymbol c_2$ and $\boldsymbol c_3$ are the constant operator, the LoG operator, and the Prewitt operator, respectively. The convolution of $\boldsymbol P_i$ and the three operators $\boldsymbol c_1$, $\boldsymbol c_2$ and $\boldsymbol c_3$ gives the R ($\boldsymbol P_{Ri}$), G ($\boldsymbol P_{Gi}$), and B ($\boldsymbol P_{Bi}$) components of the pattern $\boldsymbol P_{ci}$, respectively. The patterns are of $64\times 64$ pixels.


2.3 Feature extracting of colored objects

The convolution traditionally computed on a computer can instead be executed by the linear CGI system, and for grayscale objects the two approaches are interchangeable. Here, however, we propose a technique for extracting the features of colored objects that cannot be derived by post-computation on the retrieved grayscale images. Briefly put, we extract different features of objects according to the three primary colors of the RGB color model, based on operators of different orders. We combine three sets of colored computational patterns, as shown in Fig. 3, to construct the projected pattern $\boldsymbol P_{ci}$ through linear superposition

$$\begin{aligned} &\boldsymbol P_{ci}=\boldsymbol P_{Ri}+\boldsymbol P_{Gi}+\boldsymbol P_{Bi},\\ &\boldsymbol P_{Ri}=\boldsymbol P_i*\boldsymbol c_1,\\ &\boldsymbol P_{Gi}=\boldsymbol P_i*\boldsymbol c_2,\\ &\boldsymbol P_{Bi}=\boldsymbol P_i*\boldsymbol c_3, \end{aligned}$$
where the subscripts R (red), G (green), and B (blue) denote the primary colors of the RGB color model. The "+" in Eq. (9) indicates that the patterns of the RGB channels together form one colored projection pattern, rather than an algebraic sum. This representation is analogous to GDI, which exploits the classic three channels in the visible waveband simultaneously [48].

The different color components of the object can likewise be regarded as a linear superposition, so we have

$$\boldsymbol X=\boldsymbol X_{R}+\boldsymbol X_{G}+\boldsymbol X_{B}.$$
For objects without overlapped primary colors, consequently, the detected intensity is
$$\begin{aligned} I_{i}&=\left\langle\boldsymbol P_{ci},\boldsymbol X\right\rangle\\ &=\left\langle\boldsymbol P_{Ri}+\boldsymbol P_{Gi}+\boldsymbol P_{Bi},\boldsymbol X_R+\boldsymbol X_G+\boldsymbol X_B\right\rangle\\ &=\left\langle\boldsymbol P_{Ri},\boldsymbol X_R\right\rangle{+}\left\langle\boldsymbol P_{Gi},\boldsymbol X_G\right\rangle{+}\left\langle\boldsymbol P_{Bi},\boldsymbol X_B\right\rangle. \end{aligned}$$
It is natural that the cross terms between the patterns and the color components of distinct colors vanish, e.g., $\left\langle\boldsymbol P_{Ri},\boldsymbol X_G\right\rangle=0$, ensuring that the intensity signal is simply the sum of the intensities of the individual color components, and hence the retrieval of images with the information of interest. Using the original patterns, we can retrieve the image
$$\boldsymbol X_r=\boldsymbol X_R*\boldsymbol c_1+\boldsymbol X_G*\boldsymbol c_2+\boldsymbol X_B*\boldsymbol c_3.$$
Although the Prewitt operator and the LoG operator are both used for edge extraction in computer vision, their effects are essentially different, promising the potential to distinguish distinct colors by their edge features in the retrieved FGI results. More importantly, this kind of acquisition cannot be replaced by post-computation on the retrieved grayscale images alone.
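The following end-to-end sketch, a toy model rather than the experimental procedure, illustrates Eqs. (9)–(12) for an object with non-overlapping primary colors. The three single-color squares, the small Laplacian standing in for the LoG, and the circular boundary condition are assumptions made for the demonstration.

```python
import numpy as np
from scipy.linalg import hadamard
from scipy.signal import convolve2d

def conv(a, k):
    # circular convolution so the Hadamard retrieval identity holds exactly
    return convolve2d(a, k, mode="same", boundary="wrap")

# operators of Eqs. (6)-(8): constant, a small discrete Laplacian standing
# in for the LoG, and the horizontal Prewitt kernel
c1 = np.array([[1.0]])
c2 = np.array([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]])
c3 = np.array([[-1., 0., 1.]] * 3)

n = 16
H = hadamard(n * n)

# toy object: three separated single-color squares (no overlapped colors)
X_R = np.zeros((n, n)); X_R[2:6, 2:6] = 1.0
X_G = np.zeros((n, n)); X_G[2:6, 10:14] = 1.0
X_B = np.zeros((n, n)); X_B[10:14, 2:6] = 1.0

X_r = np.zeros((n, n))
for i in range(n * n):
    P = H[i].reshape(n, n).astype(float)
    # one detected value sums the three color channels, Eq. (11)
    I = (np.sum(conv(P, c1) * X_R)    # R channel: constant operator
         + np.sum(conv(P, c2) * X_G)  # G channel: LoG-like operator
         + np.sum(conv(P, c3) * X_B)) # B channel: Prewitt operator
    X_r += I * P
X_r /= n * n

# Eq. (12), up to a sign flip of the antisymmetric Prewitt term because
# convolution flips the kernel (this only swaps left/right edge polarity)
expected = conv(X_R, c1) + conv(X_G, c2) - conv(X_B, c3)
assert np.allclose(X_r, expected)
```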

For general objects with overlapped primary colors, such as the object with $\boldsymbol X_R=\boldsymbol X_G=\boldsymbol X_B=\boldsymbol X$, similarly, the measured intensity is given by

$$I_{i}=\left\langle\boldsymbol P_{Ri}+\boldsymbol P_{Gi}+\boldsymbol P_{Bi},\boldsymbol X\right\rangle.$$
The retrieved image is simplified to
$$\boldsymbol X_{r}=\boldsymbol X\ast\left( \boldsymbol c_1+\boldsymbol c_2+\boldsymbol c_3 \right).$$
Even though the features of different primary colors then also overlap, as we will demonstrate below, the grayscale values and the edges can still distinguish the color components of the objects.

Note that the correspondence between the operators and the resulting RGB components of the projected colored pattern is free to change. For example, by exchanging the positions of $\boldsymbol c_1$ and $\boldsymbol c_2$ in Eq. (9), we would obtain the main body without any edge features for the G component and the shape with the LoG inner and outer edge features for the R component. Other distinguishable convolution operators are also applicable, depending on demand. In addition, some new efficient edge-detection methods, such as the joint-iteration edge-detection method [61], also have the potential to extract the edges and preserve the edges' features for color identification.

3. Simulation and experiment

3.1 Numerical simulations

Before we introduce FGI's performance on real objects, let us first look at the unique edge features presented by different colors using FGI. The test image we choose is a flower with seven petals of different colors, which are red, orange, yellow, green, cyan, blue, and purple, respectively, in the counterclockwise direction, and the pistil is white, as shown in Fig. 4(a). According to Eqs. (6)–(11), one may consider FGI as the linear superposition of the results of applying the constant operator, the LoG operator, and the Prewitt operator to the R, G, and B components (Figs. 4(a1)–4(a3)) of the test image, respectively.


Fig. 4. Numerical simulation results ($64\times 64$ pixels) of FGI. (a) The colored test image. (a1)–(a3) The R, G, and B components of the colored test image. (b) The FGI result of the colored test image. (c) The absolute value of the FGI result. (d) The FGI graph of the colored test image. The blue curve refers to the points on the blue dotted circle in Fig. 4(b). The starting point is marked by the blue arrow and the Y-axis values of the curve refer to the grayscale values of the points on the dotted circle. The characteristic waveforms of different colors are boxed out by the dotted rectangles of corresponding colors and their theoretical composition of the three operators are given under the dotted rectangles. (e) The absolute FGI graph of the colored test image. The blue curve refers to the points on the blue dotted circle in Fig. 4(c). The starting point is marked by the blue arrow and the Y-axis values of the curve refer to the absolute values of the points on the dotted circle. The characteristic waveforms of different colors are boxed out by the dotted rectangles of corresponding colors and their theoretical compositions of the three operators are given on the top of the dotted rectangles.


For the red petal, we only get its original shape without additional edge features, because the constant operator just extracts the original grayscale values of the objects, which are in principle always non-negative for non-fluorescent objects. That is also why the red, orange, yellow, and purple petals are brighter than the others in Figs. 4(b) and 4(c), which is a key to distinguishing yellow from green. For the petals with a G component, such as orange, yellow, and green itself, we have both their shapes and LoG edge features. Therefore, the outer and inner double-edges of these petals appear in Figs. 4(b) and 4(c), and the higher the ratio of G to R, the more obvious their edges are. In particular, for the petal in pure G, we have its dark shape, and both the inner and outer double-edges are bright in Fig. 4(c). For the petals with a B component, the shape and the Prewitt edge features come out together. Unlike the LoG edges, the color of the left edge always differs from the color of the right edge for Prewitt edge features, as shown in Fig. 4(b). For the petal in pure B, its left and right edges appear bright together in Fig. 4(c). For the purple petal with equal amounts of R and B components, its main body merges with the left edge, and a dark curve is inset inside the right edge in Fig. 4(c). For the cyan petal, the LoG edges overlap with the Prewitt edges, producing mixed edge features. Because the LoG operator gives fine inner and outer edges with different colors, while the Prewitt operator gives rough left and right edges with different colors, the mixed edges manifest as a fine black inner edge added to the rough white left edge, and a fine white outer edge added to the rough black right edge. If the absolute values of the LoG edges equal those of the Prewitt edges, the original rough left and right edges given by the Prewitt operator become new left and right edges of less roughness and higher values, as shown in Figs. 4(b) and 4(c). Since the Prewitt operator used here extracts only the gradient along the horizontal direction, for the special situation of a pure blue object that has no clear left or right edges but only top and bottom edges, a Prewitt operator extracting the vertical gradient might be used instead. Last but not least, the white pistil in the center has positive grayscale values resulting from the constant operator, with the LoG edge on its right half and the Prewitt edge on its left half, due to the color distribution shown in Figs. 4(a1)–4(a3). To summarize, the brightness of the regions in the retrieved image suggests the existence of the R component, the fine inner and outer double-edges indicate the presence of the G component, and the rough left and right edges of different colors confirm the distribution of the B component. Put another way, all eight colors in Fig. 4(a) have distinct and distinguishable edge features under FGI, and the shape and the color distribution of the target objects can be inferred from the FGI results.

To analyze the characteristic edge features of different colors numerically and quantitatively, we have drawn the FGI graph and the absolute FGI graph in Fig. 4(d) and Fig. 4(e). The relative ratios of the $\boldsymbol c_1$, $\boldsymbol c_2$, and $\boldsymbol c_3$ operators composing each waveform are given quantitatively. These ratios are mainly determined by the product of the ratio of R, G, and B components in the objects, the intensity ratio of R, G, and B light in the projection, and the responsivity ratio of the detector to R, G, and B light. The latter two ratios can be summarized as the overall response ratio of the imaging system to R, G, and B light. According to the response ratio of the real imaging system, and to clearly show the edge features brought by the operators, we here multiply the three operators $\boldsymbol c_1$, $\boldsymbol c_2$, and $\boldsymbol c_3$ by a ratio of 1:10:5 in the FGI simulations. Therefore, the ratios written in Figs. 4(d) and 4(e) equal the ratio of R, G, and B components in the objects multiplied by the ratio 1:10:5. Take the orange petal as an example: its color composition is R:G:B = 1:0.65:0, the response ratio of the imaging system is R:G:B = 1:10:5, and the final ratio composing the orange waveform is therefore R:G:B = 1:6.5:0. For the waveform of red, what dominates is the original distribution of grayscale values. For the identification of orange, yellow, and green, as the component of $\boldsymbol c_2$ increases, the slopes on both sides of the waveforms, given by the double-edge features, become steeper. Also, the middle parts of the waveforms containing $\boldsymbol c_1$ give higher values than the others. For colors with the $\boldsymbol c_3$ operator, the peaks on the two sides of the waveforms have different signs, and the half width of the $\boldsymbol c_3$ operator is wider than that of the $\boldsymbol c_2$ operator.
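The ratio arithmetic described above can be expressed in a few lines; the object compositions below, other than the orange example from the text, are illustrative assumptions.

```python
# waveform composition = elementwise product of the object's RGB content
# and the system's overall RGB response (here the simulation's 1:10:5)
object_rgb = {"red": (1, 0, 0), "orange": (1, 0.65, 0), "green": (0, 1, 0)}
response_rgb = (1, 10, 5)  # projector intensity x detector responsivity

for color, rgb in object_rgb.items():
    c1_c2_c3 = tuple(x * r for x, r in zip(rgb, response_rgb))
    print(color, "-> c1:c2:c3 =", c1_c2_c3)
# orange -> c1:c2:c3 = (1, 6.5, 0), matching the example in the text
```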

Here, we focus on retrieving FGI grayscale images containing the shape and color information of the objects. With a post-processing algorithm that accurately analyzes the relative content of the RGB features in the FGI images, a further coloring process of the FGI images might be implemented.

For the FGI numerical simulations of real objects, we take as target objects a real-colored image that includes a red pepper (Fig. 5(a1)) and a yellow pepper (Fig. 5(a2)), each containing mostly two primary colors, a green pepper (Fig. 5(a3)) with almost one primary color, and a white garlic (Fig. 5(a4)) containing similar amounts of all three primary colors. Since the linear superposition changes the values of the edges, some negative edges may become non-negative, but the relative values remain almost unchanged. Therefore, in general situations, it is sufficient to distinguish the colors mainly from the FGI image and the absolute image; the positive image and the negative image might be used for auxiliary analysis if necessary.


Fig. 5. Comparison of CGI and FGI numerical simulation results ($128\times 128$ pixels). (a1)–(a5) The colored test image of vegetables. (b1)–(b5) CGI results of the test image. (c1)–(c5) The FGI results of the colored test image. (d1)–(d5) The absolute values of the FGI results.


To better demonstrate the capabilities of FGI, we show a comparison between FGI and CGI in Fig. 5. One cannot tell the colors of objects from the CGI images alone (Figs. 5(b1)–5(b5)); in the FGI images, however, the color information is indicated by the edge features.

In Figs. 5(c1) and 5(d1), the main body of the pepper is bright and not surrounded by LoG or Prewitt edges, so the pepper is red. The shank of the pepper is dark and surrounded by LoG edges, thus the shank is green. In Figs. 5(c2) and 5(d2), because the main body of the pepper is also bright and surrounded by LoG edges, the pepper is yellow. Interestingly, one can hardly see any LoG edges surrounding the shank, indicating that the amounts of the green component in the shank and the main body are almost identical. In Figs. 5(c3) and 5(d3), the body of the pepper is dark and coupled with fine inner and outer double-edges, so we can conclude that the pepper is green. Also, the double-edges on the left are more pronounced than those on the right, meaning that the pepper is greener on the left than on the right. For the garlic in Figs. 5(c4) and 5(d4), its double-edges with inconsistent colors on the left and right sides tell us that it has both G and B components. Together with the bright main body resulting from the R component, we can deduce that the garlic is white. Whether the objects are placed separately (Figs. 5(a1)–5(a4)) or together (Fig. 5(a5)), the edge characteristics of different colors remain distinguishable, which is consistent with Eqs. (6)–(11).

3.2 Experiments

To further verify FGI's feasibility in real-life cases and test its performance in the presence of noise, real-scene experiments are conducted. The experimental setup is shown in Fig. 1, where a commercial digital light projector (Epson, EP-970) illuminates the target objects with the designed colored computational patterns, and a single-pixel photodetector (Thorlabs, PDA100A2) collects the reflected light intensities. The detection frequency of the data-acquisition card (NI, PCIe-6251) is set to 80000 Hz. To resist noise, the colored computational patterns $\boldsymbol P_{ci}$ ($64\times 64$ pixels) are projected using differential methods, and the generation of the patterns is as introduced in Fig. 3 and Eqs. (6)–(9). Also, a completely black frame is added between every two patterns to identify each detected light intensity accurately. Color-printed copies of Fig. 6(a) and Fig. 7(a) serve as the target objects in the FGI experiments.
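As a hedged illustration of how the black separator frames might be used in post-processing, the sketch below slices a detector stream into per-pattern intensities; the threshold-based segmentation is an assumption for the demonstration, not the actual acquisition code.

```python
import numpy as np

def slice_intensities(stream, dark_level):
    """Average the samples of each bright segment between black separator frames.

    Assumes the stream starts and ends dark and bright/dark segments alternate.
    """
    bright = stream > dark_level                     # mask out the black frames
    edges = np.flatnonzero(np.diff(bright.astype(int)))
    starts, stops = edges[::2] + 1, edges[1::2] + 1  # rising / falling edges
    return np.array([stream[a:b].mean() for a, b in zip(starts, stops)])

# synthetic demo: three bright segments separated by black-frame gaps
stream = np.concatenate([np.zeros(5), np.full(5, 2.), np.zeros(5),
                         np.full(5, 3.), np.zeros(5), np.full(5, 1.), np.zeros(5)])
print(slice_intensities(stream, dark_level=0.5))     # -> [2. 3. 1.]
```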


Fig. 6. Experimental results ($64\times 64$ pixels) of FGI. (a) The colored test image. (b) The CGI result of the colored test image. (c) The FGI result of the colored test image. (d) The absolute value of the FGI result. (e) The theoretical FGI result of the colored test image. (f) The absolute value of the theoretical FGI result. (g) The FGI graph of the colored test image. The red and blue curves refer to the points on the red and blue dotted lines in Figs. 6(c) and 6(e), separately. The Y-axis values of the curves refer to the grayscale values of the points on the dotted lines. The characteristic waveforms of different colors are boxed out by the dotted rectangles of corresponding colors and their theoretical composition of the three operators are given around the dotted rectangles. (h) The absolute FGI graph of the colored test image. The yellow, red, and blue curves refer to the points on the yellow, red, and blue dotted lines in Figs. 6(b), 6(d), and 6(f), separately. The Y-axis values of the curves refer to the absolute values of the points on the dotted lines.



Fig. 7. Experimental results ($64\times 64$ pixels) of FGI. (a) The colored test image. (b) The CGI result of the colored test image. (c) The FGI result of the colored test image. (d) The absolute value of the FGI result.(e) The theoretical FGI result of the colored test image. (f) The absolute value of the theoretical FGI result. (g) The FGI graph of the colored test image. The red and blue curves refer to the points on the red and blue dotted lines in Figs. 7(c) and 7(e), separately. The Y-axis values of the curves refer to the grayscale values of the points on the dotted lines. The characteristic waveforms of different colors are boxed out by the dotted rectangles and their theoretical composition of the three operators are given around the dotted rectangles. (h) The absolute FGI graph of the colored test image. The yellow, red, and blue curves refer to the points on the yellow, red, and blue dotted lines in Figs. 7(b), 7(d), and 7(f), separately. The Y-axis values of the curves refer to the absolute values of the points on the dotted lines.


In the CGI result shown in Fig. 6(b), though the left half of the letter m is brighter than the rest because the imaging system is more sensitive to the green band, the distribution of RGB components remains indistinguishable. In Figs. 6(c) and 6(d), the shape of the recovered letters matches well with the original letters. The letter u, with a bright body and no obvious edge features, is red, as concluded in the previous theoretical analysis. The left half of m has fine inner and outer edges with different colors, indicating the presence of the G component. Interestingly, the values of its body are darker than the black background, implying that the amount of the R component in this letter is lower than that of the background, which is determined by the absorption and reflection properties of the color-printing dyes. The right half of m shows the typical Prewitt edge features, i.e., rough left and right edges with different grayscale values, so its dominant color component is B.

In the FGI graph shown in Fig. 6(g), we quantitatively analyze the relative ratios of the $\boldsymbol c_1$, $\boldsymbol c_2$, and $\boldsymbol c_3$ operators composing the waveforms of different colors. By adjusting the theoretical curve (blue curve) in the simulation to be consistent with the experimental curve (red curve), we can analyze the proportion of different operators in the waveforms of the experimental curve. We determine that the response ratio of our experimental system to R, G, and B light is 1:6.6:2.1, which is consistent with the trend of the ratio set in the FGI simulation in Fig. 4(b). This is also why the left half of the letter m is brighter than the rest in the CGI result shown in Fig. 6(b). The RGB components of the objects can be further deduced by dividing these ratios by the response ratio of the imaging system. The peak signal-to-noise ratio (PSNR) [53] of the experimental results is given as well. As shown in Fig. 6(h), unlike the characteristic waveforms given by FGI, the waveforms of the CGI experimental results give no color information. In general, the RGB primary-color distributions of colored objects overlap considerably, as discussed in the theoretical analysis; verification of FGI's capability for such cases is therefore performed in the following experiments.
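For reference, a common PSNR formulation is sketched below; whether Ref. [53] normalizes the images in exactly this way is an assumption.

```python
import numpy as np

def psnr(img, ref):
    """Peak signal-to-noise ratio in dB, with both images rescaled to [0, 1]."""
    norm = lambda a: (a - a.min()) / (a.max() - a.min())
    mse = np.mean((norm(img) - norm(ref)) ** 2)
    return 10 * np.log10(1.0 / mse)
```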

As expected, we cannot distinguish the RGB components through the CGI result shown in Fig. 7(b). The overall characteristics of Fig. 7(c) are analogous to those of Fig. 6(c), since the dominant primary colors of the corresponding letters are the same, while the edges and relative grayscale values differ in detail. For the letter u, we can deduce that its color is dominated by the R and G components, because of its bright body and LoG feature edges in Figs. 7(c) and 7(d). For the left half of the letter m (Figs. 7(c) and 7(d)), its edge features are consistent with the cyan edge features shown in Figs. 4(b) and 4(c), indicating that its amounts of G component and B component are similar. Compared with the previous experimental results, the fine inner and outer double-edges given by the LoG operator are replaced by a fine bright left outer edge and a dark right inner edge given by both the LoG operator and the Prewitt operator. The dark body indicates that its amount of R component is still less than that of the background. As for the right half of the letter m (Figs. 7(c) and 7(d)), its edge features resemble the purple edge features shown in Figs. 4(b) and 4(c), suggesting that the R component and the B component exist in similar amounts. In addition, its R component is slightly higher than that of the background, as indicated by its body being brighter than the background.

We further quantitatively analyze the ratios of the $\boldsymbol c_1$, $\boldsymbol c_2$, and $\boldsymbol c_3$ operators for different waveforms in the FGI graph given in Fig. 7(g). Since the RGB composition of the object is complex, here we give the ratios of the colors with relatively large proportions for reference. The PSNR value of the experimental results in Fig. 7(g) is smaller than that in Fig. 6(g), indicating that the presence of noise is more significant in these experiments. The noise might be caused by errors in the pattern values during projection. Under the same imaging-system error, the more complex the colors of the objects, the smaller the PSNR. Also, one cannot tell the color distribution from the waveforms of the CGI experimental results in Fig. 7(h).

4. Conclusion

In conclusion, we have presented a new technique called feature ghost imaging (FGI) that acquires the shape and the color information of colored objects together in merely a single-round detection. Taking full advantage of the computational property of CGI, FGI improves the utilization of photons by accomplishing the derivative operations in projection and obtains various kinds of information in one detection. Compared with CGI, FGI requires neither extra experimental equipment nor longer imaging time, and the feature images of target objects can be obtained simply by renewing the computational patterns. Note that FGI's results cannot be derived through traditional CGI plus post-computation, because the colored objects have already been captured as grayscale images in CGI.

Beyond proposing a novel imaging method for colored objects, our FGI scheme allows the operators used for designing the feature computational patterns to be replaced by other applicable operators, realizing much richer imaging functions for colored objects beyond the edge extraction introduced in this paper, such as scaling [62–65] and watermarking [66–68]. Further quantitative analysis of colors is expected for the development of accurate edge-feature analysis algorithms based on FGI in the future.

Funding

Science and Technology Development Fund from Macau SAR (FDCT) (0062/2020/AMJ); Multi-Year Research Grant of University of Macau (MYRG2020-00082-IAPME).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko, “Optical imaging by means of two-photon quantum entanglement,” Phys. Rev. A 52(5), R3429–R3432 (1995). [CrossRef]  

2. A. F. Abouraddy, B. E. A. Saleh, A. V. Sergienko, and M. C. Teich, “Role of entanglement in two-photon imaging,” Phys. Rev. Lett. 87(12), 123602 (2001). [CrossRef]  

3. R. S. Bennink, S. J. Bentley, and R. W. Boyd, ““two-photon” coincidence imaging with a classical source,” Phys. Rev. Lett. 89(11), 113601 (2002). [CrossRef]  

4. J. H. Shapiro and R. W. Boyd, “The physics of ghost imaging,” Quantum Inf. Process. 11(4), 949–993 (2012). [CrossRef]  

5. A. M. Kingston, G. R. Myers, D. Pelliccia, F. Salvemini, J. J. Bevitt, U. Garbe, and D. M. Paganin, “Neutron ghost imaging,” Phys. Rev. A 101(5), 053844 (2020). [CrossRef]  

6. H.-C. Liu, H. Yang, J. Xiong, and S. Zhang, “Positive and negative ghost imaging,” Phys. Rev. Appl. 12(3), 034019 (2019). [CrossRef]  

7. W.-K. Yu and J. Leng, “Probability theory of intensity correlation in ghost imaging with thermal light,” Phys. Lett. A 384(30), 126778 (2020). [CrossRef]  

8. J. H. Shapiro, “Computational ghost imaging,” Phys. Rev. A 78(6), 061802 (2008). [CrossRef]  

9. H.-C. Liu, B. Yang, Q. Guo, J. Shi, C. Guan, G. Zheng, H. Mühlenbernd, G. Li, T. Zentgraf, and S. Zhang, “Single-pixel computational ghost imaging with helicity-dependent metasurface hologram,” Sci. Adv. 3(9), e1701477 (2017). [CrossRef]  

10. P. Kilcullen, T. Ozaki, and J. Liang, “Compressed ultrahigh-speed single-pixel imaging by swept aggregate patterns,” Nat. Commun. 13(1), 7879 (2022). [CrossRef]  

11. T. Lu, Z. Qiu, Z. Zhang, and J. Zhong, “Comprehensive comparison of single-pixel imaging methods,” Opt. Lasers Eng. 134, 106301 (2020). [CrossRef]  

12. Z. Tan, H. Yu, R. Zhu, R. Lu, S. Han, C. Xue, S. Yang, and Y. Wu, “Single-exposure fourier-transform ghost imaging based on spatial correlation,” Phys. Rev. A 106(5), 053521 (2022). [CrossRef]  

13. M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25(2), 83–91 (2008). [CrossRef]  

14. L. Bian, J. Suo, Q. Dai, and F. Chen, “Experimental comparison of single-pixel imaging algorithms,” J. Opt. Soc. Am. A 35(1), 78 (2018). [CrossRef]  

15. M. R. Hestenes and E. Stiefel, “Methods of conjugate gradients for solving linear systems,” J. Res. Nat. Bur. Standards 49(6), 409 (1952). [CrossRef]  

16. F. Ferri, D. Magatti, L. A. Lugiato, and A. Gatti, “Differential ghost imaging,” Phys. Rev. Lett. 104(25), 253603 (2010). [CrossRef]  

17. D. Wu, J. Luo, G. Huang, Y. Feng, X. Feng, R. Zhang, Y. Shen, and Z. Li, “Imaging biological tissue with high-throughput single-pixel compressive holography,” Nat. Commun. 12(1), 4712 (2021). [CrossRef]  

18. H. Wu, G. Zhao, C. He, L. Cheng, and S. Luo, “Subnyquist underwater denoising ghost imaging with a coiflet-wavelet-order-based hadamard matrix,” Phys. Rev. A 106(5), 053522 (2022). [CrossRef]  

19. M. Bashkansky, S. D. Park, and J. Reintjes, “Single pixel structured imaging through fog,” Appl. Opt. 60(16), 4793 (2021). [CrossRef]  

20. M. P. Edgar, G. M. Gibson, and M. J. Padgett, “Principles and prospects for single-pixel imaging,” Nat. Photonics 13(1), 13–20 (2019). [CrossRef]  

21. J. Xiong, P. Zheng, Z. Gao, and H.-C. Liu, “Algorithm-dependent computational ghost encryption and imaging,” Phys. Rev. Appl. 18(3), 034023 (2022). [CrossRef]  

22. W.-K. Yu, N. Wei, Y.-X. Li, Y. Yang, and S.-F. Wang, “Multi-party interactive cryptographic key distribution protocol over a public network based on computational ghost imaging,” Opt. Lasers Eng. 155, 107067 (2022). [CrossRef]  

23. Y. Klein, O. Sefi, H. Schwartz, and S. Shwartz, “Chemical element mapping by x-ray computational ghost fluorescence,” Optica 9(1), 63 (2022). [CrossRef]  

24. Y. Tian, H. Ge, X.-J. Zhang, X.-Y. Xu, M.-H. Lu, Y. Jing, and Y.-F. Chen, “Far-field subwavelength acoustic computational imaging with a single detector,” Phys. Rev. Appl. 18(1), 014046 (2022). [CrossRef]  

25. Y. Li and L. Tian, “Computer-free computational imaging: optical computing for seeing through random media,” Light: Sci. Appl. 11(1), 37 (2022). [CrossRef]  

26. W. Zhao, H. Chen, Y. Yuan, H. Zheng, J. Liu, Z. Xu, and Y. Zhou, “Ultrahigh-speed color imaging with single-pixel detectors at low light level,” Phys. Rev. Appl. 12(3), 034049 (2019). [CrossRef]  

27. A. M. Kingston, W. K. Fullagar, G. R. Myers, D. Adams, D. Pelliccia, and D. M. Paganin, “Inherent dose-reduction potential of classical ghost imaging,” Phys. Rev. A 103(3), 033503 (2021). [CrossRef]  

28. C. Zhou, X. Liu, Y. Feng, X. Li, G. Wang, H. Sun, H. Huang, and L. Song, “Real-time physical compression computational ghost imaging based on array spatial light field modulation and deep learning,” Opt. Lasers Eng. 156, 107101 (2022). [CrossRef]  

29. Y. Chen, X. Li, Z. Cheng, Y. Cheng, and X. Zhai, “Multidirectional edge detection based on gradient ghost imaging,” Optik 207, 163768 (2020). [CrossRef]  

30. H. Wu, R. Wang, G. Zhao, H. Xiao, J. Liang, D. Wang, X. Tian, L. Cheng, and X. Zhang, “Deep-learning denoising computational ghost imaging,” Opt. Lasers Eng. 134, 106183 (2020). [CrossRef]  

31. X. Nie, F. Yang, X. Liu, X. Zhao, R. Nessler, T. Peng, M. S. Zubairy, and M. O. Scully, “Noise-robust computational ghost imaging with pink noise speckle patterns,” Phys. Rev. A 104(1), 013513 (2021). [CrossRef]  

32. J. Kim, J. Hwang, J. Kim, K. Ko, E. Ko, and G. Cho, “Ghost imaging with bayesian denoising method,” Opt. Express 29(24), 39323 (2021). [CrossRef]  

33. L.-X. Lin, J. Cao, D. Zhou, and Q. Hao, “Scattering medium-robust computational ghost imaging with random superimposed-speckle patterns,” Opt. Commun. 529, 129083 (2023). [CrossRef]  

34. H.-K. Hu, S. Sun, H.-Z. Lin, L. Jiang, and W.-T. Liu, “Denoising ghost imaging under a small sampling rate via deep learning for tracking and imaging moving objects,” Opt. Express 28(25), 37284 (2020). [CrossRef]  

35. X. Nie, X. Zhao, T. Peng, and M. O. Scully, “Subnyquist computational ghost imaging with orthonormal spectrum-encoded speckle patterns,” Phys. Rev. A 105(4), 043525 (2022). [CrossRef]  

36. B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D computational imaging with single-pixel detectors,” Science 340(6134), 844–847 (2013). [CrossRef]  

37. H. Zhang, J. Cao, D. Zhou, H. Cui, Y. Cheng, and Q. Hao, “Three-dimensional computational ghost imaging using a dynamic virtual projection unit generated by risley prisms,” Opt. Express 30(21), 39152 (2022). [CrossRef]  

38. L. Zhang, Z. Lin, R. He, Y. Qian, Q. Chen, and W. Zhang, “Improving the noise immunity of 3d computational ghost imaging,” Opt. Express 27(3), 2344 (2019). [CrossRef]  

39. P. Ryczkowski, M. Barbier, A. T. Friberg, J. M. Dudley, and G. Genty, “Ghost imaging in the time domain,” Nat. Photonics 10(3), 167–170 (2016). [CrossRef]  

40. F. Devaux, P.-A. Moreau, S. Denis, and E. Lantz, “Computational temporal ghost imaging,” Optica 3(7), 698 (2016). [CrossRef]  

41. Y. Tian, H. Ge, X.-J. Zhang, X.-Y. Xu, M.-H. Lu, Y. Jing, and Y.-F. Chen, “Acoustic ghost imaging in the time domain,” Phys. Rev. Appl. 13(6), 064044 (2020). [CrossRef]  

42. H. Wu, P. Ryczkowski, A. T. Friberg, J. M. Dudley, and G. Genty, “Temporal ghost imaging using wavelength conversion and two-color detection,” Optica 6(7), 902 (2019). [CrossRef]  

43. A. Hannonen, A. Shevchenko, A. T. Friberg, and T. Setälä, “Temporal phase-contrast ghost imaging,” Phys. Rev. A 102(6), 063524 (2020). [CrossRef]  

44. L. Olivieri, J. S. T. Gongora, L. Peters, V. Cecconi, A. Cutrona, J. Tunesi, R. Tucker, A. Pasquazi, and M. Peccianti, “Hyperspectral terahertz microscopy via nonlinear ghost imaging,” Optica 7(2), 186–191 (2020). [CrossRef]  

45. L. Olivieri, J. S. Totero Gongora, A. Pasquazi, and M. Peccianti, “Time-Resolved Nonlinear Ghost Imaging,” ACS Photonics 5(8), 3379–3388 (2018). [CrossRef]  

46. Y. Wang, J. Suo, J. Fan, and Q. Dai, “Hyperspectral Computational Ghost Imaging via Temporal Multiplexing,” IEEE Photonics Technol. Lett. 28(3), 288–291 (2016). [CrossRef]  

47. M. Song, Z. Yang, P. Li, Z. Zhao, Y. Liu, Y. Yu, and L.-a. Wu, “Single-pixel imaging with high spectral and spatial resolution,” Appl. Opt. 62(10), 2610–2616 (2023). [CrossRef]  

48. Z. Ye, J. Xiong, and H.-C. Liu, “Ghost difference imaging using one single-pixel detector,” Phys. Rev. Appl. 15(3), 034035 (2021). [CrossRef]  

49. Z. Ye, P. Zheng, W. Hou, D. Sheng, W. Jin, H.-C. Liu, and J. Xiong, “Computationally convolutional ghost imaging,” Opt. Lasers Eng. 159, 107191 (2022). [CrossRef]  

50. Z. Zhang, X. Wang, G. Zheng, and J. Zhong, “Hadamard single-pixel imaging versus fourier single-pixel imaging,” Opt. Express 25(16), 19619 (2017). [CrossRef]  

51. W.-K. Yu, “Super sub-nyquist single-pixel imaging by means of cake-cutting hadamard basis sort,” Sensors 19(19), 4122 (2019). [CrossRef]  

52. Z. Zhang, X. Ma, and J. Zhong, “Single-pixel imaging by means of fourier spectrum acquisition,” Nat. Commun. 6(1), 6225 (2015). [CrossRef]  

53. Z. Gao, M. Li, P. Zheng, J. Xiong, Z. Tang, and H.-C. Liu, “Single-pixel imaging with gao-boole patterns,” Opt. Express 30(20), 35923 (2022). [CrossRef]  

54. V. Torre and T. A. Poggio, “On edge detection,” IEEE Trans. Pattern Anal. Mach. Intell. PAMI-8(2), 147–163 (1986). [CrossRef]  

55. P. A. Mlsna and J. J. Rodríguez, “Chapter 19 - gradient and laplacian edge detection,” in The Essential Guide to Image Processing, A. Bovik ed., (Academic Press, Boston, 2009) pp. 495–524.

56. K. Ding, L. Xiao, and G. Weng, “Active contours driven by region-scalable fitting and optimized laplacian of gaussian energy for image segmentation,” Signal Process. 134, 224–233 (2017). [CrossRef]  

57. M. Nixon and A. Aguado, Feature Extraction and Image Processing for Computer Vision (Academic Press, Boston, 2019).

58. A. S. Ahmed, “Comparative study among sobel, prewitt and canny edge detection operators used in image processing,” J. Theor. Appl. Inf. Technol. 96(19), 6517–6525 (2018).

59. M. Juneja and P. S. Sandhu, “Performance evaluation of edge detection techniques for images in spatial domain,” Int. J. Comput. Theory Eng. 1, 614–621 (2009). [CrossRef]  

60. G. T. Shrivakshan and C. Chandrasekar, “A comparison of various edge detection techniques used in image processing,” Int. J. Comput. Sci. Issues 9(5), 269–276 (2012).

61. C. Zhou, G. Wang, H. Huang, L. Song, and K. Xue, “Edge detection based on joint iteration ghost imaging,” Opt. Express 27(19), 27295–27307 (2019). [CrossRef]  

62. D. L. Ruderman and W. Bialek, “Statistics of natural images: Scaling in the woods,” Phys. Rev. Lett. 73(6), 814–817 (1994). [CrossRef]  

63. Y.-S. Wang, C.-L. Tai, O. Sorkine, and T.-Y. Lee, “Optimized scale-and-stretch for image resizing,” in ACM SIGGRAPH Asia 2008 Papers, SIGGRAPH Asia ’08 (Association for Computing Machinery, New York, NY, USA, 2008).

64. M. Tan and Q. Le, “Efficientnet: rethinking model scaling for convolutional neural networks,” in International Conference on Machine Learning (ICML) (PMLR, 2019), p. 6105.

65. X. Zhai, A. Kolesnikov, N. Houlsby, and L. Beyer, “Scaling vision transformers,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2022) pp. 12104–12113.

66. M. Begum and M. S. Uddin, “Digital image watermarking techniques: A review” (2020).

67. W. Wan, J. Wang, Y. Zhang, J. Li, H. Yu, and J. Sun, “A comprehensive survey on robust image watermarking,” Neurocomputing 488, 226–247 (2022). [CrossRef]  

68. A. Mohanarathinam, S. Kamalraj, G. K. D. Prasanna Venkatesan, R. V. Ravi, and C. S. Manikandababu, “Digital watermarking techniques for image security: a review,” J. Ambient Intell. Human. Comput. 11(8), 3221–3229 (2020). [CrossRef]  




