Abstract

We present a compressive parallel single-pixel imaging (cPSI) method, which applies compressive sensing in the context of PSI, to achieve highly efficient capture of light transport coefficients and 3D reconstruction in the presence of strong interreflections. A characteristic-based sampling strategy is introduced that samples high-energy frequencies with high probability. The characteristic-based sampling strategy is compared with various state-of-the-art sampling strategies, including the square, circular, uniform random, and distance-based sampling strategies. Experimental results demonstrate that the characteristic-based sampling strategy exhibits the best performance, and that cPSI can efficiently obtain highly accurate 3D shape data in the presence of strong interreflections.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

An optical 3D shape measurement system generally contains camera(s) and a projector. The 3D shape can be obtained by triangulation [1] once the corresponding points between the camera and the projector are known. This correspondence can be easily obtained through an optical 3D shape measurement method, such as fringe projection profilometry (FPP) [2–4], under the assumption that light rays reflect only once when transmitting through the scene [5]. However, this assumption generally does not hold in real-world applications, and the light received by each pixel on the detector is a mixture of components from different positions on the light source. The received light can be further divided into direct, indirect (also termed global), and environment illumination according to its interaction with the scene. Direct illumination is the light that reflects directly from the light source, indirect illumination is the light reflected by other points in the scene [5], and environment illumination is the light captured by the camera pixel from other light sources unrelated to the measurement system. Figure 1(a) depicts these three components. In the context of 3D shape recovery based on FPP, only direct illumination carries 3D geometrical information about the scene. Indirect and environment illumination introduce errors into the corresponding-point matching process, so their influences should be eliminated [6,7]. The influence of environment illumination can be eliminated by projecting N-step phase-shifting patterns [8]. Indirect illumination causes overlapping patterns [Fig. 1(b)], which result in missing data and large errors in the FPP method [Fig. 1(c)]. Several methods have been introduced to overcome the difficulties of 3D shape reconstruction in the presence of indirect illumination.


Fig. 1. 3D shape reconstruction in the presence of interreflections. (a) The mixture component of radiance captured by a camera pixel. (b) The overlapped patterns caused by interreflections. (c) Interreflections cause data incompleteness and large errors for traditional FPP method.


Nayar et al. [9] used high-frequency patterns to suppress the influence of interreflections by assuming that multiply reflected light does not carry high-frequency information. Thus, the projected high-frequency patterns should be nearly constant after interreflections, while directly reflected light preserves high-frequency information when observed through cameras. Follow-up works, such as modulated phase shifting [10], micro phase shifting [11], logical codes illumination [12], and multiplexed illumination [13], were conducted for 3D reconstruction in the presence of interreflections on the basis of the high-frequency illumination mode. However, the assumption on which these methods are based does not hold when interreflection occurs at highly specular or refractive surfaces.

Structured light transport is introduced to eliminate non-epipolar indirect light by taking advantage of the crucial light transport–stereo geometry link [14]. This method can eliminate the high-frequency interreflections caused by highly specular or refractive surfaces but might fail to reconstruct a highly accurate 3D point cloud at surfaces where epipolar indirect illumination dominates. A general mathematical model has been introduced to address the bimodal multi-path issue [15], but the multimodal multi-path problem is not considered in detail.

Regional projection methods [16,17], which segment concave surfaces into regions where interreflections do not occur and project each region separately to suppress interreflections, have also been introduced. However, this kind of method requires the CAD model of the measured object for segmentation. Adaptive regional projection methods [18,19] have been proposed for cases in which the design model is unavailable; these methods use several rounds of projection, and a projector pixel is blocked once its corresponding 3D point is obtained. Zhao et al. [6] proposed an epipolar imaging method that combines structured light transport and the adaptive regional projection method for high-accuracy 3D reconstruction in the presence of interreflections. However, these methods require several rounds of projection, which increases the measurement time.

Most of these methods are based on specific assumptions about indirect illumination, which can be violated in real applications. For robust 3D shape measurement in the presence of indirect illumination, indirect illumination must be separated from direct illumination to eliminate errors. Single-pixel imaging (SI) [8,20,21] has been widely studied in the past decade, and its super-resolution property can be used to separate indirect light. In our previous studies, we introduced an SI method [22,23] to capture the full light transport coefficients between a projector–camera pair, such that direct and indirect illumination can be completely separated. In this method, each pixel on the camera is treated as a single-pixel detector, and the four-step phase-shifting sinusoidal patterns required by Fourier SI [8] are projected. The main problem of this method is the extremely long measurement time required. Parallel SI (PSI) [24] was introduced to greatly accelerate the capture efficiency while maintaining the perfect reconstruction property. This is achieved by the local region extension (LRE) method, which assumes that the visible region of each pixel is confined to a local region. These visible regions are first localized in the localization stage, where adaptive regional SI is applied [25], and then imaged by projecting periodic extension patterns. The light transport coefficients are finally reconstructed by the proposed LRE reconstruction algorithm. Jiang et al. proved the LRE reconstruction theorem, which states that the light transport coefficients can be perfectly reconstructed if the period of the periodic extension patterns covers the largest visible region [24].

In this research, we relax the condition of perfect reconstruction to achieve highly efficient capture of light transport coefficients. A compressive PSI (cPSI) method, which applies a compressive sensing (CS) method, is proposed to accelerate the data acquisition process in the context of PSI. CS methods have been widely introduced in SI methods, including compressive ghost imaging [26], compressive sampling SI [27], and sparse Fourier SI [28]. CS is particularly beneficial when indirect illumination is mainly caused by interreflections. In this case, the light transport coefficients appear as several speckles because of the concave surfaces, which makes them sparse and therefore well suited to CS methods. The characteristic-based sampling strategy is introduced, which samples high-energy frequencies with high probability. The characteristic-based sampling strategy is inspired by efficient SI (eSI), which applies importance sampling of the target spatial spectrum in Fourier space [29]. However, eSI is not used in the context of CS, and our characteristic-based sampling strategy aims to efficiently capture light transport coefficients when interreflections dominate. The probabilistic sampling function is obtained from a workpiece made of aluminum alloy. In the context of PSI, the characteristic-based sampling strategy is compared with various state-of-the-art sampling strategies, including the square, circular, uniform random, and distance-based sampling strategies. The main advantage of implementing the CS method in the context of PSI rather than in traditional SI is the high calculation efficiency. PSI reduces the number of patterns by locating nonzero regions using the LRE method, greatly reducing the number of unknowns compared with traditional SI. Only a small-scale optimization problem must be solved in the context of PSI, making the calculation of dense 3D data points tractable on current computers. A 3D shape, which is exactly the same as in standard PSI, can be obtained by triangulation after determining the corresponding points.

The rest of this paper is organized as follows. In Section 2, the basic principles are introduced. In Section 3, the experimental results are provided. In Section 4, the conclusions are presented.

2. Principles

We first introduce the principles of PSI and cPSI. Then, five sampling strategies are provided. Finally, the evaluation methods are discussed.

2.1 PSI: exactly captures light transport coefficients

To provide a self-contained overview, we review PSI, which exactly captures light transport coefficients, in condensed form. To differentiate from the cPSI introduced in the next section, the PSI in this section is also referred to as standard PSI. Reference [24] provides detailed information about PSI.

In PSI, each camera pixel is viewed as an independent unit for SI, and the goal is to obtain the light transport coefficients $h(x^{\prime},y^{\prime};x,y)$. The total number of unknowns is $M \times N \times U \times V$, which is the product of the projector and camera resolutions. M and N denote the horizontal and vertical resolutions of the projector, respectively, and U and V denote those of the camera. We obtain $U \times V$ measurements for each single camera exposure. Thus, according to linear system theory, solving an equation with $M \times N \times U \times V$ unknowns requires the same number of measurements, resulting in at least $M \times N$ projector patterns.
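As a sanity check on this counting argument, a short sketch (using the projector and camera resolutions of the experimental setup described in Section 3):

```python
# Counting unknowns and required patterns for naive (non-PSI) capture.
# Resolutions are taken from the experimental setup in Section 3.
M, N = 1920, 1080   # projector resolution
U, V = 1920, 1200   # camera resolution

unknowns = M * N * U * V                  # all light transport coefficients
measurements_per_exposure = U * V         # one value per camera pixel per frame
patterns_needed = unknowns // measurements_per_exposure

assert patterns_needed == M * N           # at least M x N projector patterns
print(patterns_needed)                    # 2073600
```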

PSI greatly increases the capture efficiency through the LRE method while exactly reconstructing the light transport coefficients, based on the assumption that the visible region of each camera pixel is confined to a local region such that the Nyquist–Shannon sampling theorem is satisfied even with few projections. Given its close relationship to the Nyquist–Shannon sampling theorem, PSI is originally implemented using four-step sinusoidal patterns, and the LRE method has three stages: the localization stage, in which the visible region information is obtained; the capture stage, in which periodic extension patterns are projected; and the reconstruction stage, in which the light transport coefficients are reconstructed. The last two stages are shown in Fig. 2.

  • (1) Localization stage. Adaptive regional SI [25] based on the Fourier slice theorem is utilized to obtain the visible region location of each pixel. Vertical and horizontal patterns are projected, and the 1D inverse discrete Fourier transform is applied such that the projections of the light transport coefficients along the horizontal and vertical axes can be obtained. The visible region of each pixel in projector coordinates can be determined by finding the areas where the values of the two projection functions are greater than a predefined noise threshold. For exact reconstruction, a rectangular area ${\Omega _s}(x^{\prime},y^{\prime};x,y)$ of size ${M_s} \times {N_s}$, centered on the center of the visible region, can be located. ${M_s}$ and ${N_s}$ are the respective horizontal and vertical lengths that cover the largest visible region among the camera pixels. Reference [24] provides detailed information on the localization stage.
  • (2) Capture stage. For the exact reconstruction of light transport coefficients, periodic extension patterns with the following form
    $$\begin{aligned} {{\tilde{P}}_\phi }({x^{\prime},y^{\prime};{k_s},{l_s}} ) &= \sum\limits_{{r_1} = 0}^{\left\lceil {\frac{M}{{{M_s}}}} \right\rceil } {\sum\limits_{{r_2} = 0}^{\left\lceil {\frac{N}{{{N_s}}}} \right\rceil } {P_\phi ^B({x^{\prime} - {r_1}{M_s},y^{\prime} - {r_2}{N_s};{k_s},{l_s}} )} } \\ &= a + b\cos [2\pi \cdot (\frac{{{k_s} \cdot x^{\prime}}}{{{M_s}}} + \frac{{{l_s} \cdot y^{\prime}}}{{{N_s}}}) + \phi ], \end{aligned}$$
    are projected, where $x^{\prime}$ and $y^{\prime}$ are the pixels on the projector and take the following values: $x^{\prime} = 0,1\ldots M - 1$ and $y^{\prime} = 0,1\ldots N - 1$. ${k_s}$ and ${l_s}$ denote the discrete frequency samples and take the following values: ${k_s} = 0,1\ldots {M_s} - 1$ and ${l_s} = 0,1\ldots {N_s} - 1$. $\phi$ is the initial phase, which can be $0,\pi /2,\pi ,$ or $3\pi /2$. $a$ and $b$ are the average intensity and contrast of the image, respectively. ${r_1}$ and ${r_2}$ are integers. $\lceil\bullet \rceil$ denotes the ceiling function. Thus, the number of measurements of PSI is proportional to ${M_s} \times {N_s}$. $P_\phi ^B({x^{\prime},y^{\prime};{k_s},{l_s}} )$ are the basic patterns, which are expressed as
    $$P_\phi ^B(x^{\prime},y^{\prime}\textrm{;}{k_s},{l_s}) = a + b\cos [2\pi (\frac{{{k_s} \cdot x^{\prime}}}{{{M_s}}} + \frac{{{l_s} \cdot y^{\prime}}}{{{N_s}}}) + \phi ].$$

    Suppose ${L_\phi }({x,y;{k_s},{l_s}} )$ is the captured intensity of camera pixel $\textrm{(}x,y\textrm{)}$ when periodic extension pattern ${\tilde{P}_\phi }({x^{\prime},y^{\prime};{k_s},{l_s}} )$ is projected, then ${B_1}({x,y;{k_s},{l_s}} )$ and ${B_2}({x,y;{k_s},{l_s}} )$ are

    $$\begin{aligned} {B_1}({x,y;{k_s},{l_s}} ) &= {L_0}(x,y;{k_s},{l_s}) - {L_\pi }(x,y;{k_s},{l_s})\\ &= [{{\tilde{P}}_0}(x^{\prime},y^{\prime};{k_s},{l_s}) - {{\tilde{P}}_\pi }(x^{\prime},y^{\prime};{k_s},{l_s})]\cdot M(x^{\prime},y^{\prime};x,y)\cdot h(x^{\prime},y^{\prime};x,y), \end{aligned}$$
    and
    $$\begin{aligned} {B_2}({x,y;{k_s},{l_s}}) &= {L_{\frac{\pi }{2}}}(x,y;{k_s},{l_s}) - {L_{\frac{{3\pi }}{2}}}(x,y;{k_s},{l_s})\\ &= [{{\tilde{P}}_{\frac{\pi }{2}}}(x^{\prime},y^{\prime};{k_s},{l_s}) - {{\tilde{P}}_{\frac{{3\pi }}{2}}}(x^{\prime},y^{\prime};{k_s},{l_s})]\cdot M(x^{\prime},y^{\prime};x,y)\cdot h(x^{\prime},y^{\prime};x,y), \end{aligned}$$
    which are the real and imaginary parts of the Fourier transform of the visible image patch. $M(x^{\prime},y^{\prime};x,y)$ is a mask that has a value of one when $(x^{\prime},y^{\prime}) \in {\Omega _s}(x^{\prime},y^{\prime};x,y)$ and zero otherwise.

  • (3) Reconstruction stage. The visible image patch $h_r^B({x^{\prime},y^{\prime};x,y} )$ is obtained by applying inverse fast Fourier transform (IFFT) to $H_r^B({{k_s},{l_s};x,y} )$, which is expressed as
    $$H_r^B({{k_s},{l_s};x,y} )= {B_1}({x,y;{k_s},{l_s}} )+ j{B_2}({x,y;{k_s},{l_s}} ).$$

    The light transport coefficients can be obtained by first extending the image patch $h_r^B({x^{\prime},y^{\prime};x,y} )$ periodically and then preserving the actual visible region according to the location information obtained in the localization stage. Reference [24] provides detailed information on the reconstruction stage.
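The capture and reconstruction stages above can be sketched numerically. The following toy simulation (our own illustration, not the authors' code) assumes the visible region coincides with a single ${M_s} \times {N_s}$ tile, so that the periodic extension pattern restricted to it equals the basic pattern of Eq. (2); it forms $B_1$ and $B_2$ from the four phase-shifted measurements and recovers the patch by IFFT, as in Eq. (5):

```python
import numpy as np

Ms, Ns = 8, 8                      # size of the assumed local visible region
rng = np.random.default_rng(0)
h_true = rng.random((Ns, Ms))      # toy visible image patch (ground truth)

a, b = 0.5, 0.5                    # average intensity and contrast

def basic_pattern(ks, ls, phi):
    # basic pattern of Eq. (2) over one Ms x Ns tile
    yy, xx = np.mgrid[0:Ns, 0:Ms]
    return a + b * np.cos(2 * np.pi * (ks * xx / Ms + ls * yy / Ns) + phi)

def measure(ks, ls, phi):
    # camera pixel intensity: inner product of pattern and coefficients
    return np.sum(basic_pattern(ks, ls, phi) * h_true)

# capture stage: four phase shifts per frequency -> B1 (real), B2 (imaginary)
H = np.zeros((Ns, Ms), dtype=complex)
for ls in range(Ns):
    for ks in range(Ms):
        B1 = measure(ks, ls, 0) - measure(ks, ls, np.pi)
        B2 = measure(ks, ls, np.pi / 2) - measure(ks, ls, 3 * np.pi / 2)
        H[ls, ks] = B1 + 1j * B2   # Eq. (5)

# reconstruction stage: inverse FFT recovers the patch (2b = 1 here)
h_rec = np.real(np.fft.ifft2(H))
assert np.allclose(h_rec, h_true, atol=1e-8)
```

With $b = 0.5$, the factor $2b$ relating $B_1 + jB_2$ to the discrete Fourier transform of the patch equals one, so the IFFT recovers the patch exactly, consistent with the LRE reconstruction theorem.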


Fig. 2. Comparison between PSI and cPSI. In the capture stage, the sampled frequencies of PSI are deterministic, while the frequencies are randomly sampled in cPSI. After the sampled frequencies are determined, periodic extension patterns are generated according to Eq. (1). In the reconstruction stage, the visible patches are reconstructed by IFFT in PSI. For cPSI, the visible patches are reconstructed by the CS reconstruction algorithm of Eqs. (6)–(8).


2.2 cPSI: CS method to efficiently capture light transport coefficients

In this subsection, we introduce cPSI, in which the exact reconstruction condition is relaxed for highly efficient capture of light transport coefficients. CS theory is applied in the LRE method to further increase the capture efficiency. The main improvement of cPSI is in the capture and reconstruction stages. Refer to Fig. 2 for cPSI and its comparison with PSI.

  • (1) cPSI with capture stage that adopts CS sampling strategy. CS theory requires the measurement matrix to satisfy the restricted isometry condition [30], and randomly selected Fourier bases are a typical example that satisfies this condition. Equations (1)–(5) indicate that projecting the periodic extension patterns is equivalent to sampling the corresponding Fourier coefficients. Thus, we can randomly project the periodic extension patterns, which have a degree of freedom of ${M_s} \times {N_s}$, rather than the cosine patterns that would be generated for the entire projection resolution. Suppose ${S_a}$ is the set of frequencies sampled according to a certain sampling strategy, which is introduced in the following subsection. For any frequency $({{k_s},{l_s}} )\in {S_a}$, if the patterns ${\tilde{P}_\phi }({x^{\prime},y^{\prime};{k_s},{l_s}} )$ and mask $M(x^{\prime},y^{\prime};x,y)$ are linearized as vectors, then the measured real and imaginary intensities can be calculated by the first equations in Eqs. (3) and (4), and the measurement matrix is provided by the second equations in Eqs. (3) and (4).

    We use only partial sampling for cPSI; that is, the number of measured frequencies $|{{S_a}} |$ is smaller than ${M_s} \times {N_s}/2$, which is the number of frequencies required by PSI to exactly capture the light transport coefficients considering conjugate symmetry (black blocks in Fig. 2). Thus, an optimization problem must be solved to obtain the light transport coefficients.

  • (2) cPSI with reconstruction stage that calculates light transport coefficients as an optimization problem. Given the sampled set of frequencies ${S_a}$ and the measurement equations in Eqs. (3) and (4), the light transport coefficients can be obtained by solving the following optimization problem
    $$\begin{aligned} h_{cs}^{B}(x^{\prime},y^{\prime};x,y) &= \arg {\min \nolimits_{h(x^{\prime},y^{\prime};x,y)}}TV[h(x^{\prime},y^{\prime};x,y)]\\ &+ \lambda \{ {||{{F_1}[h(x^{\prime},y^{\prime};x,y)]} ||^2} + {||{{F_2}[h(x^{\prime},y^{\prime};x,y)]} ||^2}\} , \end{aligned}$$
    where $TV$ is the total variation operator [30], $\lambda$ is the regularization parameter, and
    $$\begin{aligned} {||{{F_1}[h(x^{\prime},y^{\prime};x,y)]} ||^2} &= \sum\limits_{({k_s},{l_s}) \in {S_a}} {\{ {B_1}} (x,y;{k_s},{l_s}) - \\ &[{{\tilde{P}}_0}(x^{\prime},y^{\prime};{k_s},{l_s}) - {{\tilde{P}}_\pi }(x^{\prime},y^{\prime};{k_s},{l_s})]\cdot M(x^{\prime},y^{\prime};x,y)\cdot h(x^{\prime},y^{\prime};x,y){\} ^\textrm{2}}, \end{aligned}$$
    $$\begin{aligned} {||{{F_\textrm{2}}[h(x^{\prime},y^{\prime};x,y)]} ||^2} &= \sum\limits_{({k_s},{l_s}) \in {S_a}} {\{ {B_\textrm{2}}} (x,y;{k_s},{l_s}) - \\ &[{{\tilde{P}}_{\frac{\pi }{\textrm{2}}}}(x^{\prime},y^{\prime};{k_s},{l_s}) - {{\tilde{P}}_{\frac{{\textrm{3}\pi }}{\textrm{2}}}}(x^{\prime},y^{\prime};{k_s},{l_s})]\cdot M(x^{\prime},y^{\prime};x,y)\cdot h(x^{\prime},y^{\prime};x,y){\} ^\textrm{2}}. \end{aligned}$$

    The above optimization problem can be solved according to [31]. Then, like Stage 3 of the LRE method, the light transport coefficients can be obtained by first extending image patch $h_{cs}^B(x^{\prime},y^{\prime};x,y)$ periodically and then preserving the actual visible region according to the location information obtained in the localization stage.

2.3 Sampling strategies

In this subsection, five types of sampling strategies are introduced: the square, circular, uniform random, distance-based, and characteristic-based sampling strategies. For the first two sampling strategies, we compare the zero-filled [28] and CS methods in terms of the reconstruction of light transport coefficients. The last three are random sampling strategies based on different frequency probability distributions. The first four strategies have been studied in the SI context [20,28], whereas the last takes samples according to the frequency energy distribution of a workpiece made of aluminum alloy. To our knowledge, this work is the first attempt to study a sampling strategy based on the characteristics of interreflections in the context of PSI. The frequencies sampled according to each sampling strategy are shown in Fig. 3, and the resolution ${M_s} \times {N_s}$ is assumed to be 128 × 128. In Fig. 3, the zero-frequency point is shifted to the center point, and the sampled set has left–right symmetry when the conjugate symmetric property is considered. For different resolutions, the indices of the sampled frequencies can be rescaled appropriately according to the proportion of the resolutions.


Fig. 3. Sampling strategies investigated in this study. The yellow points are the chosen frequencies. (a) Square sampling strategy. (b) Circular sampling strategy. (c) Uniform random sampling strategy. (d) Distance-based sampling strategy. (e) Characteristic-based sampling strategy.


For the last three random sampling strategies, a probability density function (PDF) is required. The PDF is defined as follows:

$$p({k_s},{l_s}) = \frac{{\rho ({k_s},{l_s})}}{P}\quad ({k_s},{l_s}) \in {\rm Z},$$
where ${k_s},{l_s}$ are frequency indices, $\rho ({k_s},{l_s})$ is the unnormalized PDF, and P is the normalization factor, which ensures that $p({k_s},{l_s})$ sums to one. Given that $\rho ({k_s},{l_s})$ is proportional to $p({k_s},{l_s})$, we only provide the formula for $\rho ({k_s},{l_s})$. The sampled set ${S_a}$ can be generated using a Monte Carlo algorithm according to $\rho ({k_s},{l_s})$ [28]. ${\rm Z}$ is the definition region of $p({k_s},{l_s})$, which corresponds to ${\rm Z} = \{ ({k_s},{l_s}) \mid 0 \le {k_s} \le {M_s} - 1,\; 0 \le {l_s} \le {N_s} - 1 \}$ in the context of cPSI.
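A sketch of the Monte Carlo step, drawing a sampled set from an unnormalized PDF. The radially decaying $\rho$ used here is an assumed stand-in; Ref. [28] gives the exact forms of the uniform and distance-based PDFs:

```python
import numpy as np

def sample_frequencies(rho, n_samples, seed=0):
    """Draw a set S_a of frequency pairs (ks, ls) without replacement,
    with probability proportional to the unnormalized PDF rho[ls, ks]."""
    rng = np.random.default_rng(seed)
    p = rho.ravel() / rho.sum()          # normalize by the factor P
    idx = rng.choice(rho.size, size=n_samples, replace=False, p=p)
    ls, ks = np.unravel_index(idx, rho.shape)
    return list(zip(ks, ls))

# example: an assumed radially decaying unnormalized PDF on a 128 x 128 grid,
# favoring frequencies near the zero-frequency point (with wrap-around)
Ms = Ns = 128
yy, xx = np.mgrid[0:Ns, 0:Ms]
r = np.hypot(np.minimum(xx, Ms - xx), np.minimum(yy, Ns - yy))
rho_demo = 1.0 / (1.0 + r)

S_a = sample_frequencies(rho_demo, int(0.16 * Ms * Ns))  # 16% sampling ratio
assert len(S_a) == len(set(S_a)) == 2621                 # distinct frequencies
```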
  • (1) Square sampling strategy. The sampled frequencies are determined by a square area around the zero frequency. $S_a^{square}$, which is sampled according to the square sampling strategy with a sampling ratio of 16%, is shown in Fig. 3(a). When the square sampling strategy is applied, we refer to the reconstruction method by zero-filling the uncaptured frequencies as SquareZeroFill (SQZF) and the CS reconstruction method as SquareCS (SQCS).
  • (2) Circular sampling strategy. The sampled frequencies are determined by a circle area around the zero frequency. $S_a^{circular}$, which is sampled according to the circular sampling strategy with a sampling ratio of 16%, is shown in Fig. 3(b). When the circular sampling strategy is applied, we refer to the reconstruction method by zero-filling the uncaptured frequencies as CircularZeroFill (CRZF) and to the CS reconstruction method as CircularCS (CRCS).
  • (3) Uniform random sampling strategy. The unnormalized PDF ${\rho _u}({k_s},{l_s})$ of this sampling strategy is constant over the entire definition region. Reference [28] provides detailed information. $S_a^u$, with a sampling ratio of 16%, was sampled according to the uniform random sampling strategy, as shown in Fig. 3(c). We refer to the CS reconstruction method when the uniform random sampling strategy is applied as UniformCS (UCS).
  • (4) Distance-based sampling strategy. The unnormalized PDF ${\rho _d}({k_s},{l_s})$ of this sampling strategy is determined by the normalized distance $r({k_s},{l_s})$ between $({k_s},{l_s})$ and the zero-frequency point. Reference [28] provides detailed information. $S_a^d$, with a sampling ratio of 16%, was sampled according to the distance-based sampling strategy, as shown in Fig. 3(d). Figures 3(c) and (d) indicate that the distance-based sampling strategy takes more samples near the zero-frequency point, whereas the uniform random sampling strategy shows no such preference. We refer to the CS reconstruction method when the distance-based sampling strategy is applied as DistantCS (DCS).
  • (5) Characteristic-based sampling strategy. The unnormalized PDF ${\rho _c}({k_s},{l_s})$ should ideally be precisely the frequency energy distribution of the Fourier spectrum of the light transport coefficients formed by interreflections. If ${\rho _c}({k_s},{l_s})$ is available, then improved imaging quality can be achieved with the same number of projections. This information is obtained from a workpiece. First, we implement PSI to measure a workpiece made of aluminum alloy, as shown in Fig. 4(b), to obtain the unnormalized PDF. Given that the Fourier spectrum distribution of the measured workpiece reflects the characteristics of interreflections in the frequency domain, frequencies with high energy can be sampled with high probability and frequencies with low energy with low probability.
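The two deterministic strategies reduce to simple index masks. A sketch under the Fig. 3 convention (zero frequency shifted to the center), with the side length and radius chosen to approximate a requested sampling ratio (the exact sizing rule is our assumption):

```python
import numpy as np

def square_mask(Ms, Ns, ratio):
    """Centered square of frequencies covering ~ratio of the Ms x Ns grid."""
    side = int(round(np.sqrt(ratio * Ms * Ns)))
    mask = np.zeros((Ns, Ms), dtype=bool)
    cy, cx = Ns // 2, Ms // 2
    mask[cy - side // 2: cy + (side + 1) // 2,
         cx - side // 2: cx + (side + 1) // 2] = True
    return mask

def circular_mask(Ms, Ns, ratio):
    """Centered disc of frequencies with area ~ratio * Ms * Ns."""
    radius = np.sqrt(ratio * Ms * Ns / np.pi)
    yy, xx = np.mgrid[0:Ns, 0:Ms]
    return np.hypot(xx - Ms // 2, yy - Ns // 2) <= radius

m = square_mask(128, 128, 0.16)
c = circular_mask(128, 128, 0.16)
# both cover roughly 16% of the 128 x 128 frequency grid
assert abs(m.mean() - 0.16) < 0.01 and abs(c.mean() - 0.16) < 0.02
```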


Fig. 4. Experimental setup and investigated objects. (a) Experimental setup. (b) Workpiece made of aluminum alloy. (c) Triangular groove composed of metal plates. (d) Metal blade. (e) Ceramic bowls. (f) V–Groove composed of two metal gauge blocks. (g) V–Groove with a sphere inside.


The sampled set $S_a^c$ with a sampling ratio of 16% according to the characteristic-based sampling strategy is shown in Fig. 3(e). The distribution is obtained from the Fourier spectrum distribution of the green shaded region of the aluminum alloy workpiece, as shown in Fig. 5(a). In Section 3.1, we introduce how this information is obtained. Figures 3(d) and (e) indicate that although the samples taken by both methods are concentrated in the central, low-frequency region, the characteristic-based sampling strategy tends to take more samples near the horizontal and vertical axes, whereas the distance-based strategy shows no such preference. We refer to the CS reconstruction method that applies the characteristic-based sampling strategy as CharacteristicCS (CCS).


Fig. 5. Training and validation of CCS. (a) The characteristic training region on the workpiece. (b) The unnormalized PDF obtained when pixels from the characteristic training region are used. (c) The 3D surface plot of the unnormalized PDF. The magnitude is shown on a logarithmic scale. (d) The validation region on the workpiece. (e) The PSNR values for different sampling strategies under various sampling ratios. (f) The SME values for different sampling strategies under various sampling ratios.


2.4 Evaluation methods

For the quantitative evaluation of the reconstruction quality of each sampling strategy, we use the peak signal-to-noise ratio (PSNR) [32] and the subpixel matching error (SME). The PSNR is used to evaluate the image quality of the sampling strategy. The SME is used to evaluate the quality of the subpixel correspondence points found by each sampling strategy. The subpixel matched point is calculated according to [22] and [23] after capturing the light transport coefficients. Given that our ultimate goal is finding the correspondence points between the camera and the projector, the errors of the subpixel matched points are the more important metric in the context of 3D shape reconstruction.

  • (1) PSNR. PSNR is defined as follows:
    $$PSNR(h,\hat{h})\textrm{ = }10{\log _{10}}[{255^2}/MSE(h,\hat{h})],$$
    where h and $\hat{h}$ are images with the same size, whose horizontal and vertical resolutions are ${M_s}$ and ${N_s}$, respectively. In this study, h is the light transport coefficients reconstructed by any of the methods introduced in the previous subsection, and $\hat{h}$ is the light transport coefficients reconstructed by the standard PSI. $MSE(h,\hat{h})$ is the mean square error between h and $\hat{h}$, defined as follows:
    $$MSE(h,\hat{h})\textrm{ = }\frac{1}{{{M_s}{N_s}}}\sum\limits_{y^{\prime} = 0}^{{N_s} - 1} {\sum\limits_{x^{\prime} = 0}^{{M_s} - 1} {[h(x^{\prime},y^{\prime}) - } } \hat{h}(x^{\prime},y^{\prime}){]^2}.$$
  • (2) SME. SME is defined as follows:
    $$SME(h,\hat{h}) = \|\mathrm{SubPix}(h) - \mathrm{SubPix}(\hat{h})\|_2,$$
    where $\mathrm{SubPix}(h)$ is the subpixel correspondence point calculated by the grayscale centroid method introduced in [22] and [23], given light transport coefficients h. $\|\cdot\|_2$ is the L2 norm of the error vector between the subpixel matched point found by the standard PSI and that found by any of the methods introduced in the previous subsection.
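The two metrics can be sketched directly from Eqs. (10)–(12); the centroid-based SubPix below is a simplified stand-in for the subpixel matching of Refs. [22,23]:

```python
import numpy as np

def psnr(h, h_ref):
    """Peak signal-to-noise ratio, Eqs. (10)-(11), for 8-bit-scaled data."""
    mse = np.mean((h - h_ref) ** 2)
    return 10 * np.log10(255**2 / mse)

def subpix(h):
    """Grayscale-centroid subpixel point of a coefficient patch: the
    intensity-weighted mean of the pixel coordinates (a simplified
    stand-in for the method of Refs. [22,23])."""
    yy, xx = np.mgrid[0:h.shape[0], 0:h.shape[1]]
    return np.array([np.sum(xx * h), np.sum(yy * h)]) / np.sum(h)

def sme(h, h_ref):
    """Subpixel matching error, Eq. (12): L2 distance between centroids."""
    return np.linalg.norm(subpix(h) - subpix(h_ref))

h_ref = np.zeros((8, 8))
h_ref[3:5, 3:5] = 100.0
assert sme(h_ref, h_ref) == 0.0
assert abs(psnr(h_ref, h_ref + 1.0) - 48.13) < 0.01   # MSE of 1 -> ~48.13 dB
```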

3. Experiments

The experimental system used in this study consisted of a projector and a camera, as shown in Fig. 4(a). The system was calibrated such that the 3D points can be calculated by triangulation [1] given a corresponding point pair. The camera is a monochrome CMOS camera with a resolution of 1920 × 1200, and the projector is a DMD-based projector with a resolution of 1920 × 1080. The patterns generated from Eq. (1) were illuminated by the projector, with which the camera was synchronized such that the projected pattern was refreshed once the camera finished recording one frame. The capture frame rate was 155 frames per second (fps). Six scenes, including a workpiece made from aluminum alloy, a triangular groove composed of metal plates, a blade made of metal, a scene containing two ceramic bowls, a V–Groove composed of two metal gauge blocks, and a V–Groove with a sphere inside, were used to investigate the performance of cPSI, as shown in Figs. 4(b)–(g).

The size of the visible region of each object was 128 × 128, resulting in 32,768 pattern projections for PSI. Reference [24] provides a detailed analysis of the determination of the size of the visible region and the number of measurements required by PSI. The optimization problem of cPSI has 128 × 128 = 16,384 unknowns to solve, whereas that of traditional SI has 1920 × 1080 = 2,073,600. Thus, the application of CS in the context of PSI brings huge benefits to the calculation efficiency (the number of unknowns is reduced by a factor of more than 126).

3.1 Training and validation of the aluminum alloy workpiece

The aluminum alloy workpiece was used to validate the effectiveness of cPSI. The two zero-filling methods, SQZF and CRZF, and the five CS methods, SQCS, CRCS, UCS, DCS, and CCS, were studied. First, the characteristic sampling function required by CCS was obtained. Then, a quantitative comparison was performed, and the reconstructed light transport coefficients are presented.

3.1.1 Obtaining the characteristic sampling function

We obtained the unnormalized PDF ${\rho _c}({{k_s},{l_s}} )$ required by the characteristic-based sampling strategy by performing PSI on a training object. Specifically, we performed PSI on the upper-left region of the aluminum alloy workpiece, shown as the green shaded region in Fig. 5(a). A total of 66,339 pixels were used for training. We refer to the region responsible for obtaining the unnormalized PDF as the characteristic training region. First, the Fourier spectrum distribution $H_r^B({{k_s},{l_s};x,y} )$ of all the pixels $({x,y} )$ inside the characteristic training region was obtained by PSI according to Eq. (5). Then, the unnormalized PDF for the characteristic-based sampling strategy was obtained by

$${\rho _c}({k_s},{l_s})\textrm{ = }\sum\limits_{(x,y) \in {\Omega _c}} {\omega (x,y)|H_r^B({k_s},{l_s};x,y)} |,$$
where $|\cdot |$ is the modulus of a complex number, and ${\Omega _c}$ corresponds to the characteristic training region. $\omega ({x,y} )$ is a weighting factor that compensates for the relative intensity of the corresponding pixel. For a 256-grayscale camera, this factor can be set as follows
$$\omega(x,y) = \frac{256}{1 + L_{\pi/2}(x,y;0,0)},$$
where $L_{\pi/2}(x,y;0,0)$ is the intensity captured by camera pixel $(x,y)$ when the extension pattern with a $\pi/2$ shift and zero frequency, ${\tilde{P}_{{\pi / 2}}}({x^{\prime},y^{\prime};0,0} )$, is projected. The obtained ${\rho _c}({{k_s},{l_s}} )$ is shown in Figs. 5(b) and (c); Fig. 5(c) shows its 3D surface. The magnitude is plotted on a logarithmic scale because the energy at low frequencies is generally orders of magnitude higher than at high frequencies.
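The two equations above can be combined into a short sketch: a weighted sum of spectral magnitudes over the training region. The function name, array shapes, and the toy random data below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def characteristic_pdf(spectra, intensities):
    """Unnormalized PDF rho_c(k, l): weighted sum of |H_r^B| over the
    characteristic training region (hedged sketch of the two equations).

    spectra:     complex array (N, K, L), H_r^B(k, l; x, y) for N training pixels.
    intensities: array (N,), L_{pi/2}(x, y; 0, 0) for the same pixels.
    """
    # Weighting factor omega(x, y) = 256 / (1 + L_{pi/2}) for an 8-bit camera.
    omega = 256.0 / (1.0 + intensities)
    # Contract the pixel axis: sum_n omega[n] * |spectra[n]|  -> shape (K, L).
    return np.tensordot(omega, np.abs(spectra), axes=1)

# Toy example with random "spectra" (hypothetical data, for shape checking only).
rng = np.random.default_rng(0)
spectra = rng.standard_normal((5, 8, 8)) + 1j * rng.standard_normal((5, 8, 8))
intensities = rng.uniform(0, 255, size=5)
rho = characteristic_pdf(spectra, intensities)
print(rho.shape)  # (8, 8)
```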

3.1.2 Quantitative comparison

The aluminum alloy workpiece was used to validate the effectiveness of CCS, and the different sampling strategies were compared. The pixels used for validation were selected from the validation region, shown as the yellow shaded region in Fig. 5(d), with a step size of 10 pixels along each axis, resulting in 924 validation pixels. Different regions of the workpiece were used for training and validation. For each of the seven sampling strategies, the PSNR and the SME are shown in Figs. 5(e) and (f), respectively; an enlarged region is indicated by the dotted rectangle in Fig. 5(f). The PSNR and SME values are averaged over the validation pixels. Each line in Figs. 5(e) and (f) corresponds to one sampling strategy, as indicated by the legend, and the horizontal axis corresponds to the sampling ratio. The calculated sampling ratios were 1%, 5%, 8%, 12%, 16%, 20%, 25%, 30%, 35%, 40%, 50%, 60%, 70%, 80%, and 90%, and each ratio corresponds to one data point on the curve.

The PSNR (higher is better) and SME (lower is better) values of CCS are the best, followed by those of DCS and UCS. These three methods converge much faster than the other four. For example, SQCS requires a sampling ratio of 70% to match the PSNR of CCS, or 50% to match its SME.
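The characteristic-based sampling evaluated here amounts to drawing a subset of frequencies with probability proportional to the unnormalized PDF. The following is a minimal sketch under that reading; the function name and the toy PDF are illustrative assumptions.

```python
import numpy as np

def characteristic_sample(rho_c, ratio, seed=None):
    """Draw frequency indices without replacement, with probability
    proportional to rho_c (hedged sketch of the CCS strategy)."""
    rng = np.random.default_rng(seed)
    p = rho_c.ravel() / rho_c.sum()            # normalize to a PDF
    n = int(round(ratio * rho_c.size))         # e.g. 12% of all frequencies
    flat = rng.choice(rho_c.size, size=n, replace=False, p=p)
    return np.unravel_index(flat, rho_c.shape) # (k, l) index arrays

rho = np.arange(1.0, 65.0).reshape(8, 8)       # toy unnormalized PDF
ks, ls = characteristic_sample(rho, 0.12, seed=0)
print(len(ks))  # 8 of the 64 frequencies (~12%)
```

High-energy frequencies (large entries of `rho`) are picked with high probability, while low-energy frequencies still have a nonzero chance of being selected.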

3.1.3 Reconstructing the light transport coefficients

The reconstructed light transport coefficients are shown for two points on the workpiece [Fig. 6(a)]. The sampling ratio was set to 12% because this ratio (the fourth data point on each curve in Fig. 5) provides a good trade-off between reconstruction quality and the number of measurements. Each of the two points, A and B, is indicated by a red circle with an upper-case letter inside.


Fig. 6. Light transport coefficients. (a) The workpiece, on which the two pixels A and B used for reconstructing the light transport coefficients are indicated. (b), (c) The reference light transport coefficients reconstructed by PSI for points A and B, respectively. (d)–(f) The light transport coefficients reconstructed by UCS, DCS, and CCS for point A, respectively. (g)–(i) The light transport coefficients reconstructed by UCS, DCS, and CCS for point B, respectively.


The previous section shows that only CCS, DCS, and UCS perform comparably in the context of cPSI; thus, only the light transport coefficients reconstructed by these three strategies are shown. The light transport coefficients obtained by standard PSI, used as the reference, are shown in Figs. 6(b) and (c) for points A and B, respectively. The location of the camera pixel is shown in the upper right corner of Figs. 6(b)–(i). For visual clarity, the light transport coefficients are scaled into 0–255, and the scale factors are shown in the upper right. Figures 6(d) and (g) show the light transport coefficients obtained by UCS for points A and B, respectively; the PSNR and SME values are shown in the lower left corner. Figures 6(e) and (h) show those obtained by DCS, and Figs. 6(f) and (i) those obtained by CCS. For both pixels, CCS is the best sampling strategy under both the PSNR and SME evaluations.

3.2 Testing the effectiveness of cPSI

Four other objects, namely, a triangular groove composed of metal plates, a scene containing two ceramic bowls, a metal blade, and a V–Groove composed of two metal gauge blocks, were used to test the efficiency of cPSI under the various sampling strategies. The results are shown in Fig. 7. The first row shows the objects studied; the yellow region is the test region from which the test pixels were selected with a step size of 10 pixels along each axis, and the number of pixels in each scene is shown in the lower right corner. The second row shows the PSNR of each sampling strategy under different sampling ratios: CCS outperformed all other strategies, as its PSNR was always the highest at any given sampling ratio. The third row shows the SME, with insets enlarging the region enclosed by the dotted rectangle. CCS also outperformed all other strategies in terms of the SME; although DCS obtained a similar SME value in some cases, CCS was never outperformed by DCS in any scenario. Thus, CCS should be the sampling strategy of choice. The PSNR and SME values in Fig. 7 are averaged over the pixels selected from the test region, and the calculated sampling ratios are the same as those used for the aluminum alloy workpiece.


Fig. 7. Testing the performance of cPSI. Each column corresponds to an investigated object, for which the image of the object, the PSNR, and the SME are provided. (a), (d), (g), (j) are the triangular groove composed of metal plates, the metal blade, the ceramic bowls, and the V–Groove composed of two metal gauge blocks, respectively. The yellow regions are the test regions where the PSNR and SME are calculated. (b), (e), (h), (k) are the PSNR values, and (c), (f), (i), (l) are the SME values for the different sampling strategies, calculated in the yellow region(s) of the corresponding object.


3.3 3D reconstruction by cPSI

The analysis in the previous section indicates that CCS achieved the best performance in terms of both PSNR and SME; the 3D shape was therefore reconstructed using CCS. The PSNR and SME values in Figs. 5 and 7 indicate that a trade-off between efficiency and accuracy is achieved at a sampling ratio of 12%, corresponding to 3,932 measurements: beyond this ratio, the reconstruction accuracy cannot be significantly improved by increasing the number of measurements. In contrast, standard PSI requires 32,768 measurements. The 3D shape reconstruction results are shown in Fig. 8, comparing traditional FPP, standard PSI, and CCS with a sampling ratio of 12%. FPP (second column of Fig. 8) failed to reconstruct the complete 3D shape for every investigated object, whereas both PSI (third column) and CCS (fourth column) successfully achieved 3D reconstruction. The difference between the 3D shapes obtained by standard PSI and CCS is negligible, yet CCS saves 88% of the measurements.


Fig. 8. 3D reconstruction results of cPSI. (a) The workpiece made of aluminum alloy. (b) The triangular groove composed of metal plates. (c) The metal blade. (d) The scene containing two ceramic bowls. (e) The V–Groove composed of two metal gauge blocks.


3.4 Accuracy analysis of cPSI

The V–Groove composed of two metal gauge blocks was used for the accuracy analysis of cPSI. To evaluate the accuracy of the 3D shape obtained by cPSI under different sampling strategies (i.e., UCS, DCS, and CCS) when interreflections dominate, planes were fitted to the upper and lower parts separately. The error maps between the fitted planes and the reconstructed data points are shown in Fig. 9. The 3D shape was reconstructed at a sampling ratio of 12%. The root mean square (RMS) error is shown under each corresponding subfigure. The RMS values indicate that CCS exhibits the highest accuracy, confirming the conclusion of Section 3.2. The accuracy analysis of FPP is shown in Fig. 1(c).
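The accuracy metric above (plane fit, then RMS of the residuals) can be sketched as a least-squares fit of z = ax + by + c; the function name and toy data below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def plane_rms(points):
    """Fit z = a*x + b*y + c by least squares and return the RMS of the
    out-of-plane residuals (sketch of the V-Groove accuracy metric)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x, y, np.ones_like(x)])   # design matrix [x y 1]
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)   # solve for (a, b, c)
    residuals = z - A @ coef
    return np.sqrt(np.mean(residuals ** 2))

# Toy data: four points on the exact plane z = 2x + y + 1 give near-zero RMS.
pts = np.array([[0, 0, 1.0], [1, 0, 3.0], [0, 1, 2.0], [1, 1, 4.0]])
print(plane_rms(pts))  # ~0.0
```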


Fig. 9. Accuracy analysis of cPSI. The V–Groove composed of two metal gauge blocks is used for the accuracy analysis, and the 3D shape reconstruction accuracy is compared for UCS, DCS, and CCS. (a) The 3D shape reconstructed by the corresponding sampling strategy. (b) The error map of the upper plane for each sampling strategy. (c) The error map of the lower plane for each sampling strategy. The RMS is shown at the bottom of each subfigure.


The V–Groove with a sphere was used to test the generalization ability of CCS for scenes containing larger gradient changes, as shown in Fig. 10. The diameter of the sphere was 22.210 mm. Both the diameter of the reconstructed sphere and the RMS error were used as quantitative indicators, with CCS at a sampling ratio of 12%. The reconstructed diameter of the sphere was 22.195 mm, and the RMS was 0.020 mm. This experiment indicates that CCS generalizes well to scenes with large gradients.
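The sphere indicators above (reconstructed diameter and RMS radial error) can be obtained from an algebraic least-squares sphere fit; the formulation below (Coope's linearization) is an assumed sketch, not necessarily the authors' fitting method, and the toy points are synthetic.

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit: |x - c|^2 = r^2 linearizes to
    2 c.x + (r^2 - |c|^2) = |x|^2. Returns center, radius, RMS radial error."""
    X = np.asarray(points, dtype=float)
    A = np.column_stack([2 * X, np.ones(len(X))])  # unknowns: (c, r^2 - |c|^2)
    b = (X ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    rms = np.sqrt(np.mean((np.linalg.norm(X - center, axis=1) - radius) ** 2))
    return center, radius, rms

# Toy data: points on a sphere of radius 11.105 mm (half the 22.210 mm diameter).
rng = np.random.default_rng(1)
d = rng.standard_normal((200, 3))
pts = 11.105 * d / np.linalg.norm(d, axis=1, keepdims=True) + [5.0, -2.0, 3.0]
center, radius, rms = fit_sphere(pts)
print(round(radius, 3), round(rms, 3))  # ~11.105, ~0.0
```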


Fig. 10. Accuracy analysis of cPSI for the scene of the V–Groove with a sphere. (a) The V–Groove with a sphere. (b) The 3D shape reconstructed by CCS (12% sampling ratio). (c) The error map of the reconstructed 3D shape.


4. Conclusion

In this study, cPSI is introduced for the highly efficient capture of light transport coefficients and 3D shape reconstruction in the presence of strong interreflections. CS is well suited to PSI when interreflections dominate for two reasons. First, PSI simplifies the structure of the light transport coefficients: the reconstructed images generally consist of speckles, whose structure is much simpler than that of the natural images reconstructed by traditional SI methods. Second, the computational complexity is greatly reduced, because only the visible patches need to be reconstructed rather than an image at the full resolution of the projector.

In cPSI, the CS method is applied such that only a subset of spectrum frequencies (12%) is required for high-accuracy capture and reconstruction. The characteristic-based sampling strategy is introduced, which samples frequencies that have high energy with high probability. The probabilistic sampling function is obtained from a workpiece made of aluminum alloy. In the context of PSI, the characteristic-based sampling strategy is compared with various state-of-the-art sampling strategies in terms of their PSNR and SME values, and it exhibits the best performance. Six objects with different materials are used to investigate the effectiveness of cPSI for 3D shape reconstruction in the presence of strong interreflections, and accuracy analysis confirms that the characteristic-based sampling strategy achieves the highest accuracy. Its PDF can be considered a learned probability distribution in the frequency domain that captures the characteristic light transport behavior of interreflections, whereas no such prior information is exploited by the other state-of-the-art sampling strategies; this explains its better reconstruction quality.

In the proposed method, applying CS in the reconstruction stage requires solving a large optimization problem, which is time-consuming. However, the calculation of the light transport coefficients is independent for each camera pixel. Thus, parallel computing on the GPU can be studied in future work to improve the calculation efficiency of cPSI.

Funding

National Natural Science Foundation of China (61735003, 61875007); National Key Research and Development Program of China (2020YFB2010701); Leading Talents Program for Enterpriser and Innovator of Qingdao (18-1-2-22-zhc).

Disclosures

The authors declare no conflicts of interest.

Data availability

No data were generated or analyzed in the presented research.

References

1. R. I. Hartley, “Triangulation,” Comput. Vis. Image Understanding 68(2), 146–157 (1997). [CrossRef]  

2. C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Laser Eng. 109, 23–59 (2018). [CrossRef]  

3. C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Laser Eng. 85, 84–103 (2016). [CrossRef]  

4. S. S. Gorthi and P. Rastogi, “Fringe projection techniques: Whither we are?” Opt. Lasers Eng. 48(2), 133–140 (2010). [CrossRef]  

5. S. K. Nayar, K. Ikeuchi, and T. Kanade, “Shape from interreflections,” Int. J. Comput. Vis. 6(3), 173–195 (1991). [CrossRef]  

6. H. Zhao, Y. Xu, H. Jiang, and X. Li, “3D shape measurement in the presence of strong interreflections by epipolar imaging and regional fringe projection,” Opt. Express 26(6), 7117–7131 (2018). [CrossRef]  

7. Y. Xu, H. Zhao, H. Jiang, and X. Li, “High-accuracy 3D shape measurement of translucent objects by fringe projection profilometry,” Opt. Express 27(13), 18421–18434 (2019). [CrossRef]  

8. Z. Zhang, X. Ma, and J. Zhong, “Single-pixel imaging by means of Fourier spectrum acquisition,” Nat. Commun. 6(1), 6225 (2015). [CrossRef]  

9. S. K. Nayar, G. Krishnan, M. D. Grossberg, and R. Raskar, “Fast separation of direct and global components of a scene using high frequency illumination,” ACM transactions on Graphics 25(3), 935–944 (2006). [CrossRef]  

10. T. Chen, H. P. Seidel, and H. P. A. Lensch, “Modulated phase-shifting for 3D scanning,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2008), pp. 3839–3846.

11. M. Gupta and S. K. Nayar, “Micro Phase Shifting,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 813–820.

12. M. Gupta, A. Agrawal, A. Veeraraghavan, and S. G. Narasimhan, “A Practical approach to 3D scanning in the presence of interreflections, subsurface scattering and defocus,” Int. J. Comput. Vis. 102(1-3), 33–55 (2013). [CrossRef]  

13. J. Gu, T. Kobayashi, M. Gupta, and S. K. Nayar, “Multiplexed illumination for scene recovery in the presence of global illumination,” in Proceedings of International Conference on Computer Vision (IEEE, 2011), pp. 691–698.

14. M. O’Toole, J. Mather, and K. N. Kutulakos, “3D Shape and Indirect appearance by structured light transport,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2014), pp. 3246–3253.

15. Y. Zhang, D. L. Lau, and Y. Yu, “Causes and Corrections for Bimodal Multi-path Scanning with Structured Light,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2019), pp. 4431–4439.

16. Q. Hu, K. G. Harding, X. Du, and D. Hamilton, “Shiny parts measurement using color separation,” Two- and Three-Dimensional Methods for Inspection and Metrology III (SPIE, 2005), pp. 60000D.

17. H. Jiang, Y. Zhou, and H. Zhao, “Using adaptive regional projection to measure parts with strong reflection,” in Proceedings of 3D Measurement Technology for Intelligent Manufacturing (SPIE, 2017), pp. 104581A.

18. Y. Xu and D. G. Aliaga, “Robust pixel classification for 3D modeling with structured light,” in Proceedings of Graphics Interface (ACM, 2007), pp.233–240.

19. Y. Xu and D. G. Aliaga, “An adaptive correspondence algorithm for modeling scenes with strong inter-reflections,” IEEE Trans. Visual. Comput. Graphics 15(3), 465–480 (2009). [CrossRef]  

20. Z. Zhang, X. Wang, G. Zheng, and J. Zhong, “Hadamard single-pixel imaging versus Fourier single-pixel imaging,” Opt. Express 25(16), 19619–19639 (2017). [CrossRef]  

21. B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D computational imaging with Single-Pixel Detectors,” Science 340(6134), 844–847 (2013). [CrossRef]  

22. H. Jiang, H. Zhai, Y. Xu, X. Li, and H. Zhao, “3D shape measurement of translucent objects based on Fourier single-pixel imaging in projector-camera system,” Opt. Express 27(23), 33564–33574 (2019). [CrossRef]  

23. H. Jiang, Q. Yang, X. Li, H. Zhao, Y. Li, and Y. Xu, “3D shape measurement in the presence of strong interreflections by using single-pixel imaging in a camera–projector system,” Opt. Express 29(3), 3609–3620 (2021). [CrossRef]  

24. H. Jiang, Y. Li, H. Zhao, X. Li, and Y. Xu, “Parallel single-pixel imaging: A general method for direct-global separation and 3D shape reconstruction under strong global illumination,” Int. J. Comput. Vis. 129(4), 1060–1086 (2021). [CrossRef]  

25. H. Jiang, S. Zhu, H. Zhao, B. Xu, and X. Li, “Adaptive regional single-pixel imaging based on the Fourier slice theorem,” Opt. Express 25(13), 15118–15130 (2017). [CrossRef]  

26. O. Katz, Y. Bromberg, and Y. Silberberg, “Compressive ghost imaging,” Appl. Phys. Lett. 95(13), 131110 (2009). [CrossRef]  

27. M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25(2), 83–91 (2008). [CrossRef]  

28. W. Meng, D. Shi, J. Huang, K. Yuan, Y. Wang, and C. Fan, “Sparse Fourier single-pixel imaging,” Opt. Express 27(22), 31490–31503 (2019). [CrossRef]  

29. L. Bian, J. Suo, X. Hu, F. Chen, and Q. Dai, “Efficient single pixel imaging in Fourier space,” J. Opt. 18(8), 085704 (2016). [CrossRef]  

30. E. J. Candes, J. K. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Comm. Pure Appl. Math. 59(8), 1207–1223 (2006). [CrossRef]  

31. C. Li, An efficient algorithm for total variation regularization with applications to the single pixel camera and compressive sensing (Rice University, 2010).

32. A. Hore and D. Ziou, “Image quality metrics: PSNR vs. SSIM,” in Proceedings of International Conference on Pattern Recognition (IEEE, 2010), pp. 2366–2369.


Hu, Q.

Q. Hu, K. G. Harding, X. Du, and D. Hamilton, “Shiny parts measurement using color separation,” Two- and Three-Dimensional Methods for Inspection and Metrology III (SPIE, 2005), pp. 60000D.

Hu, X.

L. Bian, J. Suo, X. Hu, F. Chen, and Q. Dai, “Efficient single pixel imaging in Fourier space,” J. Opt. 18(8), 085704 (2016).
[Crossref]

Huang, J.

Huang, L.

C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Laser Eng. 109, 23–59 (2018).
[Crossref]

C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Laser Eng. 85, 84–103 (2016).
[Crossref]

Ikeuchi, K.

S. K. Nayar, K. Ikeuchi, and T. Kanade, “Shape from interreflections,” Int. J. Comput. Vis. 6(3), 173–195 (1991).
[Crossref]

Jiang, H.

Kanade, T.

S. K. Nayar, K. Ikeuchi, and T. Kanade, “Shape from interreflections,” Int. J. Comput. Vis. 6(3), 173–195 (1991).
[Crossref]

Katz, O.

O. Katz, Y. Bromberg, and Y. Silberberg, “Compressive ghost imaging,” Appl. Phys. Lett. 95(13), 131110 (2009).
[Crossref]

Kelly, K. F.

M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25(2), 83–91 (2008).
[Crossref]

Kobayashi, T.

J. Gu, T. Kobayashi, M. Gupta, and S. K. Nayar, “Multiplexed illumination for scene recovery in the presence of global illumination. International Conference on Computer Vision,” in Proceedings of International Conference on Computer Vision (IEEE, 2011), pp. 691–698.

Krishnan, G.

S. K. Nayar, G. Krishnan, M. D. Grossberg, and R. Raskar, “Fast separation of direct and global components of a scene using high frequency illumination,” ACM transactions on Graphics 25(3), 935–944 (2006).
[Crossref]

Kutulakos, K. N.

M. O’Toole, J. Mather, and K. N. Kutulakos, “3D Shape and Indirect appearance by structured light transport,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2014), pp. 3246–3253.

Laska, J. N.

M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25(2), 83–91 (2008).
[Crossref]

Lau, D. L.

Y. Zhang, D. L. Lau, and Y. Yu, “Causes and Corrections for Bimodal Multi-path Scanning with Structured Light,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2019), pp. 4431–4439.

Lensch, H. P. A.

T. Chen, H. P. Seidel, and H. P. A. Lensch, “Modulated phase-shifting for 3D scanning,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2008), pp. 3839–3846.

Li, C.

C. Li, An efficient algorithm for total variation regularization with applications to the single pixel camera and compressive sensing (Rice University, 2010).

Li, X.

Li, Y.

H. Jiang, Y. Li, H. Zhao, X. Li, and Y. Xu, “Parallel single-pixel imaging: A general method for direct-global separation and 3D shape reconstruction under strong global illumination,” Int. J. Comput. Vis. 129(4), 1060–1086 (2021).
[Crossref]

H. Jiang, Q. Yang, X. Li, H. Zhao, Y. Li, and Y. Xu, “3D shape measurement in the presence of strong interreflections by using single-pixel imaging in a camera–projector system,” Opt. Express 29(3), 3609–3620 (2021).
[Crossref]

Ma, X.

Z. Zhang, X. Ma, and J. Zhong, “Single-pixel imaging by means of Fourier spectrum acquisition,” Nat. Commun. 6(1), 6225 (2015).
[Crossref]

Mather, J.

M. O’Toole, J. Mather, and K. N. Kutulakos, “3D Shape and Indirect appearance by structured light transport,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2014), pp. 3246–3253.

Meng, W.

Narasimhan, S. G.

M. Gupta, A. Agrawal, A. Veeraraghavan, and S. G. Narasimhan, “A Practical approach to 3D scanning in the presence of interreflections, subsurface scattering and defocus,” Int. J. Comput. Vis. 102(1-3), 33–55 (2013).
[Crossref]

Nayar, S. K.

S. K. Nayar, G. Krishnan, M. D. Grossberg, and R. Raskar, “Fast separation of direct and global components of a scene using high frequency illumination,” ACM transactions on Graphics 25(3), 935–944 (2006).
[Crossref]

S. K. Nayar, K. Ikeuchi, and T. Kanade, “Shape from interreflections,” Int. J. Comput. Vis. 6(3), 173–195 (1991).
[Crossref]

M. Gupta and S. K. Nayar, “Micro Phase Shifting,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 813–820.

J. Gu, T. Kobayashi, M. Gupta, and S. K. Nayar, “Multiplexed illumination for scene recovery in the presence of global illumination. International Conference on Computer Vision,” in Proceedings of International Conference on Computer Vision (IEEE, 2011), pp. 691–698.

O’Toole, M.

M. O’Toole, J. Mather, and K. N. Kutulakos, “3D Shape and Indirect appearance by structured light transport,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2014), pp. 3246–3253.

Padgett, M. J.

B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D computational imaging with Single-Pixel Detectors,” Science 340(6134), 844–847 (2013).
[Crossref]

Raskar, R.

S. K. Nayar, G. Krishnan, M. D. Grossberg, and R. Raskar, “Fast separation of direct and global components of a scene using high frequency illumination,” ACM transactions on Graphics 25(3), 935–944 (2006).
[Crossref]

Rastogi, P.

S. S. Gorthi and P. Rastogi, “Fringe projection techniques: Whither we are?” Opt. Lasers Eng. 48(2), 133–140 (2010).
[Crossref]

Romberg, J. K.

E. J. Candes, J. K. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Comm. Pure Appl. Math. 59(8), 1207–1223 (2006).
[Crossref]

Seidel, H. P.

T. Chen, H. P. Seidel, and H. P. A. Lensch, “Modulated phase-shifting for 3D scanning,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2008), pp. 3839–3846.

Shi, D.

Silberberg, Y.

O. Katz, Y. Bromberg, and Y. Silberberg, “Compressive ghost imaging,” Appl. Phys. Lett. 95(13), 131110 (2009).
[Crossref]

Sun, B.

B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D computational imaging with Single-Pixel Detectors,” Science 340(6134), 844–847 (2013).
[Crossref]

Sun, T.

M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25(2), 83–91 (2008).
[Crossref]

Suo, J.

L. Bian, J. Suo, X. Hu, F. Chen, and Q. Dai, “Efficient single pixel imaging in Fourier space,” J. Opt. 18(8), 085704 (2016).
[Crossref]

Takhar, D.

M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25(2), 83–91 (2008).
[Crossref]

Tao, T.

C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Laser Eng. 109, 23–59 (2018).
[Crossref]

E. J. Candes, J. K. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Comm. Pure Appl. Math. 59(8), 1207–1223 (2006).
[Crossref]

Veeraraghavan, A.

M. Gupta, A. Agrawal, A. Veeraraghavan, and S. G. Narasimhan, “A Practical approach to 3D scanning in the presence of interreflections, subsurface scattering and defocus,” Int. J. Comput. Vis. 102(1-3), 33–55 (2013).
[Crossref]

Vittert, L. E.

B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D computational imaging with Single-Pixel Detectors,” Science 340(6134), 844–847 (2013).
[Crossref]

Wang, X.

Wang, Y.

Welsh, S.

B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D computational imaging with Single-Pixel Detectors,” Science 340(6134), 844–847 (2013).
[Crossref]

Xu, B.

Xu, Y.

H. Jiang, Y. Li, H. Zhao, X. Li, and Y. Xu, “Parallel single-pixel imaging: A general method for direct-global separation and 3D shape reconstruction under strong global illumination,” Int. J. Comput. Vis. 129(4), 1060–1086 (2021).
[Crossref]

H. Jiang, Q. Yang, X. Li, H. Zhao, Y. Li, and Y. Xu, “3D shape measurement in the presence of strong interreflections by using single-pixel imaging in a camera–projector system,” Opt. Express 29(3), 3609–3620 (2021).
[Crossref]

H. Jiang, H. Zhai, Y. Xu, X. Li, and H. Zhao, “3D shape measurement of translucent objects based on Fourier single-pixel imaging in projector-camera system,” Opt. Express 27(23), 33564–33574 (2019).
[Crossref]

Y. Xu, H. Zhao, H. Jiang, and X. Li, “High-accuracy 3D shape measurement of translucent objects by fringe projection profilometry,” Opt. Express 27(13), 18421–18434 (2019).
[Crossref]

H. Zhao, Y. Xu, H. Jiang, and X. Li, “3D shape measurement in the presence of strong interreflections by epipolar imaging and regional fringe projection,” Opt. Express 26(6), 7117–7131 (2018).
[Crossref]

Y. Xu and D. G. Aliaga, “An adaptive correspondence algorithm for modeling scenes with strong inter-reflections,” IEEE Trans. Visual. Comput. Graphics 15(3), 465–480 (2009).
[Crossref]

Y. Xu and D. G. Aliaga, “Robust pixel classification for 3D modeling with structured light,” in Proceedings of Graphics Interface (ACM, 2007), pp.233–240.

Yang, Q.

Yin, W.

C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Laser Eng. 109, 23–59 (2018).
[Crossref]

Yu, Y.

Y. Zhang, D. L. Lau, and Y. Yu, “Causes and Corrections for Bimodal Multi-path Scanning with Structured Light,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2019), pp. 4431–4439.

Yuan, K.

Zhai, H.

Zhang, M.

C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Laser Eng. 85, 84–103 (2016).
[Crossref]

Zhang, Y.

Y. Zhang, D. L. Lau, and Y. Yu, “Causes and Corrections for Bimodal Multi-path Scanning with Structured Light,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2019), pp. 4431–4439.

Zhang, Z.

Z. Zhang, X. Wang, G. Zheng, and J. Zhong, “Hadamard single-pixel imaging versus Fourier single-pixel imaging,” Opt. Express 25(16), 19619–19639 (2017).
[Crossref]

Z. Zhang, X. Ma, and J. Zhong, “Single-pixel imaging by means of Fourier spectrum acquisition,” Nat. Commun. 6(1), 6225 (2015).
[Crossref]

Zhao, H.

Zheng, G.

Zhong, J.

Z. Zhang, X. Wang, G. Zheng, and J. Zhong, “Hadamard single-pixel imaging versus Fourier single-pixel imaging,” Opt. Express 25(16), 19619–19639 (2017).
[Crossref]

Z. Zhang, X. Ma, and J. Zhong, “Single-pixel imaging by means of Fourier spectrum acquisition,” Nat. Commun. 6(1), 6225 (2015).
[Crossref]

Zhou, Y.

H. Jiang, Y. Zhou, and H. Zhao, “Using adaptive regional projection to measure parts with strong reflection,” in Proceedings of 3D Measurement Technology for Intelligent Manufacturing (SPIE, 2017), pp. 104581A.

Zhu, S.

Ziou, D.

A. Hore and D. Ziou, “Image quality metrics: PSNR vs. SSIM,” in Proceedings of International Conference on Pattern Recognition (IEEE, 2010), pp. 2366–2369.

Zuo, C.

C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Laser Eng. 109, 23–59 (2018).
[Crossref]

C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Laser Eng. 85, 84–103 (2016).
[Crossref]

ACM transactions on Graphics (1)

S. K. Nayar, G. Krishnan, M. D. Grossberg, and R. Raskar, “Fast separation of direct and global components of a scene using high frequency illumination,” ACM transactions on Graphics 25(3), 935–944 (2006).
[Crossref]

Appl. Phys. Lett. (1)

O. Katz, Y. Bromberg, and Y. Silberberg, “Compressive ghost imaging,” Appl. Phys. Lett. 95(13), 131110 (2009).
[Crossref]

Comm. Pure Appl. Math. (1)

E. J. Candes, J. K. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Comm. Pure Appl. Math. 59(8), 1207–1223 (2006).
[Crossref]

Comput. Vis. Image Understanding (1)

R. I. Hartley, “Triangulation,” Comput. Vis. Image Understanding 68(2), 146–157 (1997).
[Crossref]

IEEE Signal Process. Mag. (1)

M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25(2), 83–91 (2008).
[Crossref]

IEEE Trans. Visual. Comput. Graphics (1)

Y. Xu and D. G. Aliaga, “An adaptive correspondence algorithm for modeling scenes with strong inter-reflections,” IEEE Trans. Visual. Comput. Graphics 15(3), 465–480 (2009).
[Crossref]

Int. J. Comput. Vis. (3)

M. Gupta, A. Agrawal, A. Veeraraghavan, and S. G. Narasimhan, “A Practical approach to 3D scanning in the presence of interreflections, subsurface scattering and defocus,” Int. J. Comput. Vis. 102(1-3), 33–55 (2013).
[Crossref]

S. K. Nayar, K. Ikeuchi, and T. Kanade, “Shape from interreflections,” Int. J. Comput. Vis. 6(3), 173–195 (1991).
[Crossref]

H. Jiang, Y. Li, H. Zhao, X. Li, and Y. Xu, “Parallel single-pixel imaging: A general method for direct-global separation and 3D shape reconstruction under strong global illumination,” Int. J. Comput. Vis. 129(4), 1060–1086 (2021).
[Crossref]

J. Opt. (1)

L. Bian, J. Suo, X. Hu, F. Chen, and Q. Dai, “Efficient single pixel imaging in Fourier space,” J. Opt. 18(8), 085704 (2016).
[Crossref]

Nat. Commun. (1)

Z. Zhang, X. Ma, and J. Zhong, “Single-pixel imaging by means of Fourier spectrum acquisition,” Nat. Commun. 6(1), 6225 (2015).
[Crossref]

Opt. Express (7)

Opt. Laser Eng. (2)

C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Laser Eng. 109, 23–59 (2018).
[Crossref]

C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Laser Eng. 85, 84–103 (2016).
[Crossref]

Opt. Lasers Eng. (1)

S. S. Gorthi and P. Rastogi, “Fringe projection techniques: Whither we are?” Opt. Lasers Eng. 48(2), 133–140 (2010).
[Crossref]

Science (1)

B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D computational imaging with Single-Pixel Detectors,” Science 340(6134), 844–847 (2013).
[Crossref]

Other (10)

J. Gu, T. Kobayashi, M. Gupta, and S. K. Nayar, “Multiplexed illumination for scene recovery in the presence of global illumination. International Conference on Computer Vision,” in Proceedings of International Conference on Computer Vision (IEEE, 2011), pp. 691–698.

M. O’Toole, J. Mather, and K. N. Kutulakos, “3D Shape and Indirect appearance by structured light transport,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2014), pp. 3246–3253.

Y. Zhang, D. L. Lau, and Y. Yu, “Causes and Corrections for Bimodal Multi-path Scanning with Structured Light,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2019), pp. 4431–4439.

Q. Hu, K. G. Harding, X. Du, and D. Hamilton, “Shiny parts measurement using color separation,” Two- and Three-Dimensional Methods for Inspection and Metrology III (SPIE, 2005), pp. 60000D.

H. Jiang, Y. Zhou, and H. Zhao, “Using adaptive regional projection to measure parts with strong reflection,” in Proceedings of 3D Measurement Technology for Intelligent Manufacturing (SPIE, 2017), pp. 104581A.

Y. Xu and D. G. Aliaga, “Robust pixel classification for 3D modeling with structured light,” in Proceedings of Graphics Interface (ACM, 2007), pp.233–240.

T. Chen, H. P. Seidel, and H. P. A. Lensch, “Modulated phase-shifting for 3D scanning,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2008), pp. 3839–3846.

M. Gupta and S. K. Nayar, “Micro Phase Shifting,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 813–820.

C. Li, An efficient algorithm for total variation regularization with applications to the single pixel camera and compressive sensing (Rice University, 2010).

A. Hore and D. Ziou, “Image quality metrics: PSNR vs. SSIM,” in Proceedings of International Conference on Pattern Recognition (IEEE, 2010), pp. 2366–2369.

Data availability

No data were generated or analyzed in the presented research.



Figures (10)

Fig. 1. 3D shape reconstruction in the presence of interreflections. (a) The mixture of components in the radiance captured by a camera pixel. (b) The overlapped patterns caused by interreflections. (c) Interreflections cause data incompleteness and large errors for the traditional FPP method.
Fig. 2. Comparison between PSI and cPSI. In the capture stage, the sampled frequencies of PSI are deterministic, whereas in cPSI they are randomly sampled. After the sampled frequencies are determined, periodic extension patterns are generated according to Eq. (1). In the reconstruction stage, the visible patches in PSI are reconstructed by the IFFT; in cPSI, they are reconstructed by the CS reconstruction algorithm of Eqs. (6)–(8).
Fig. 3. Sampling strategies investigated in this study. The yellow points are the chosen frequencies. (a) Square sampling strategy. (b) Circular sampling strategy. (c) Uniform random sampling strategy. (d) Distance-based sampling strategy. (e) Characteristic-based sampling strategy.
Fig. 4. Experimental setup and investigated objects. (a) Experimental setup. (b) Workpiece made of aluminum alloy. (c) Triangular groove composed of metal plates. (d) Metal blade. (e) Ceramic bowls. (f) V-groove composed of two metal gauge blocks. (g) V-groove with a sphere inside.
Fig. 5. Training and validation of CCS. (a) The characteristic training region on the workpiece. (b) The unnormalized PDF obtained when pixels from the characteristic training region are used. (c) The 3D surface plot of the unnormalized PDF; the magnitude is shown on a logarithmic scale. (d) The validation region on the workpiece. (e) The PSNR values for the different sampling strategies under various sampling ratios. (f) The SME values for the different sampling strategies under various sampling ratios.
Fig. 6. Light transport coefficients. (a) The workpiece, on which the two pixels A and B whose light transport coefficients are reconstructed are indicated. (b) The reference light transport coefficients reconstructed by PSI for point A. (c) The reference light transport coefficients reconstructed by PSI for point B. (d)–(f) The light transport coefficients reconstructed by UCS, DCS, and CCS, respectively, for point A. (g)–(i) The light transport coefficients reconstructed by UCS, DCS, and CCS, respectively, for point B.
Fig. 7. Testing the performance of cPSI. Each row corresponds to an investigated object, for which an image of the object, the PSNR, and the SME are provided. (a), (d), (g), (j) show the triangular groove composed of metal plates, the metal blade, the ceramic bowls, and the V-groove composed of two metal gauge blocks, respectively. The yellow regions are the test regions where PSNR and SME are calculated. (b), (e), (h), (k) show the PSNR values, and (c), (f), (i), (l) the SME values, for the different sampling strategies, calculated in the yellow region(s) of the corresponding object.
Fig. 8. 3D reconstruction results of cPSI. (a) The workpiece made of aluminum alloy. (b) The triangular groove composed of metal plates. (c) The metal blade. (d) The scene containing two ceramic bowls. (e) The V-groove composed of two metal gauge blocks.
Fig. 9. Accuracy analysis of cPSI. The V-groove composed of two metal gauge blocks is used for the accuracy analysis, and the 3D shape reconstruction accuracy is compared for UCS, DCS, and CCS. (a) The 3D shape reconstructed by each sampling strategy. (b) The error map of the upper plane for each sampling strategy. (c) The error map of the lower plane for each sampling strategy. The RMS error is shown at the bottom of each subfigure.
Fig. 10. Accuracy analysis of cPSI using the scene of the V-groove with a sphere. (a) The V-groove with a sphere. (b) The 3D shape reconstructed by CCS (12% sampling ratio). (c) The error map of the reconstructed 3D shape.

Equations (14)


$$\tilde{P}_{\phi}(x,y;k_s,l_s)=\sum_{r_1=0}^{M/M_s}\sum_{r_2=0}^{N/N_s}P_{\phi}^{B}(x-r_1M_s,\,y-r_2N_s;k_s,l_s)=a+b\cos\!\left[2\pi\!\left(\frac{k_sx}{M_s}+\frac{l_sy}{N_s}\right)+\phi\right],\tag{1}$$

$$P_{\phi}^{B}(x,y;k_s,l_s)=a+b\cos\!\left[2\pi\!\left(\frac{k_sx}{M_s}+\frac{l_sy}{N_s}\right)+\phi\right].\tag{2}$$
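As an illustration, the periodic-extension pattern of Eqs. (1)–(2) can be generated with a few lines of NumPy. The function names and the 8-bit gray-level choice a = b = 127.5 are illustrative assumptions, not values from the paper:

```python
import numpy as np

def fringe_block(Ms, Ns, ks, ls, phi, a=127.5, b=127.5):
    """One Ms x Ns fringe block, Eq. (2): a + b*cos(2*pi*(ks*x/Ms + ls*y/Ns) + phi)."""
    x = np.arange(Ms)[np.newaxis, :]   # projector column index
    y = np.arange(Ns)[:, np.newaxis]   # projector row index
    return a + b * np.cos(2 * np.pi * (ks * x / Ms + ls * y / Ns) + phi)

def periodic_extension(M, N, Ms, Ns, ks, ls, phi):
    """Tile the block over the full M x N projector area, Eq. (1)."""
    block = fringe_block(Ms, Ns, ks, ls, phi)
    reps = (int(np.ceil(N / Ns)), int(np.ceil(M / Ms)))  # (rows, cols) of tiles
    return np.tile(block, reps)[:N, :M]                  # crop to projector size
```

Because the pattern is a tiling of one Ms x Ns block, shifting by (Ms, Ns) reproduces the same values, which is what lets every visible patch be reconstructed in parallel.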
$$B_1(x',y';k_s,l_s)=L_0(x',y';k_s,l_s)-L_{\pi}(x',y';k_s,l_s)=\sum_{x,y}\left[\tilde{P}_0(x,y;k_s,l_s)-\tilde{P}_{\pi}(x,y;k_s,l_s)\right]M(x,y;x',y')\,h(x,y;x',y'),\tag{3}$$

$$B_2(x',y';k_s,l_s)=L_{\pi/2}(x',y';k_s,l_s)-L_{3\pi/2}(x',y';k_s,l_s)=\sum_{x,y}\left[\tilde{P}_{\pi/2}(x,y;k_s,l_s)-\tilde{P}_{3\pi/2}(x,y;k_s,l_s)\right]M(x,y;x',y')\,h(x,y;x',y'),\tag{4}$$

$$H_r^{B}(k_s,l_s;x',y')=B_1(x',y';k_s,l_s)+jB_2(x',y';k_s,l_s).\tag{5}$$
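Equations (3)–(5) amount to combining four phase-shifted captures into one complex Fourier coefficient per camera pixel. A minimal sketch (the function name and per-pixel inputs are illustrative; each argument is the image captured under the corresponding phase shift):

```python
import numpy as np

def spectrum_sample(L0, Lhalf, Lpi, L3half):
    """Differential four-step combination, cf. Eqs. (3)-(5)."""
    B1 = np.asarray(L0, float) - np.asarray(Lpi, float)        # Eq. (3): L_0 - L_pi
    B2 = np.asarray(Lhalf, float) - np.asarray(L3half, float)  # Eq. (4): L_{pi/2} - L_{3pi/2}
    return B1 + 1j * B2                                        # Eq. (5): complex coefficient
```

The two differences cancel the constant offset a (and any environment illumination common to all four captures), leaving only the modulated component.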
$$h_{cs}^{B}(x,y;x',y')=\arg\min_{h(x,y;x',y')}\;TV\!\left[h(x,y;x',y')\right]+\lambda\left\{\left\|F_1[h(x,y;x',y')]\right\|^2+\left\|F_2[h(x,y;x',y')]\right\|^2\right\},\tag{6}$$

$$\left\|F_1[h(x,y;x',y')]\right\|^2=\sum_{(k_s,l_s)\in S_a}\left\{B_1(x',y';k_s,l_s)-\sum_{x,y}\left[\tilde{P}_0(x,y;k_s,l_s)-\tilde{P}_{\pi}(x,y;k_s,l_s)\right]M(x,y;x',y')\,h(x,y;x',y')\right\}^2,\tag{7}$$

$$\left\|F_2[h(x,y;x',y')]\right\|^2=\sum_{(k_s,l_s)\in S_a}\left\{B_2(x',y';k_s,l_s)-\sum_{x,y}\left[\tilde{P}_{\pi/2}(x,y;k_s,l_s)-\tilde{P}_{3\pi/2}(x,y;k_s,l_s)\right]M(x,y;x',y')\,h(x,y;x',y')\right\}^2.\tag{8}$$
$$p(k_s,l_s)=\frac{\rho(k_s,l_s)\,P(k_s,l_s)}{Z},\tag{9}$$
$$PSNR(h,\hat{h})=10\log_{10}\!\left[255^2/MSE(h,\hat{h})\right],\tag{10}$$

$$MSE(h,\hat{h})=\frac{1}{M_sN_s}\sum_{y=0}^{N_s-1}\sum_{x=0}^{M_s-1}\left[h(x,y)-\hat{h}(x,y)\right]^2.\tag{11}$$
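Equations (10)–(11) are the standard 8-bit PSNR computation; a minimal sketch:

```python
import numpy as np

def mse(h, h_hat):
    """Mean squared error over the Ms x Ns patch, Eq. (11)."""
    h, h_hat = np.asarray(h, float), np.asarray(h_hat, float)
    return np.mean((h - h_hat) ** 2)

def psnr(h, h_hat, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit data, Eq. (10)."""
    e = mse(h, h_hat)
    return np.inf if e == 0 else 10 * np.log10(peak ** 2 / e)
```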
$$SME(h,\hat{h})=\left\|\mathrm{SubPix}(h)-\mathrm{SubPix}(\hat{h})\right\|_2,\tag{12}$$
$$\rho_c(k_s,l_s)=\sum_{(x',y')\in\Omega_c}\omega(x',y')\left|H_r^{B}(k_s,l_s;x',y')\right|,\tag{13}$$

$$\omega(x',y')=\frac{256}{1+L_{\pi/2}(x',y';0,0)}.\tag{14}$$
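The characteristic-based strategy of Eq. (9) draws frequencies with probability proportional to an importance map such as the weighted spectral magnitude ρ_c of Eqs. (13)–(14). A hedged sketch of the sampling step only, assuming a generic unnormalized importance map (the helper name is hypothetical):

```python
import numpy as np

def sample_frequencies(importance, n_samples, rng=None):
    """Draw distinct (k_s, l_s) pairs with probability proportional to the
    unnormalized importance map, cf. Eq. (9); Z is the normalizing sum."""
    rng = np.random.default_rng(rng)
    p = importance.ravel() / importance.sum()     # normalize by Z to a PDF
    idx = rng.choice(importance.size, size=n_samples, replace=False, p=p)
    k, l = np.unravel_index(idx, importance.shape)
    return list(zip(k.tolist(), l.tolist()))
```

Sampling without replacement keeps the measured frequency set S_a free of duplicates, so each projected pattern contributes new information.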