Abstract

High-speed panoramic three-dimensional (3D) shape measurement can be achieved by introducing plane mirrors into the traditional fringe projection profilometry (FPP) system because such a system simultaneously captures fringe patterns from three different perspectives (i.e., by a real camera and two virtual cameras in the plane mirrors). However, calibrating such a system is nontrivial due to the complicated setup. This work introduces a flexible new technique to calibrate such a system. We first present the mathematical representation of the plane mirror, and then mathematically prove that it only requires the camera to observe a set of feature point pairs (including real points and virtual points) to generate a solution to the reflection matrix of a plane mirror. By calibrating the virtual and real cameras in the same world coordinate system, 3D point cloud data obtained from real and virtual perspectives can be automatically aligned to generate a panoramic 3D model of the object. Finally, we developed a system to verify the performance of the proposed calibration technique for panoramic 3D shape measurement.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Optical 3D shape measurement techniques are extensively applied in various fields such as AR/VR, on-line inspection, disease diagnosis, industrial quality control, and human-computer interaction. Among the many optical 3D measurement methods, fringe projection profilometry (FPP) is one of the most promising techniques in optical metrology because it can rapidly recover the 3D shape of test objects [1–6]. As a non-contact, high-accuracy 3D shape measurement technique, FPP has been demonstrated to measure dynamically changing complex surfaces, such as rapidly moving, spinning, or dynamically deforming objects [7–13]. However, a conventional FPP system with a single projector and a single camera can only acquire the 3D shape from one perspective due to its limited or occluded field of view.

To achieve 360-degree overall 3D shape measurements of objects with complex surfaces, it is necessary to perform multiple measurements from different views. Existing methods typically use a turn-table [14,15], a robot arm [16], or plane mirrors [17–19]. The first method places the object on a turn-table and acquires 3D data as the object is rotated to different perspectives. Alternatively, the second method mounts the measurement system on a movable robot arm that performs multiple measurements around the object. After scanning, all scanned data are processed with complex and time-consuming point cloud registration algorithms, such as Iterative Closest Point (ICP), to merge the multiple scans together [20,21]. Since these methods require measurements at different times plus computationally expensive post-processing, they cannot be used for real-time panoramic 3D measurement of dynamic scenes.

Different from the two aforementioned methods, measurement systems with two plane mirrors can achieve high-speed panoramic 3D shape measurement because they simultaneously capture deformed fringe images of the measured object from three different perspectives (i.e., one real camera and two virtual cameras realized by the plane mirrors). Epstein et al. [17] first introduced plane mirrors into FPP to create virtual cameras and projectors. By tracking the relative positions of the camera, projector, and mirrors, their interactive structured-light system can generate 3D points that accurately estimate the pose of a mirror while also reconstructing 3D points on the object. However, this technique needs multiple measurements because the entire surface of the object cannot be illuminated at once. To solve this issue, Lanman et al. [18] presented an orthographic projection system using a digital light processing (DLP) projector and a Fresnel lens, which illuminated passive optical scatterers to create a volumetric display. They designed an unambiguous Gray code sequence to facilitate establishing the correspondence between projectors and cameras, recovering dense 3D point cloud data of the entire object surface.

Because of the plane mirrors, the mathematical model of the system is more complex, making simple and effective calibration difficult. Mariottini et al. [22] systematically studied the catadioptric properties of plane mirrors and proposed an ideal catadioptric model that converts the virtual surfaces reflected by a mirror into their true positions. Following this idea, Chen et al. [19] proposed a simple calibration method that obtains the calibration data of the plane mirrors by measuring the 3D data of two speckle patterns pre-made on the mirrors. With the obtained calibration data, all surface portions can be converted into a common global coordinate system, allowing straightforward full-surface 360-degree profile and deformation measurements. However, this method requires front-surface-reflection plane mirrors; otherwise the mirror thickness must be taken into account [23]. Furthermore, the calibration accuracy may suffer from the non-uniform thickness of the speckle patterns. In addition, since the speckle pattern is fixed on the plane mirror, the measurement volume may be limited.

In this work, a new calibration method is proposed that eliminates the use of speckle patterns while accurately calibrating the panoramic 3D shape measurement system with plane mirrors. First, we review and reproduce the ideal reflection model for the plane mirror. Then, the calibration method for the mirrors is derived mathematically to ensure the effectiveness and feasibility of the whole calibration process. Different from the traditional pattern-based calibration method, our method only requires the camera to observe a set of feature point pairs (including real points and virtual points) to solve for the reflection matrix of each plane mirror, enabling flexible, high-performance calibration. Through an initial estimation of the reflection matrix followed by precise calibration using the Levenberg-Marquardt algorithm with a bundle adjustment strategy, high-precision calibration information can be acquired to recover the full-surface 3D shape of an object. Experimental results verify that our method can achieve high-accuracy panoramic 3D shape measurement.

2. Principle

2.1 The ideal reflection model for the plane mirror

In this subsection, we first discuss in detail how to establish the ideal reflection model for the plane mirror as shown in Fig. 1.

 

Fig. 1. The schematic diagram of the ideal reflection model for the plane mirror.


In Fig. 1, ${X^o}$ is an arbitrary 3D point of the tested object with coordinates ${(x^o, y^o, z^o)}$ in the world coordinate system ${\left \{O; X, Y, Z\right \}}$, and ${d^r_o}$ is the distance between ${X^o}$ and the mirror. Due to the reflection of the plane mirror, ${X^r}$ is the virtual point corresponding to ${X^o}$, with coordinates ${(x^r, y^r, z^r)}$ in the same world coordinate system. The two light paths (${\overrightarrow {OX^o}}$ and ${\overrightarrow {X^rO}}$) are constructed to express the relationship between ${X^o}$ and ${X^r}$ in vector form as

$$\overrightarrow{OX^r} = \overrightarrow{OX^o} + \overrightarrow{X^oX^r}.$$
In the world coordinate system, the mirror can be modeled by ${n^r}$ and ${d^r_w}$, where ${n^r}$ is the normal vector of the mirror and ${d^r_w}$ is the distance between ${O}$ and the mirror. Since the real point ${X^o}$ and the virtual point ${X^r}$ are symmetric with respect to the mirror (i.e., ${\overrightarrow {X^oX^r} = 2 d^r_o \overrightarrow {n^r}}$), Eq. (1) can be rewritten as
$$\overrightarrow{OX^r} = \overrightarrow{OX^o} + 2 d^r_o \overrightarrow{n^r}.$$
Besides, given that ${d^r_o}$ is a scalar variable related to ${X^o}$, the relationship between ${d^r_o}$ and ${\overrightarrow {OX^o}}$ can be expressed as
$${d^r_o} = {d^r_w} - \overrightarrow{OX^o} \cdot \overrightarrow{n^r},$$
where ${\cdot }$ denotes the dot product of two vectors, ${\overrightarrow {OX^o} \cdot \overrightarrow {n^r}}$ is a scalar value instead of a vector that represents the projection distance of ${\overrightarrow {OX^o}}$ onto ${\overrightarrow {n^r}}$, so combining Eqs. (2) and (3) yields
$$\overrightarrow{OX^r} = \overrightarrow{OX^o} + 2 d^r_w \overrightarrow{n^r} - 2\overrightarrow{n^r}(\overrightarrow{OX^o} \cdot \overrightarrow{n^r}).$$
It should be noted that ${X^o}$, ${X^r}$, and ${n^r}$ are matrices with the size of ${3 \times 1}$. Based on matrix theory, due to ${\overrightarrow {OX^o} \cdot \overrightarrow {n^r} = {(X^o)}^{T}n^r = {(n^r)}^{T}X^o}$, Eq. (4) can also be rewritten in the matrix form as:
$$X^r = (I - 2n^r{(n^r)}^{T})X^o + 2 d^r_w n^r,$$
where ${I}$ is the ${3 \times 3}$ identity matrix. So we have
$$\left[ \begin{array}{cc} X^r\\ 1 \end{array} \right ] = D^r \left[ \begin{array}{cc} X^o\\ 1 \end{array} \right ].$$
So the reflection matrix ${D^r}$ for plane mirrors can be defined as,
$${D^r} = \left[ \begin{array}{cc} I - 2n^r{(n^r)}^{T} & 2 d^r_w n^r\\ 0 & 1 \end{array} \right ].$$
As noted in [22], ${D^r}$ is involutory (i.e., ${(D^r)^{-1} = {D^r}}$), so
$$D^r \left[ \begin{array}{cc} X^r\\ 1 \end{array} \right ] = \left[ \begin{array}{cc} X^o\\ 1 \end{array} \right ].$$
According to Eq. (8), the reflection matrix ${D^r}$ can be obtained immediately if ${n^r}$ and ${d^r_w}$ are known, and the 3D point cloud data obtained from virtual perspectives can be converted into real ones in the world coordinate system. As a result, the core challenge for the high-precision panoramic 3D measurement is to accurately calculate ${n^r}$ and ${d^r_w}$.
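As a concrete check of Eqs. (5)–(8), the reflection matrix can be assembled from ${n^r}$ and ${d^r_w}$ and its involutory property verified numerically. The following is a minimal NumPy sketch (the function name and example values are ours, not part of the original system):

```python
import numpy as np

def reflection_matrix(n_r, d_w):
    """Build the 4x4 reflection matrix D^r of Eq. (7) from the mirror
    normal n^r (3-vector, normalized here) and the origin-to-mirror
    distance d^r_w."""
    n = np.asarray(n_r, dtype=float).reshape(3, 1)
    n = n / np.linalg.norm(n)                  # the model assumes a unit normal
    D = np.eye(4)
    D[:3, :3] = np.eye(3) - 2.0 * (n @ n.T)    # Householder-type reflection part
    D[:3, 3:] = 2.0 * d_w * n                  # translation term 2 d^r_w n^r
    return D

# Example mirror: normal along z, 0.5 units from the origin (plane z = 0.5).
D = reflection_matrix([0.0, 0.0, 1.0], 0.5)
assert np.allclose(D @ D, np.eye(4))           # involutory: (D^r)^(-1) = D^r
```

A point lying on the mirror plane itself (e.g., ${(0.3, -0.2, 0.5)}$ for this example mirror) is mapped to itself, which is a quick sanity check of the ${2 d^r_w n^r}$ translation term.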

2.2 The calibration method of the mirrors

At present, the conventional method requires artificially attaching a printed paper pattern to the plane mirror to acquire the mirror's attitude information for calibration [19]. However, this method requires a front-surface-reflection plane mirror; otherwise the mirror thickness must be taken into account [23]. In addition, the non-uniform thickness of the printed speckle paper inevitably introduces errors into the calibration results. Moreover, since the paper is fixed on the plane mirror, the 3D measurement volume of the mirror-assisted system is limited.

Based on Eq. (8), the camera only needs to observe ${N}$ 3D feature point pairs (consisting of real points ${X^o}$ and virtual points ${X^r}$) to solve for the reflection matrix of a plane mirror. Moreover, solving Eq. (8) in the least-squares sense is a nonlinear minimization problem, which can be handled by the Levenberg-Marquardt algorithm provided that accurate initial guesses of ${n^r}$ and ${d^r_w}$ are available.

In this subsection, the new calibration method for plane mirrors is discussed mathematically to ensure the effectiveness and feasibility of the calibration process. It is divided into two steps: the initial estimation of the reflection matrix ${D^r}$, and the precise calibration using the Levenberg-Marquardt algorithm with the bundle adjustment strategy. First, let ${n^r}$ be ${(a^r, b^r, c^r)}$; from Eq. (8), we have

$$[1-2(a^r)^2]x^r - 2a^rb^ry^r - 2a^rc^rz^r + 2a^rd^r_w = x^o,$$
$$-2 a^rb^rx^r + [1-2(b^r)^2]y^r - 2 b^rc^rz^r + 2b^rd^r_w = y^o,$$
$$-2 a^rc^rx^r - 2 b^rc^ry^r + [1-2(c^r)^2]z^r + 2c^rd^r_w = z^o.$$
Eliminating ${d^r_w}$ from Eqs. (9) and (10) leads to
$$a^r(y^r-y^o) + b^r(x^o - x^r) = 0.$$
Likewise, the other two formulas can be derived from Eqs. (9)–(11):
$$a^r(z^r-z^o) + c^r(x^o - x^r) = 0,$$
and
$$b^r(z^r-z^o) + c^r(y^o - y^r) = 0.$$
So combining Eqs. (12)–(14) yields
$$\left[ \begin{array}{ccc} y^r-y^o & x^o - x^r & 0\\ z^r-z^o & 0 & x^o - x^r\\ 0 & z^r-z^o & y^o - y^r \end{array} \right ] \left[ \begin{array}{ccc} a^r\\ b^r\\ c^r \end{array} \right ] = 0.$$
Solving Eq. (15) is a least-squares minimization problem whose solution provides an initial guess of ${n^r (a^r, b^r, c^r)}$. In our method, singular value decomposition (SVD) is applied to the stacked coefficient matrix, and the last column vector of ${V}$ is taken as the initial guess ${n_0^r (a_0^r, b_0^r, c_0^r)}$. In addition, since ${\overrightarrow {n^r}}$ is parallel to ${\overrightarrow {X^oX^r}}$, Eq. (15) can also be derived from ${\overrightarrow {n^r} \times \overrightarrow {X^oX^r} = 0}$. After acquiring ${n_0^r}$, Eqs. (9)–(11) can be rewritten to estimate the initial guess of ${d^r_w}$:
$$f(d^r_w) = \sum_{n=1}^N r_1^2(d^r_w) + r_2^2(d^r_w) + r_3^2(d^r_w),$$
$$r_1(d^r_w) = 2a_0^rd^r_w + [1-2(a_0^r)^2]x^r - 2a_0^rb_0^ry^r - 2a_0^rc_0^rz^r - x^o,$$
$$r_2(d^r_w) = 2b_0^rd^r_w - 2 a_0^rb_0^rx^r + [1-2(b_0^r)^2]y^r - 2 b_0^rc_0^rz^r - y^o,$$
$$r_3(d^r_w) = 2c_0^rd^r_w - 2 a_0^rc_0^rx^r - 2 b_0^rc_0^ry^r + [1-2(c_0^r)^2]z^r - z^o,$$
where ${r_1(d^r_w)}$, ${r_2(d^r_w)}$, and ${r_3(d^r_w)}$ are the residual errors and ${N}$ is the total number of 3D point pairs. Since ${d^r_w}$ is the only variable in Eqs. (16)–(19), its initial guess can be obtained by solving the first-order derivative equation (i.e., ${f'(d^r_w) = 0}$) of Eq. (16). Equivalently, since ${X^o}$ and ${X^r}$ are symmetric with respect to the mirror, the same initial guess of ${d^r_w}$ follows from the fact that their midpoints lie on the mirror plane. After obtaining the initial guesses of ${n^r}$ and ${d^r_w}$, Eqs. (9)–(11) can be rewritten for the Levenberg-Marquardt refinement:
$$\sum_{n=1}^N g_1^2(G) + g_2^2(G) + g_3^2(G),$$
$$g_1(G) = [1-2(a^r)^2]x^r - 2a^rb^ry^r - 2a^rc^rz^r + 2a^rd^r_w - x^o,$$
$$g_2(G) ={-}2 a^rb^rx^r + [1-2(b^r)^2]y^r - 2 b^rc^rz^r + 2b^rd^r_w - y^o,$$
$$g_3(G) ={-}2 a^rc^rx^r - 2 b^rc^ry^r + [1-2(c^r)^2]z^r + 2c^rd^r_w - z^o,$$
where ${G = \left \{ a^r, b^r, c^r, d^r_w \right \}}$. Minimizing Eq. (20) is a nonlinear minimization problem, which can be solved with the Levenberg-Marquardt algorithm. It is worth noting that two key factors (${X^o}$ and ${X^r}$) affect the accuracy of the final optimization. FPP is well known to acquire high-precision 3D data of actual feature points; in our system, the precision of 3D measurement obtained using traditional multi-frequency phase-shifting profilometry is about ${30\ \mu m}$ [24]. Therefore, the second factor, ${X^r}$, is the primary concern. In the above calibration process, ${X_n^r}$ (${n = 1, 2, 3, \ldots , N}$) were always taken as known input data. However, low-precision 3D measurement of the virtual points ${X_n^r}$ (caused by imperfect flatness or uneven reflectivity of the mirror) introduces systematic errors that reduce the reliability of the final calibration results. Improving the manufacturing quality of the plane mirror can mitigate this problem to some extent, but at the expense of fabrication cost. Therefore, the bundle adjustment strategy was introduced to avoid the problems caused by a low-quality mirror [25]. According to the bundle adjustment strategy, Eq. (20) can be rewritten as
$$\sum_{n=1}^N g_1^2(G, X_n^r) + g_2^2(G, X_n^r) + g_3^2(G, X_n^r).$$
Although the total number of variables has been increased from ${4}$ to ${4+3N}$, minimizing Eq. (24) is still a nonlinear minimization problem that can be solved with the Levenberg-Marquardt method.
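The two-step procedure above can be sketched in Python with NumPy and SciPy. This is an illustrative sketch under stated assumptions, not the authors' implementation: the function names are ours, the soft unit-norm residual on ${n^r}$ is our own choice (the paper does not state how the constraint is enforced), and for brevity the refinement minimizes Eq. (20) over ${G}$ only, whereas the full method (Eq. (24)) additionally frees each ${X_n^r}$ as a bundle-adjustment variable.

```python
import numpy as np
from scipy.optimize import least_squares

def initial_mirror_guess(Xo, Xr):
    """Initial estimation: solve Eq. (15) by SVD for n_0^r, then obtain
    d^r_w from the fact that the midpoints of the point pairs lie on the
    mirror (equivalent to solving f'(d^r_w) = 0 in Eq. (16)).
    Xo, Xr are N x 3 arrays of real/virtual point pairs."""
    Xo, Xr = np.asarray(Xo, float), np.asarray(Xr, float)
    rows = []
    for (xo, yo, zo), (xr, yr, zr) in zip(Xo, Xr):
        rows += [[yr - yo, xo - xr, 0.0],          # Eq. (12)
                 [zr - zo, 0.0,     xo - xr],      # Eq. (13)
                 [0.0,     zr - zo, yo - yr]]      # Eq. (14)
    _, _, Vt = np.linalg.svd(np.array(rows))
    n0 = Vt[-1]                                    # null-space direction of Eq. (15)
    d0 = float(np.mean(0.5 * (Xo + Xr) @ n0))      # mean projection of midpoints
    if d0 < 0:                                     # resolve the SVD sign ambiguity
        n0, d0 = -n0, -d0
    return n0, d0

def refine_mirror(n0, d0, Xo, Xr):
    """Refinement: minimize Eq. (20) over G = {a^r, b^r, c^r, d^r_w} with
    Levenberg-Marquardt (scipy's method='lm')."""
    Xo, Xr = np.asarray(Xo, float), np.asarray(Xr, float)

    def residuals(G):
        n, d = G[:3], G[3]
        A = np.eye(3) - 2.0 * np.outer(n, n)       # I - 2 n^r (n^r)^T
        r = (Xr @ A.T + 2.0 * d * n - Xo).ravel()  # Eqs. (21)-(23)
        return np.r_[r, n @ n - 1.0]               # soft ||n^r|| = 1 constraint

    sol = least_squares(residuals, np.r_[n0, d0], method='lm')
    return sol.x[:3], sol.x[3]
```

For a synthetic mirror, e.g. ${n^r = (0, 0, 1)}$ and ${d^r_w = 0.5}$, reflecting a handful of real points and feeding the resulting pairs to these two functions recovers the plane parameters to numerical precision, which is a useful self-test before applying the procedure to measured data.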

3. Experiments

In the experimental section, a panoramic 3D shape measurement system with two plane mirrors based on FPP is developed to verify the actual performance of the proposed method; Fig. 2 shows the diagram of our mirror-assisted FPP system. The system includes a monochrome camera (Basler acA2440-75um with a resolution of ${2448 \times 2048}$) and a DLP projector (LightCrafter 4500Pro with a resolution of ${912 \times 1140}$), and the camera is synchronized with the projector. Furthermore, two front-surface-reflection plane mirrors (${30\ cm \times 30\ cm}$) are placed behind the measured scene, with an angle of about ${120}$ degrees between them. Since the camera is placed above the projector, a series of horizontal fringe patterns is projected by the projector and captured by the camera to acquire the 3D data of the tested object.

 

Fig. 2. The diagram of the mirror-assisted FPP system.


From the analysis of the panoramic 3D shape measurement method, the procedures required to accurately obtain 360-degree overall 3D shape measurements of objects with complex surfaces are summarized below. The method mainly includes five steps:

Step 1: Calibration of the FPP system before the measurement. This step obtains the calibration parameters of the conventional FPP system consisting of a single projector and a single camera [26,27]. Using these parameters, we can directly acquire the 3D information of the real points captured by the real camera based on traditional multi-frequency phase-shifting profilometry. The virtual points seen by the virtual cameras, created by the reflection of the plane mirrors [19], are also imaged by the real camera and can be directly measured with the same calibration parameters, as shown in Fig. 3. In the same way, the world coordinates of the real points and the virtual points of the calibration target can be measured, and thus the reflection matrix of each mirror can be estimated.

 

Fig. 3. The process of obtaining panoramic 3D shape measurements of objects with complex surfaces.


Step 2: Acquisition of the reflection matrix ${D^r}$ of each plane mirror with the proposed calibration method. This step obtains ${N}$ 3D feature point pairs (including real points ${X^o}$ and virtual points ${X^r}$) using a high-precision circular calibration board, then performs, in sequence, the initial estimation of the reflection matrix and the precise calibration using the Levenberg-Marquardt algorithm with the bundle adjustment strategy.

Step 3: Simultaneous measurements from different views. Based on traditional multi-frequency phase-shifting profilometry, a series of horizontal fringe patterns is used to acquire the 3D data of the tested object from the real camera and the two virtual cameras in the plane mirrors.

Step 4: Conversion between virtual 3D points and real 3D points. By utilizing the reflection matrix ${D^r}$ obtained in Step 2, the 3D point cloud data obtained from virtual perspectives can be converted into real ones in the world coordinate system according to Eq. (8).

Step 5: Panoramic 3D shape measurement. In this step, the real 3D point cloud data obtained from three different perspectives are merged to achieve the panoramic 3D shape measurement without any post-processing operation.

Step 3, Step 4, and Step 5 are detailed in Fig. 3.
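Step 4 reduces to one homogeneous matrix multiplication per point. A minimal sketch with NumPy, assuming a calibrated ${4 \times 4}$ reflection matrix is available as an array `D_r` (the function name is ours):

```python
import numpy as np

def virtual_to_real(points_virtual, D_r):
    """Apply Eq. (8): map an N x 3 virtual-view point cloud into real-world
    coordinates using the calibrated 4 x 4 reflection matrix D^r."""
    P = np.asarray(points_virtual, float)
    hom = np.hstack([P, np.ones((P.shape[0], 1))])   # homogeneous coordinates
    return (hom @ D_r.T)[:, :3]                      # drop the homogeneous 1
```

Because ${D^r}$ is involutory, the same function also maps real points back into the virtual view, which can serve as a round-trip consistency check on the calibration.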

3.1 The calibration results of the mirror

In the calibration process of the mirror in Step 2, our method captures multiple poses of the high-precision circular calibration board (six poses are used in this experiment), each of which provides ${15}$ feature point pairs, as shown in Fig. 4. These feature point pairs are then used to perform, in sequence, the initial estimation of the reflection matrix ${D^r}$ and the precise calibration using the Levenberg-Marquardt algorithm with the bundle adjustment strategy. To quantitatively analyze the accuracy of the proposed calibration method, the calibration residual errors at different steps are listed in Table 1. The comparison shows that our method provides a relatively accurate initial guess of ${n^r}$ and ${d^r_w}$, with an RMS error of ${0.070\ mm}$ (left) or ${0.061\ mm}$ (right). Starting from these estimates, the residual errors are further decreased to $0.058\ mm$ (left) or $0.053\ mm$ (right) by the Levenberg-Marquardt algorithm, which confirms its effectiveness. However, low-precision 3D measurement of the virtual points introduces systematic errors into the calibration and reduces its reliability. The bundle adjustment strategy is therefore introduced to obtain the high-precision results reported in Table 1. These results verify that the proposed method significantly increases the calibration accuracy through the combination of SVD for the initial estimation and the Levenberg-Marquardt algorithm with the bundle adjustment strategy for the precise calibration.

 

Fig. 4. The measurement result of the circular calibration board. (a) One pose data of the circular calibration board can provide ${15}$ feature point pairs. (b) The 3D data of virtual points. (c) The 3D data of real points.



Table 1. Comparison of calibration residual errors at different steps.

3.2 Precision analysis of panoramic 3D shape measurement

To quantitatively evaluate the accuracy of the panoramic 3D shape measurement system based on the proposed calibration approach, a standard ceramic sphere with a diameter of ${50.8\ mm}$ is measured using our system. Figures 5${(a)}$–5${(c)}$ display the single-view 3D reconstruction results from the real camera and the two virtual cameras in the plane mirrors. After obtaining the 3D data of the sphere surface, we perform sphere fitting for each view to obtain the corresponding ideal spherical surface. The differences between the measured data and the corresponding fitted data represent the measurement errors, with RMS values of ${27.577\ \mu m}$, ${47.531\ \mu m}$, and ${44.791\ \mu m}$, as shown in Figs. 5${(d)}$–5${(f)}$. Then, after merging the real 3D point cloud data obtained from the three different perspectives, the full-surface 3D measurement results and the corresponding measurement errors are presented in Figs. 5${(g)}$–5${(l)}$. The RMS error of the panoramic 3D measurement is ${65.122\ \mu m}$. This experiment verifies that the proposed method can realize high-accuracy panoramic 3D shape measurement. The full 3D measurement results can be found in Visualization 1.
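The accuracy evaluation above relies on fitting a sphere to the measured points and taking point-to-sphere distances as errors. The paper does not specify its fitting algorithm; the algebraic linear least-squares fit below is one common choice, shown as a sketch (function name and data are ours):

```python
import numpy as np

def fit_sphere(P):
    """Algebraic least-squares sphere fit: on a sphere, |x|^2 = 2 c.x + (r^2 - |c|^2),
    which is linear in the unknowns (c, r^2 - |c|^2). P is an N x 3 array.
    Returns the fitted center, radius, and the RMS of the signed residuals."""
    P = np.asarray(P, float)
    A = np.c_[2.0 * P, np.ones(len(P))]        # columns: 2x, 2y, 2z, 1
    b = (P ** 2).sum(axis=1)                   # x^2 + y^2 + z^2
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = w[:3]
    radius = np.sqrt(w[3] + center @ center)
    errors = np.linalg.norm(P - center, axis=1) - radius   # signed distances
    return center, radius, np.sqrt(np.mean(errors ** 2))
```

Applied per view and to the merged cloud, the returned RMS corresponds to the error figures quoted for the ceramic sphere; a nonlinear geometric fit could refine this further if needed.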

 

Fig. 5. The 3D measurement results of a standard ceramic sphere (Visualization 1). (a)-(c) The single-view 3D measurement results. (d)-(f) The corresponding distribution of the errors of (a)-(c). (g)-(i) The full-surface 3D measurement results. (j)-(l) The corresponding distribution of the errors of (g)-(i).


3.3 Panoramic 3D measurement for complex scenes

After confirming the precision of our panoramic 3D measurement system, further experiments were carried out to demonstrate the reliability of our method by measuring multiple objects with complex shapes. Two different objects were measured, a Voltaire model and a Venus model, and the corresponding 3D results are shown in Fig. 6${(a)}$ and Fig. 7${(a)}$. The corresponding results from three different views are presented in Figs. 6${(b)}$–6${(d)}$ and Figs. 7${(b)}$–7${(d)}$ to illustrate the robustness of the panoramic 3D shape measurement. Due to the complex shapes of the measured objects and the occluded field of view of the camera, the final 360-degree 3D results contain a few inevitable holes and missing regions (Visualization 2 and Visualization 3), which confirms that the proposed method can successfully achieve simultaneous panoramic 3D shape measurements of objects with complex surfaces.

 

Fig. 6. The measurement results of a Voltaire model (Visualization 2). (a) The full-surface 3D reconstruction results of a Voltaire model. (b)-(d) The corresponding results of (a) from three different views.


 

Fig. 7. The measurement results of a Venus model (Visualization 3). (a) The full-surface 3D reconstruction results of a Venus model. (b)-(d) The corresponding results of (a) from three different views.


4. Discussions

The calibration method proposed in this work for the FPP system with two plane mirrors has the following advantages over other methods:

  • Simple operation for calibrating the mirrors. Different from the traditional pattern-based calibration method, the proposed calibration method only requires the camera to observe a set of feature point pairs (including real points and virtual points) to solve for the reflection matrix of each plane mirror, which makes the whole calibration process simple.
  • Effectiveness and feasibility of the calibration process. Benefiting from the ideal reflection model for the plane mirror, the calibration method of the mirror is derived mathematically to ensure the effectiveness and feasibility of the calibration process.
  • Accurate panoramic 3D shape measurement. Due to the robust and high-performance calibration method, experimental results verify that our system can achieve panoramic 3D shape measurement with an accuracy of ${65.122\ \mu m}$.
  • Straightforward full-surface 360-degree 3D measurement. Due to the high-precision calibration information, all 3D scanned data obtained from the three different perspectives are merged to achieve panoramic 3D shape measurement without any complex point cloud registration algorithms (e.g., Iterative Closest Point (ICP)), enabling real-time panoramic 3D measurement of dynamic scenes.

5. Conclusion

In summary, we have presented a new calibration method for a fringe projection profilometry (FPP) system with plane mirrors for high-precision panoramic 3D measurement of objects with complex surfaces. By introducing plane mirrors into the traditional FPP system, our setup can simultaneously capture deformed fringe images of the measured object from three different perspectives, one real camera and two virtual cameras created by the plane mirrors, enabling panoramic 3D shape reconstruction from a single-shot measurement. Compared to the traditional pattern-based calibration method, the proposed technique calibrates the mirrors flexibly without the use of speckle patterns. The calibration method is derived mathematically to ensure the effectiveness and feasibility of the calibration process: it only requires the camera to observe a set of feature point pairs (including real points and virtual points) to solve for the reflection matrix of each plane mirror, and it performs, in sequence, the initial estimation of the reflection matrix and the precise calibration using the Levenberg-Marquardt algorithm with the bundle adjustment strategy to acquire high-accuracy calibration results. Experimental results have demonstrated that our method produces panoramic 3D shape measurements with an accuracy of ${65.122\ \mu m}$.

Finally, it should be mentioned that several aspects of our mirror-assisted FPP system still need to be solved and improved, which we leave for future work. First, due to the complex surface of the measured object and the occluded field of view, some unavoidable outliers and holes remain in the 360-degree overall 3D measurement results. To obtain panoramic results with higher completeness, the setup of the existing system needs to be adjusted, for example by increasing the number of cameras or reducing the baseline between the projector and the camera. Second, due to the uneven catadioptric effect of the plane mirror, we found that higher-frequency fringe patterns captured by the camera may suffer from fringe overlapping, thereby degrading the overall 3D reconstruction quality. How to handle this situation is another interesting direction for further investigation. Finally, unlike turn-table-based and robot-arm-based systems, mirror-assisted systems have the potential to enable high-speed panoramic 3D shape measurement. In the future, we will therefore develop more efficient 3D reconstruction techniques for high-speed panoramic 3D shape measurement systems.

Funding

National Natural Science Foundation of China (11574152, 61705105, 61722506); National Key R&D Program of China (2017YFF0106403); Final Assembly “13th Five-Year Plan” Advanced Research Project of China (30102070102); Equipment Advanced Research Fund of China (61404150202); The Key Research and Development Program of Jiangsu Province (BE2017162); Outstanding Youth Foundation of Jiangsu Province (BK20170034); National Defense Science and Technology Foundation of China (0106173); “333 Engineering” Research Project of Jiangsu Province (BRA2016407); Fundamental Research Funds for the Central Universities (30917011204, 30919011222); Open Research Fund of Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense (3091801410411).

Disclosures

The authors declare no conflicts of interest.

References

1. S. S. Gorthi and P. Rastogi, “Fringe projection techniques: whither we are?” Opt. Lasers Eng. 48(2), 133–140 (2010). [CrossRef]  

2. C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Lasers Eng. 109, 23–59 (2018). [CrossRef]  

3. S. Feng, L. Zhang, C. Zuo, T. Tao, Q. Chen, and G. Gu, “High dynamic range 3d measurements with fringe projection profilometry: a review,” Meas. Sci. Technol. 29(12), 122001 (2018). [CrossRef]  

4. S. Zhang, “Absolute phase retrieval methods for digital fringe projection profilometry: A review,” Opt. Lasers Eng. 107, 28–37 (2018). [CrossRef]  

5. S. Feng, Q. Chen, G. Gu, T. Tao, L. Zhang, Y. Hu, W. Yin, and C. Zuo, “Fringe pattern analysis using deep learning,” Adv. Photonics 1(2), 025001 (2019). [CrossRef]  

6. C. Zuo, Q. Chen, G. Gu, S. Feng, F. Feng, R. Li, and G. Shen, “High-speed three-dimensional shape measurement for dynamic scenes using bi-frequency tripolar pulse-width-modulation fringe projection,” Opt. Lasers Eng. 51(8), 953–960 (2013). [CrossRef]  

7. Z. Zhang, “Review of single-shot 3d shape measurement by phase calculation-based fringe projection techniques,” Opt. Lasers Eng. 50(8), 1097–1106 (2012). [CrossRef]  

8. X. Su and Q. Zhang, “Dynamic 3-d shape measurement method: a review,” Opt. Lasers Eng. 48(2), 191–204 (2010). [CrossRef]  

9. C. Zuo, T. Tao, S. Feng, L. Huang, A. Asundi, and Q. Chen, “Micro fourier transform profilometry (µftp): 3d shape measurement at 10,000 frames per second,” Opt. Lasers Eng. 102, 70–91 (2018). [CrossRef]  

10. S. Zhang, “High-speed 3d shape measurement with structured light methods: A review,” Opt. Lasers Eng. 106, 119–131 (2018). [CrossRef]  

11. W. Yin, S. Feng, T. Tao, L. Huang, M. Trusiak, Q. Chen, and C. Zuo, “High-speed 3d shape measurement using the optimized composite fringe patterns and stereo-assisted structured light system,” Opt. Express 27(3), 2411–2431 (2019). [CrossRef]  

12. J.-S. Hyun, G. T.-C. Chiu, and S. Zhang, “High-speed and high-accuracy 3d surface measurement using a mechanical projector,” Opt. Express 26(2), 1474–1487 (2018). [CrossRef]  

13. W. Yin, C. Zuo, S. Feng, T. Tao, Y. Hu, L. Huang, J. Ma, and Q. Chen, “High-speed three-dimensional shape measurement using geometry-constraint-based number-theoretical phase unwrapping,” Opt. Lasers Eng. 115, 21–31 (2019). [CrossRef]  

14. X. Liu, X. Peng, H. Chen, D. He, and B. Z. Gao, “Strategy for automatic and complete three-dimensional optical digitization,” Opt. Lett. 37(15), 3126–3128 (2012). [CrossRef]  

15. L. Song, Y. Ru, Y. Yang, Q. Guo, X. Zhu, and J. Xi, “Full-view three-dimensional measurement of complex surfaces,” Opt. Eng. 57(10), 104106 (2018). [CrossRef]  

16. M. Nießner, M. Zollhöfer, S. Izadi, and M. Stamminger, “Real-time 3d reconstruction at scale using voxel hashing,” ACM Trans. Graph. 32(6), 1–11 (2013). [CrossRef]  

17. E. Epstein, M. Granger-Piché, and P. Poulin, “Exploiting mirrors in interactive reconstruction with structured light,” in Vision, Modeling, and Visualization (VMV) (2004), pp. 125–132.

18. D. Lanman, D. Crispell, and G. Taubin, “Surround structured lighting: 3-d scanning with orthographic illumination,” Comput. Vis. Image Underst. 113(11), 1107–1117 (2009). [CrossRef]  

19. B. Chen and B. Pan, “Mirror-assisted panoramic-digital image correlation for full-surface 360-deg deformation measurement,” Measurement 132, 350–358 (2019). [CrossRef]  

20. P. J. Besl and N. D. McKay, “A method for registration of 3-d shapes,” IEEE Trans. Pattern Anal. Mach. Intell. 14(2), 239–256 (1992).

21. H. Mohammadzade and D. Hatzinakos, “Iterative closest normal point for 3d face recognition,” IEEE Trans. Pattern Anal. Mach. Intell. 35(2), 381–397 (2013). [CrossRef]  

22. G. L. Mariottini, S. Scheggi, F. Morbidi, and D. Prattichizzo, “Planar mirrors for image-based robot localization and 3-d reconstruction,” Mechatronics 22(4), 398–409 (2012). [CrossRef]  

23. P. Wang, J. Wang, J. Xu, Y. Guan, G. Zhang, and K. Chen, “Calibration method for a large-scale structured light measurement system,” Appl. Opt. 56(14), 3995–4002 (2017). [CrossRef]  

24. C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Lasers Eng. 85, 84–103 (2016). [CrossRef]  

25. X. Liu, Z. Cai, Y. Yin, H. Jiang, D. He, W. He, Z. Zhang, and X. Peng, “Calibration of fringe projection profilometry using an inaccurate 2d reference target,” Opt. Lasers Eng. 89, 131–137 (2017). [CrossRef]  

26. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000). [CrossRef]  

27. S. Zhang and P. S. Huang, “Novel method for structured light system calibration,” Opt. Eng. 45(8), 083601 (2006). [CrossRef]  

References

  • View by:
  • |
  • |
  • |

  1. S. S. Gorthi and P. Rastogi, “Fringe projection techniques: whither we are?” Opt. Lasers Eng. 48(2), 133–140 (2010).
    [Crossref]
  2. C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Lasers Eng. 109, 23–59 (2018).
    [Crossref]
  3. S. Feng, L. Zhang, C. Zuo, T. Tao, Q. Chen, and G. Gu, “High dynamic range 3d measurements with fringe projection profilometry: a review,” Mea. Sci. Technol. 29(12), 122001 (2018).
    [Crossref]
  4. S. Zhang, “Absolute phase retrieval methods for digital fringe projection profilometry: A review,” Opt. Lasers Eng. 107, 28–37 (2018).
    [Crossref]
  5. S. Feng, Q. Chen, G. Gu, T. Tao, L. Zhang, Y. Hu, W. Yin, and C. Zuo, “Fringe pattern analysis using deep learning,” Adv. Photonics 1(2), 025001 (2019).
    [Crossref]
  6. C. Zuo, Q. Chen, G. Gu, S. Feng, F. Feng, R. Li, and G. Shen, “High-speed three-dimensional shape measurement for dynamic scenes using bi-frequency tripolar pulse-width-modulation fringe projection,” Opt. Lasers Eng. 51(8), 953–960 (2013).
    [Crossref]
  7. Z. Zhang, “Review of single-shot 3d shape measurement by phase calculation-based fringe projection techniques,” Opt. Lasers Eng. 50(8), 1097–1106 (2012).
    [Crossref]
  8. X. Su and Q. Zhang, “Dynamic 3-d shape measurement method: a review,” Opt. Lasers Eng. 48(2), 191–204 (2010).
    [Crossref]
  9. C. Zuo, T. Tao, S. Feng, L. Huang, A. Asundi, and Q. Chen, “Micro fourier transform profilometry (µftp): 3d shape measurement at 10,000 frames per second,” Opt. Lasers Eng. 102, 70–91 (2018).
    [Crossref]
  10. S. Zhang, “High-speed 3d shape measurement with structured light methods: A review,” Opt. Lasers Eng. 106, 119–131 (2018).
    [Crossref]
  11. W. Yin, S. Feng, T. Tao, L. Huang, M. Trusiak, Q. Chen, and C. Zuo, “High-speed 3d shape measurement using the optimized composite fringe patterns and stereo-assisted structured light system,” Opt. Express 27(3), 2411–2431 (2019).
    [Crossref]
  12. J.-S. Hyun, G. T.-C. Chiu, and S. Zhang, “High-speed and high-accuracy 3d surface measurement using a mechanical projector,” Opt. Express 26(2), 1474–1487 (2018).
    [Crossref]
  13. W. Yin, C. Zuo, S. Feng, T. Tao, Y. Hu, L. Huang, J. Ma, and Q. Chen, “High-speed three-dimensional shape measurement using geometry-constraint-based number-theoretical phase unwrapping,” Opt. Lasers Eng. 115, 21–31 (2019).
    [Crossref]
  14. X. Liu, X. Peng, H. Chen, D. He, and B. Z. Gao, “Strategy for automatic and complete three-dimensional optical digitization,” Opt. Lett. 37(15), 3126–3128 (2012).
    [Crossref]
  15. L. Song, Y. Ru, Y. Yang, Q. Guo, X. Zhu, and J. Xi, “Full-view three-dimensional measurement of complex surfaces,” Opt. Eng. 57(10), 104106 (2018).
    [Crossref]
  16. M. Nießner, M. Zollhöfer, S. Izadi, and M. Stamminger, “Real-time 3d reconstruction at scale using voxel hashing,” ACM Trans. Graph. 32(6), 1–11 (2013).
    [Crossref]
  17. E. Epstein, M. Granger-Piché, and P. Potilin, “Exploiting mirrors in interactive reconstruction with structured light,” in VMV, (2004), pp. 125, 132.
  18. D. Lanman, D. Crispell, and G. Taubin, “Surround structured lighting: 3-d scanning with orthographic illumination,” Comput. Vis. Image Underst. 113(11), 1107–1117 (2009).
    [Crossref]
  19. B. Chen and B. Pan, “Mirror-assisted panoramic-digital image correlation for full-surface 360-deg deformation measurement,” Measurement 132, 350–358 (2019).
    [Crossref]
  20. P. J. Besl and N. D. McKay, “A method for registration of 3-d shapes,” IEEE Transactions on Pattern Analysis Mach. Intell. 14(2), 239–256 (1992).
  21. H. Mohammadzade and D. Hatzinakos, “Iterative closest normal point for 3d face recognition,” IEEE Trans. Pattern Anal. Mach. Intell. 35(2), 381–397 (2013).
    [Crossref]
  22. G. L. Mariottini, S. Scheggi, F. Morbidi, and D. Prattichizzo, “Planar mirrors for image-based robot localization and 3-d reconstruction,” Mechatronics 22(4), 398–409 (2012).
    [Crossref]
  23. P. Wang, J. Wang, J. Xu, Y. Guan, G. Zhang, and K. Chen, “Calibration method for a large-scale structured light measurement system,” Appl. Opt. 56(14), 3995–4002 (2017).
    [Crossref]
  24. C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Lasers Eng. 85, 84–103 (2016).
    [Crossref]
  25. X. Liu, Z. Cai, Y. Yin, H. Jiang, D. He, W. He, Z. Zhang, and X. Peng, “Calibration of fringe projection profilometry using an inaccurate 2d reference target,” Opt. Lasers Eng. 89, 131–137 (2017).
    [Crossref]
  26. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000).
    [Crossref]
  27. S. Zhang and P. S. Huang, “Novel method for structured light system calibration,” Opt. Eng. 45(8), 083601 (2006).
    [Crossref]

2019 (4)

S. Feng, Q. Chen, G. Gu, T. Tao, L. Zhang, Y. Hu, W. Yin, and C. Zuo, “Fringe pattern analysis using deep learning,” Adv. Photonics 1(2), 025001 (2019).
[Crossref]

B. Chen and B. Pan, “Mirror-assisted panoramic-digital image correlation for full-surface 360-deg deformation measurement,” Measurement 132, 350–358 (2019).
[Crossref]

W. Yin, C. Zuo, S. Feng, T. Tao, Y. Hu, L. Huang, J. Ma, and Q. Chen, “High-speed three-dimensional shape measurement using geometry-constraint-based number-theoretical phase unwrapping,” Opt. Lasers Eng. 115, 21–31 (2019).
[Crossref]

W. Yin, S. Feng, T. Tao, L. Huang, M. Trusiak, Q. Chen, and C. Zuo, “High-speed 3d shape measurement using the optimized composite fringe patterns and stereo-assisted structured light system,” Opt. Express 27(3), 2411–2431 (2019).
[Crossref]

2018 (7)

C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Lasers Eng. 109, 23–59 (2018).
[Crossref]

S. Zhang, “Absolute phase retrieval methods for digital fringe projection profilometry: A review,” Opt. Lasers Eng. 107, 28–37 (2018).
[Crossref]

L. Song, Y. Ru, Y. Yang, Q. Guo, X. Zhu, and J. Xi, “Full-view three-dimensional measurement of complex surfaces,” Opt. Eng. 57(10), 104106 (2018).
[Crossref]

J.-S. Hyun, G. T.-C. Chiu, and S. Zhang, “High-speed and high-accuracy 3d surface measurement using a mechanical projector,” Opt. Express 26(2), 1474–1487 (2018).
[Crossref]

C. Zuo, T. Tao, S. Feng, L. Huang, A. Asundi, and Q. Chen, “Micro fourier transform profilometry (µftp): 3d shape measurement at 10,000 frames per second,” Opt. Lasers Eng. 102, 70–91 (2018).
[Crossref]

S. Zhang, “High-speed 3d shape measurement with structured light methods: A review,” Opt. Lasers Eng. 106, 119–131 (2018).
[Crossref]

S. Feng, L. Zhang, C. Zuo, T. Tao, Q. Chen, and G. Gu, “High dynamic range 3d measurements with fringe projection profilometry: a review,” Mea. Sci. Technol. 29(12), 122001 (2018).
[Crossref]

2017 (2)

X. Liu, Z. Cai, Y. Yin, H. Jiang, D. He, W. He, Z. Zhang, and X. Peng, “Calibration of fringe projection profilometry using an inaccurate 2d reference target,” Opt. Lasers Eng. 89, 131–137 (2017).
[Crossref]

P. Wang, J. Wang, J. Xu, Y. Guan, G. Zhang, and K. Chen, “Calibration method for a large-scale structured light measurement system,” Appl. Opt. 56(14), 3995–4002 (2017).
[Crossref]

2016 (1)

C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Lasers Eng. 85, 84–103 (2016).
[Crossref]

2013 (3)

C. Zuo, Q. Chen, G. Gu, S. Feng, F. Feng, R. Li, and G. Shen, “High-speed three-dimensional shape measurement for dynamic scenes using bi-frequency tripolar pulse-width-modulation fringe projection,” Opt. Lasers Eng. 51(8), 953–960 (2013).
[Crossref]

H. Mohammadzade and D. Hatzinakos, “Iterative closest normal point for 3d face recognition,” IEEE Trans. Pattern Anal. Mach. Intell. 35(2), 381–397 (2013).
[Crossref]

M. Nießner, M. Zollhöfer, S. Izadi, and M. Stamminger, “Real-time 3d reconstruction at scale using voxel hashing,” ACM Trans. Graph. 32(6), 1–11 (2013).
[Crossref]

2012 (3)

X. Liu, X. Peng, H. Chen, D. He, and B. Z. Gao, “Strategy for automatic and complete three-dimensional optical digitization,” Opt. Lett. 37(15), 3126–3128 (2012).
[Crossref]

Z. Zhang, “Review of single-shot 3d shape measurement by phase calculation-based fringe projection techniques,” Opt. Lasers Eng. 50(8), 1097–1106 (2012).
[Crossref]

G. L. Mariottini, S. Scheggi, F. Morbidi, and D. Prattichizzo, “Planar mirrors for image-based robot localization and 3-d reconstruction,” Mechatronics 22(4), 398–409 (2012).
[Crossref]

2010 (2)

X. Su and Q. Zhang, “Dynamic 3-d shape measurement method: a review,” Opt. Lasers Eng. 48(2), 191–204 (2010).
[Crossref]

S. S. Gorthi and P. Rastogi, “Fringe projection techniques: whither we are?” Opt. Lasers Eng. 48(2), 133–140 (2010).
[Crossref]

2009 (1)

D. Lanman, D. Crispell, and G. Taubin, “Surround structured lighting: 3-d scanning with orthographic illumination,” Comput. Vis. Image Underst. 113(11), 1107–1117 (2009).
[Crossref]

2006 (1)

S. Zhang and P. S. Huang, “Novel method for structured light system calibration,” Opt. Eng. 45(8), 083601 (2006).
[Crossref]

2000 (1)

Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000).
[Crossref]

1992 (1)

P. J. Besl and N. D. McKay, “A method for registration of 3-d shapes,” IEEE Transactions on Pattern Analysis Mach. Intell. 14(2), 239–256 (1992).

Asundi, A.

C. Zuo, T. Tao, S. Feng, L. Huang, A. Asundi, and Q. Chen, “Micro fourier transform profilometry (µftp): 3d shape measurement at 10,000 frames per second,” Opt. Lasers Eng. 102, 70–91 (2018).
[Crossref]

C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Lasers Eng. 85, 84–103 (2016).
[Crossref]

Besl, P. J.

P. J. Besl and N. D. McKay, “A method for registration of 3-d shapes,” IEEE Transactions on Pattern Analysis Mach. Intell. 14(2), 239–256 (1992).

Cai, Z.

X. Liu, Z. Cai, Y. Yin, H. Jiang, D. He, W. He, Z. Zhang, and X. Peng, “Calibration of fringe projection profilometry using an inaccurate 2d reference target,” Opt. Lasers Eng. 89, 131–137 (2017).
[Crossref]

Chen, B.

B. Chen and B. Pan, “Mirror-assisted panoramic-digital image correlation for full-surface 360-deg deformation measurement,” Measurement 132, 350–358 (2019).
[Crossref]

Chen, H.

Chen, K.

Chen, Q.

W. Yin, C. Zuo, S. Feng, T. Tao, Y. Hu, L. Huang, J. Ma, and Q. Chen, “High-speed three-dimensional shape measurement using geometry-constraint-based number-theoretical phase unwrapping,” Opt. Lasers Eng. 115, 21–31 (2019).
[Crossref]

W. Yin, S. Feng, T. Tao, L. Huang, M. Trusiak, Q. Chen, and C. Zuo, “High-speed 3d shape measurement using the optimized composite fringe patterns and stereo-assisted structured light system,” Opt. Express 27(3), 2411–2431 (2019).
[Crossref]

S. Feng, Q. Chen, G. Gu, T. Tao, L. Zhang, Y. Hu, W. Yin, and C. Zuo, “Fringe pattern analysis using deep learning,” Adv. Photonics 1(2), 025001 (2019).
[Crossref]

S. Feng, L. Zhang, C. Zuo, T. Tao, Q. Chen, and G. Gu, “High dynamic range 3d measurements with fringe projection profilometry: a review,” Mea. Sci. Technol. 29(12), 122001 (2018).
[Crossref]

C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Lasers Eng. 109, 23–59 (2018).
[Crossref]

C. Zuo, T. Tao, S. Feng, L. Huang, A. Asundi, and Q. Chen, “Micro fourier transform profilometry (µftp): 3d shape measurement at 10,000 frames per second,” Opt. Lasers Eng. 102, 70–91 (2018).
[Crossref]

C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Lasers Eng. 85, 84–103 (2016).
[Crossref]

C. Zuo, Q. Chen, G. Gu, S. Feng, F. Feng, R. Li, and G. Shen, “High-speed three-dimensional shape measurement for dynamic scenes using bi-frequency tripolar pulse-width-modulation fringe projection,” Opt. Lasers Eng. 51(8), 953–960 (2013).
[Crossref]

Chiu, G. T.-C.

Crispell, D.

D. Lanman, D. Crispell, and G. Taubin, “Surround structured lighting: 3-d scanning with orthographic illumination,” Comput. Vis. Image Underst. 113(11), 1107–1117 (2009).
[Crossref]

Epstein, E.

E. Epstein, M. Granger-Piché, and P. Potilin, “Exploiting mirrors in interactive reconstruction with structured light,” in VMV, (2004), pp. 125, 132.

Feng, F.

C. Zuo, Q. Chen, G. Gu, S. Feng, F. Feng, R. Li, and G. Shen, “High-speed three-dimensional shape measurement for dynamic scenes using bi-frequency tripolar pulse-width-modulation fringe projection,” Opt. Lasers Eng. 51(8), 953–960 (2013).
[Crossref]

Feng, S.

S. Feng, Q. Chen, G. Gu, T. Tao, L. Zhang, Y. Hu, W. Yin, and C. Zuo, “Fringe pattern analysis using deep learning,” Adv. Photonics 1(2), 025001 (2019).
[Crossref]

W. Yin, S. Feng, T. Tao, L. Huang, M. Trusiak, Q. Chen, and C. Zuo, “High-speed 3d shape measurement using the optimized composite fringe patterns and stereo-assisted structured light system,” Opt. Express 27(3), 2411–2431 (2019).
[Crossref]

W. Yin, C. Zuo, S. Feng, T. Tao, Y. Hu, L. Huang, J. Ma, and Q. Chen, “High-speed three-dimensional shape measurement using geometry-constraint-based number-theoretical phase unwrapping,” Opt. Lasers Eng. 115, 21–31 (2019).
[Crossref]

C. Zuo, T. Tao, S. Feng, L. Huang, A. Asundi, and Q. Chen, “Micro fourier transform profilometry (µftp): 3d shape measurement at 10,000 frames per second,” Opt. Lasers Eng. 102, 70–91 (2018).
[Crossref]

S. Feng, L. Zhang, C. Zuo, T. Tao, Q. Chen, and G. Gu, “High dynamic range 3d measurements with fringe projection profilometry: a review,” Mea. Sci. Technol. 29(12), 122001 (2018).
[Crossref]

C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Lasers Eng. 109, 23–59 (2018).
[Crossref]

C. Zuo, Q. Chen, G. Gu, S. Feng, F. Feng, R. Li, and G. Shen, “High-speed three-dimensional shape measurement for dynamic scenes using bi-frequency tripolar pulse-width-modulation fringe projection,” Opt. Lasers Eng. 51(8), 953–960 (2013).
[Crossref]

Gao, B. Z.

Gorthi, S. S.

S. S. Gorthi and P. Rastogi, “Fringe projection techniques: whither we are?” Opt. Lasers Eng. 48(2), 133–140 (2010).
[Crossref]

Granger-Piché, M.

E. Epstein, M. Granger-Piché, and P. Potilin, “Exploiting mirrors in interactive reconstruction with structured light,” in VMV, (2004), pp. 125, 132.

Gu, G.

S. Feng, Q. Chen, G. Gu, T. Tao, L. Zhang, Y. Hu, W. Yin, and C. Zuo, “Fringe pattern analysis using deep learning,” Adv. Photonics 1(2), 025001 (2019).
[Crossref]

S. Feng, L. Zhang, C. Zuo, T. Tao, Q. Chen, and G. Gu, “High dynamic range 3d measurements with fringe projection profilometry: a review,” Mea. Sci. Technol. 29(12), 122001 (2018).
[Crossref]

C. Zuo, Q. Chen, G. Gu, S. Feng, F. Feng, R. Li, and G. Shen, “High-speed three-dimensional shape measurement for dynamic scenes using bi-frequency tripolar pulse-width-modulation fringe projection,” Opt. Lasers Eng. 51(8), 953–960 (2013).
[Crossref]

Guan, Y.

Guo, Q.

L. Song, Y. Ru, Y. Yang, Q. Guo, X. Zhu, and J. Xi, “Full-view three-dimensional measurement of complex surfaces,” Opt. Eng. 57(10), 104106 (2018).
[Crossref]

Hatzinakos, D.

H. Mohammadzade and D. Hatzinakos, “Iterative closest normal point for 3d face recognition,” IEEE Trans. Pattern Anal. Mach. Intell. 35(2), 381–397 (2013).
[Crossref]

He, D.

X. Liu, Z. Cai, Y. Yin, H. Jiang, D. He, W. He, Z. Zhang, and X. Peng, “Calibration of fringe projection profilometry using an inaccurate 2d reference target,” Opt. Lasers Eng. 89, 131–137 (2017).
[Crossref]

X. Liu, X. Peng, H. Chen, D. He, and B. Z. Gao, “Strategy for automatic and complete three-dimensional optical digitization,” Opt. Lett. 37(15), 3126–3128 (2012).
[Crossref]

He, W.

X. Liu, Z. Cai, Y. Yin, H. Jiang, D. He, W. He, Z. Zhang, and X. Peng, “Calibration of fringe projection profilometry using an inaccurate 2d reference target,” Opt. Lasers Eng. 89, 131–137 (2017).
[Crossref]

Hu, Y.

W. Yin, C. Zuo, S. Feng, T. Tao, Y. Hu, L. Huang, J. Ma, and Q. Chen, “High-speed three-dimensional shape measurement using geometry-constraint-based number-theoretical phase unwrapping,” Opt. Lasers Eng. 115, 21–31 (2019).
[Crossref]

S. Feng, Q. Chen, G. Gu, T. Tao, L. Zhang, Y. Hu, W. Yin, and C. Zuo, “Fringe pattern analysis using deep learning,” Adv. Photonics 1(2), 025001 (2019).
[Crossref]

Huang, L.

W. Yin, C. Zuo, S. Feng, T. Tao, Y. Hu, L. Huang, J. Ma, and Q. Chen, “High-speed three-dimensional shape measurement using geometry-constraint-based number-theoretical phase unwrapping,” Opt. Lasers Eng. 115, 21–31 (2019).
[Crossref]

W. Yin, S. Feng, T. Tao, L. Huang, M. Trusiak, Q. Chen, and C. Zuo, “High-speed 3d shape measurement using the optimized composite fringe patterns and stereo-assisted structured light system,” Opt. Express 27(3), 2411–2431 (2019).
[Crossref]

C. Zuo, T. Tao, S. Feng, L. Huang, A. Asundi, and Q. Chen, “Micro fourier transform profilometry (µftp): 3d shape measurement at 10,000 frames per second,” Opt. Lasers Eng. 102, 70–91 (2018).
[Crossref]

C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Lasers Eng. 109, 23–59 (2018).
[Crossref]

C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Lasers Eng. 85, 84–103 (2016).
[Crossref]

Huang, P. S.

S. Zhang and P. S. Huang, “Novel method for structured light system calibration,” Opt. Eng. 45(8), 083601 (2006).
[Crossref]

Hyun, J.-S.

Izadi, S.

M. Nießner, M. Zollhöfer, S. Izadi, and M. Stamminger, “Real-time 3d reconstruction at scale using voxel hashing,” ACM Trans. Graph. 32(6), 1–11 (2013).
[Crossref]

Jiang, H.

X. Liu, Z. Cai, Y. Yin, H. Jiang, D. He, W. He, Z. Zhang, and X. Peng, “Calibration of fringe projection profilometry using an inaccurate 2d reference target,” Opt. Lasers Eng. 89, 131–137 (2017).
[Crossref]

Lanman, D.

D. Lanman, D. Crispell, and G. Taubin, “Surround structured lighting: 3-d scanning with orthographic illumination,” Comput. Vis. Image Underst. 113(11), 1107–1117 (2009).
[Crossref]

Li, R.

C. Zuo, Q. Chen, G. Gu, S. Feng, F. Feng, R. Li, and G. Shen, “High-speed three-dimensional shape measurement for dynamic scenes using bi-frequency tripolar pulse-width-modulation fringe projection,” Opt. Lasers Eng. 51(8), 953–960 (2013).
[Crossref]

Liu, X.

X. Liu, Z. Cai, Y. Yin, H. Jiang, D. He, W. He, Z. Zhang, and X. Peng, “Calibration of fringe projection profilometry using an inaccurate 2d reference target,” Opt. Lasers Eng. 89, 131–137 (2017).
[Crossref]

X. Liu, X. Peng, H. Chen, D. He, and B. Z. Gao, “Strategy for automatic and complete three-dimensional optical digitization,” Opt. Lett. 37(15), 3126–3128 (2012).
[Crossref]

Ma, J.

W. Yin, C. Zuo, S. Feng, T. Tao, Y. Hu, L. Huang, J. Ma, and Q. Chen, “High-speed three-dimensional shape measurement using geometry-constraint-based number-theoretical phase unwrapping,” Opt. Lasers Eng. 115, 21–31 (2019).
[Crossref]

Mariottini, G. L.

G. L. Mariottini, S. Scheggi, F. Morbidi, and D. Prattichizzo, “Planar mirrors for image-based robot localization and 3-d reconstruction,” Mechatronics 22(4), 398–409 (2012).
[Crossref]

McKay, N. D.

P. J. Besl and N. D. McKay, “A method for registration of 3-d shapes,” IEEE Transactions on Pattern Analysis Mach. Intell. 14(2), 239–256 (1992).

Mohammadzade, H.

H. Mohammadzade and D. Hatzinakos, “Iterative closest normal point for 3d face recognition,” IEEE Trans. Pattern Anal. Mach. Intell. 35(2), 381–397 (2013).
[Crossref]

Morbidi, F.

G. L. Mariottini, S. Scheggi, F. Morbidi, and D. Prattichizzo, “Planar mirrors for image-based robot localization and 3-d reconstruction,” Mechatronics 22(4), 398–409 (2012).
[Crossref]

Nießner, M.

M. Nießner, M. Zollhöfer, S. Izadi, and M. Stamminger, “Real-time 3d reconstruction at scale using voxel hashing,” ACM Trans. Graph. 32(6), 1–11 (2013).
[Crossref]

Pan, B.

B. Chen and B. Pan, “Mirror-assisted panoramic-digital image correlation for full-surface 360-deg deformation measurement,” Measurement 132, 350–358 (2019).
[Crossref]

Peng, X.

X. Liu, Z. Cai, Y. Yin, H. Jiang, D. He, W. He, Z. Zhang, and X. Peng, “Calibration of fringe projection profilometry using an inaccurate 2d reference target,” Opt. Lasers Eng. 89, 131–137 (2017).
[Crossref]

X. Liu, X. Peng, H. Chen, D. He, and B. Z. Gao, “Strategy for automatic and complete three-dimensional optical digitization,” Opt. Lett. 37(15), 3126–3128 (2012).
[Crossref]

Potilin, P.

E. Epstein, M. Granger-Piché, and P. Potilin, “Exploiting mirrors in interactive reconstruction with structured light,” in VMV, (2004), pp. 125, 132.

Prattichizzo, D.

G. L. Mariottini, S. Scheggi, F. Morbidi, and D. Prattichizzo, “Planar mirrors for image-based robot localization and 3-d reconstruction,” Mechatronics 22(4), 398–409 (2012).
[Crossref]

Rastogi, P.

S. S. Gorthi and P. Rastogi, “Fringe projection techniques: whither we are?” Opt. Lasers Eng. 48(2), 133–140 (2010).
[Crossref]

Ru, Y.

L. Song, Y. Ru, Y. Yang, Q. Guo, X. Zhu, and J. Xi, “Full-view three-dimensional measurement of complex surfaces,” Opt. Eng. 57(10), 104106 (2018).
[Crossref]

Scheggi, S.

G. L. Mariottini, S. Scheggi, F. Morbidi, and D. Prattichizzo, “Planar mirrors for image-based robot localization and 3-d reconstruction,” Mechatronics 22(4), 398–409 (2012).
[Crossref]

Shen, G.

C. Zuo, Q. Chen, G. Gu, S. Feng, F. Feng, R. Li, and G. Shen, “High-speed three-dimensional shape measurement for dynamic scenes using bi-frequency tripolar pulse-width-modulation fringe projection,” Opt. Lasers Eng. 51(8), 953–960 (2013).
[Crossref]

Song, L.

L. Song, Y. Ru, Y. Yang, Q. Guo, X. Zhu, and J. Xi, “Full-view three-dimensional measurement of complex surfaces,” Opt. Eng. 57(10), 104106 (2018).
[Crossref]

Stamminger, M.

M. Nießner, M. Zollhöfer, S. Izadi, and M. Stamminger, “Real-time 3d reconstruction at scale using voxel hashing,” ACM Trans. Graph. 32(6), 1–11 (2013).
[Crossref]

Su, X.

X. Su and Q. Zhang, “Dynamic 3-d shape measurement method: a review,” Opt. Lasers Eng. 48(2), 191–204 (2010).
[Crossref]

Tao, T.

S. Feng, Q. Chen, G. Gu, T. Tao, L. Zhang, Y. Hu, W. Yin, and C. Zuo, “Fringe pattern analysis using deep learning,” Adv. Photonics 1(2), 025001 (2019).
[Crossref]

W. Yin, C. Zuo, S. Feng, T. Tao, Y. Hu, L. Huang, J. Ma, and Q. Chen, “High-speed three-dimensional shape measurement using geometry-constraint-based number-theoretical phase unwrapping,” Opt. Lasers Eng. 115, 21–31 (2019).
[Crossref]

W. Yin, S. Feng, T. Tao, L. Huang, M. Trusiak, Q. Chen, and C. Zuo, “High-speed 3d shape measurement using the optimized composite fringe patterns and stereo-assisted structured light system,” Opt. Express 27(3), 2411–2431 (2019).
[Crossref]

C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Lasers Eng. 109, 23–59 (2018).
[Crossref]

C. Zuo, T. Tao, S. Feng, L. Huang, A. Asundi, and Q. Chen, “Micro fourier transform profilometry (µftp): 3d shape measurement at 10,000 frames per second,” Opt. Lasers Eng. 102, 70–91 (2018).
[Crossref]

S. Feng, L. Zhang, C. Zuo, T. Tao, Q. Chen, and G. Gu, “High dynamic range 3d measurements with fringe projection profilometry: a review,” Mea. Sci. Technol. 29(12), 122001 (2018).
[Crossref]

Taubin, G.

D. Lanman, D. Crispell, and G. Taubin, “Surround structured lighting: 3-d scanning with orthographic illumination,” Comput. Vis. Image Underst. 113(11), 1107–1117 (2009).
[Crossref]

Trusiak, M.

Wang, J.

Wang, P.

Xi, J.

L. Song, Y. Ru, Y. Yang, Q. Guo, X. Zhu, and J. Xi, “Full-view three-dimensional measurement of complex surfaces,” Opt. Eng. 57(10), 104106 (2018).
[Crossref]

Xu, J.

Yang, Y.

L. Song, Y. Ru, Y. Yang, Q. Guo, X. Zhu, and J. Xi, “Full-view three-dimensional measurement of complex surfaces,” Opt. Eng. 57(10), 104106 (2018).
[Crossref]

Yin, W.

W. Yin, S. Feng, T. Tao, L. Huang, M. Trusiak, Q. Chen, and C. Zuo, “High-speed 3d shape measurement using the optimized composite fringe patterns and stereo-assisted structured light system,” Opt. Express 27(3), 2411–2431 (2019).
[Crossref]

W. Yin, C. Zuo, S. Feng, T. Tao, Y. Hu, L. Huang, J. Ma, and Q. Chen, “High-speed three-dimensional shape measurement using geometry-constraint-based number-theoretical phase unwrapping,” Opt. Lasers Eng. 115, 21–31 (2019).
[Crossref]

S. Feng, Q. Chen, G. Gu, T. Tao, L. Zhang, Y. Hu, W. Yin, and C. Zuo, “Fringe pattern analysis using deep learning,” Adv. Photonics 1(2), 025001 (2019).
[Crossref]

C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Lasers Eng. 109, 23–59 (2018).
[Crossref]

Yin, Y.

X. Liu, Z. Cai, Y. Yin, H. Jiang, D. He, W. He, Z. Zhang, and X. Peng, “Calibration of fringe projection profilometry using an inaccurate 2d reference target,” Opt. Lasers Eng. 89, 131–137 (2017).
[Crossref]

Zhang, G.

Zhang, L.

S. Feng, Q. Chen, G. Gu, T. Tao, L. Zhang, Y. Hu, W. Yin, and C. Zuo, “Fringe pattern analysis using deep learning,” Adv. Photonics 1(2), 025001 (2019).
[Crossref]

S. Feng, L. Zhang, C. Zuo, T. Tao, Q. Chen, and G. Gu, “High dynamic range 3d measurements with fringe projection profilometry: a review,” Mea. Sci. Technol. 29(12), 122001 (2018).
[Crossref]

Zhang, M.

C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Lasers Eng. 85, 84–103 (2016).
[Crossref]

Zhang, Q.

X. Su and Q. Zhang, “Dynamic 3-d shape measurement method: a review,” Opt. Lasers Eng. 48(2), 191–204 (2010).
[Crossref]

Zhang, S.

S. Zhang, “Absolute phase retrieval methods for digital fringe projection profilometry: A review,” Opt. Lasers Eng. 107, 28–37 (2018).
[Crossref]

J.-S. Hyun, G. T.-C. Chiu, and S. Zhang, “High-speed and high-accuracy 3d surface measurement using a mechanical projector,” Opt. Express 26(2), 1474–1487 (2018).
[Crossref]

S. Zhang, “High-speed 3d shape measurement with structured light methods: A review,” Opt. Lasers Eng. 106, 119–131 (2018).
[Crossref]

S. Zhang and P. S. Huang, “Novel method for structured light system calibration,” Opt. Eng. 45(8), 083601 (2006).
[Crossref]

Zhang, Z.

X. Liu, Z. Cai, Y. Yin, H. Jiang, D. He, W. He, Z. Zhang, and X. Peng, “Calibration of fringe projection profilometry using an inaccurate 2d reference target,” Opt. Lasers Eng. 89, 131–137 (2017).
[Crossref]

Z. Zhang, “Review of single-shot 3d shape measurement by phase calculation-based fringe projection techniques,” Opt. Lasers Eng. 50(8), 1097–1106 (2012).
[Crossref]

Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000).
[Crossref]

Zhu, X.

L. Song, Y. Ru, Y. Yang, Q. Guo, X. Zhu, and J. Xi, “Full-view three-dimensional measurement of complex surfaces,” Opt. Eng. 57(10), 104106 (2018).
[Crossref]

Zollhöfer, M.

M. Nießner, M. Zollhöfer, S. Izadi, and M. Stamminger, “Real-time 3d reconstruction at scale using voxel hashing,” ACM Trans. Graph. 32(6), 1–11 (2013).
[Crossref]

Zuo, C.

W. Yin, C. Zuo, S. Feng, T. Tao, Y. Hu, L. Huang, J. Ma, and Q. Chen, “High-speed three-dimensional shape measurement using geometry-constraint-based number-theoretical phase unwrapping,” Opt. Lasers Eng. 115, 21–31 (2019).
[Crossref]

W. Yin, S. Feng, T. Tao, L. Huang, M. Trusiak, Q. Chen, and C. Zuo, “High-speed 3d shape measurement using the optimized composite fringe patterns and stereo-assisted structured light system,” Opt. Express 27(3), 2411–2431 (2019).
[Crossref]

S. Feng, Q. Chen, G. Gu, T. Tao, L. Zhang, Y. Hu, W. Yin, and C. Zuo, “Fringe pattern analysis using deep learning,” Adv. Photonics 1(2), 025001 (2019).
[Crossref]

C. Zuo, T. Tao, S. Feng, L. Huang, A. Asundi, and Q. Chen, “Micro fourier transform profilometry (µftp): 3d shape measurement at 10,000 frames per second,” Opt. Lasers Eng. 102, 70–91 (2018).
[Crossref]

C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Lasers Eng. 109, 23–59 (2018).
[Crossref]

S. Feng, L. Zhang, C. Zuo, T. Tao, Q. Chen, and G. Gu, “High dynamic range 3d measurements with fringe projection profilometry: a review,” Mea. Sci. Technol. 29(12), 122001 (2018).
[Crossref]

C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Lasers Eng. 85, 84–103 (2016).
[Crossref]

C. Zuo, Q. Chen, G. Gu, S. Feng, F. Feng, R. Li, and G. Shen, “High-speed three-dimensional shape measurement for dynamic scenes using bi-frequency tripolar pulse-width-modulation fringe projection,” Opt. Lasers Eng. 51(8), 953–960 (2013).
[Crossref]


Supplementary Material (3)

» Visualization 1: The 3D measurement results of a standard ceramic sphere.
» Visualization 2: The measurement results of a Voltaire model.
» Visualization 3: The measurement results of a Venus model.


Figures (7)

Fig. 1. The schematic diagram of the ideal reflection model for the plane mirror.
Fig. 2. The diagram of the mirror-assisted FPP system.
Fig. 3. The process of obtaining panoramic 3D shape measurements of objects with complex surfaces.
Fig. 4. The measurement result of the circular calibration board. (a) One pose of the circular calibration board provides 15 feature point pairs. (b) The 3D data of the virtual points. (c) The 3D data of the real points.
Fig. 5. The 3D measurement results of a standard ceramic sphere (Visualization 1). (a)–(c) The single-view 3D measurement results. (d)–(f) The corresponding error distributions of (a)–(c). (g)–(i) The full-surface 3D measurement results. (j)–(l) The corresponding error distributions of (g)–(i).
Fig. 6. The measurement results of a Voltaire model (Visualization 2). (a) The full-surface 3D reconstruction of the Voltaire model. (b)–(d) The corresponding results of (a) from three different views.
Fig. 7. The measurement results of a Venus model (Visualization 3). (a) The full-surface 3D reconstruction of the Venus model. (b)–(d) The corresponding results of (a) from three different views.

Tables (1)

Table 1. Comparison of calibration residual errors at different steps.

Equations (24)

\[ \overrightarrow{OX^r} = \overrightarrow{OX^o} + \overrightarrow{X^o X^r}. \tag{1} \]
\[ \overrightarrow{OX^r} = \overrightarrow{OX^o} + 2 d^{or} \mathbf{n}^r. \tag{2} \]
\[ d^{or} = d_w^r - \overrightarrow{OX^o} \cdot \mathbf{n}^r, \tag{3} \]
\[ \overrightarrow{OX^r} = \overrightarrow{OX^o} + 2 d_w^r \mathbf{n}^r - 2 \mathbf{n}^r \left( \overrightarrow{OX^o} \cdot \mathbf{n}^r \right). \tag{4} \]
\[ X^r = \left[ I - 2 \mathbf{n}^r (\mathbf{n}^r)^T \right] X^o + 2 d_w^r \mathbf{n}^r, \tag{5} \]
\[ \begin{bmatrix} X^r \\ 1 \end{bmatrix} = D^r \begin{bmatrix} X^o \\ 1 \end{bmatrix}. \tag{6} \]
\[ D^r = \begin{bmatrix} I - 2 \mathbf{n}^r (\mathbf{n}^r)^T & 2 d_w^r \mathbf{n}^r \\ \mathbf{0} & 1 \end{bmatrix}. \tag{7} \]
\[ D^r \begin{bmatrix} X^r \\ 1 \end{bmatrix} = \begin{bmatrix} X^o \\ 1 \end{bmatrix}. \tag{8} \]
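The homogeneous reflection matrix \(D^r\) is an involution: applying it twice returns the identity, which is why it maps a real point to its mirror image and, equally, the virtual point back to the real one. A minimal NumPy sketch of this construction (the function name is illustrative, not from the paper):

```python
import numpy as np

def reflection_matrix(n, d):
    """4x4 homogeneous reflection matrix for a mirror plane with
    unit normal n = (a, b, c) at distance d from the world origin."""
    n = np.asarray(n, dtype=float)
    n /= np.linalg.norm(n)                        # the derivation assumes a unit normal
    D = np.eye(4)
    D[:3, :3] = np.eye(3) - 2.0 * np.outer(n, n)  # Householder reflection part
    D[:3, 3] = 2.0 * d * n                        # translation part
    return D

# Example mirror: the plane x = 1 (normal along x, distance 1).
D = reflection_matrix([1.0, 0.0, 0.0], 1.0)

# A reflection is an involution: applying it twice is the identity.
assert np.allclose(D @ D, np.eye(4))

# The real point (0, 0, 0) reflects to (2, 0, 0) across the plane x = 1.
Xo = np.array([0.0, 0.0, 0.0, 1.0])
Xr = D @ Xo
print(Xr[:3])   # [2. 0. 0.]
```

The involution property is exactly the reason the same matrix transforms virtual-camera coordinates back into real-camera coordinates.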
\[ [1 - 2(a^r)^2] x^r - 2 a^r b^r y^r - 2 a^r c^r z^r + 2 a^r d_w^r = x^o, \tag{9} \]
\[ -2 a^r b^r x^r + [1 - 2(b^r)^2] y^r - 2 b^r c^r z^r + 2 b^r d_w^r = y^o, \tag{10} \]
\[ -2 a^r c^r x^r - 2 b^r c^r y^r + [1 - 2(c^r)^2] z^r + 2 c^r d_w^r = z^o. \tag{11} \]
\[ a^r (y^r - y^o) + b^r (x^o - x^r) = 0, \tag{12} \]
\[ a^r (z^r - z^o) + c^r (x^o - x^r) = 0, \tag{13} \]
\[ b^r (z^r - z^o) + c^r (y^o - y^r) = 0. \tag{14} \]
\[ \begin{bmatrix} y^r - y^o & x^o - x^r & 0 \\ z^r - z^o & 0 & x^o - x^r \\ 0 & z^r - z^o & y^o - y^r \end{bmatrix} \begin{bmatrix} a^r \\ b^r \\ c^r \end{bmatrix} = \mathbf{0}. \tag{15} \]
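Stacking this homogeneous system for all N feature point pairs, the mirror normal \((a^r, b^r, c^r)\) is the null-space direction of the stacked matrix, obtained in practice as the right singular vector of the smallest singular value. A sketch under the assumption of noise-free synthetic pairs (function name illustrative):

```python
import numpy as np

def estimate_mirror_normal(real_pts, virt_pts):
    """Estimate the unit normal of the mirror plane from N feature point
    pairs by stacking the three homogeneous constraints per pair and
    taking the right singular vector of the smallest singular value."""
    A = []
    for (xo, yo, zo), (xr, yr, zr) in zip(real_pts, virt_pts):
        A.append([yr - yo, xo - xr, 0.0])
        A.append([zr - zo, 0.0, xo - xr])
        A.append([0.0, zr - zo, yo - yr])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    n = Vt[-1]                      # null-space direction of the stacked system
    return n / np.linalg.norm(n)

# Synthetic check: point pairs mirrored across the plane x = 1 (normal (1, 0, 0)).
rng = np.random.default_rng(0)
real = rng.uniform(-1, 1, size=(10, 3))
virt = real.copy()
virt[:, 0] = 2.0 - real[:, 0]       # mirror image across x = 1
n = estimate_mirror_normal(real, virt)
print(n)   # ≈ ±[1, 0, 0] (the sign is arbitrary)
```

With noisy measurements the same SVD step yields the least-squares solution, which is why one pose of the calibration board (15 point pairs in Fig. 4) already over-determines the normal.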
\[ f(d_w^r) = \sum_{n=1}^{N} r_1^2(d_w^r) + r_2^2(d_w^r) + r_3^2(d_w^r), \tag{16} \]
\[ r_1(d_w^r) = 2 a_0^r d_w^r + [1 - 2(a_0^r)^2] x^r - 2 a_0^r b_0^r y^r - 2 a_0^r c_0^r z^r - x^o, \tag{17} \]
\[ r_2(d_w^r) = 2 b_0^r d_w^r - 2 a_0^r b_0^r x^r + [1 - 2(b_0^r)^2] y^r - 2 b_0^r c_0^r z^r - y^o, \tag{18} \]
\[ r_3(d_w^r) = 2 c_0^r d_w^r - 2 a_0^r c_0^r x^r - 2 b_0^r c_0^r y^r + [1 - 2(c_0^r)^2] z^r - z^o, \tag{19} \]
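Because each residual \(r_i\) is linear in \(d_w^r\) once the initial normal \((a_0^r, b_0^r, c_0^r)\) is fixed, minimizing \(f(d_w^r)\) reduces to a one-dimensional linear least-squares problem with a closed-form solution. A sketch under that assumption (function name illustrative):

```python
import numpy as np

def fit_mirror_distance(n0, real_pts, virt_pts):
    """Least-squares fit of the mirror distance d_w given a fixed initial
    normal n0: each residual has the form 2*n0_i*d + (known constant),
    so stacking all pairs gives A*d ≈ b, solved in closed form."""
    a, b, c = np.asarray(n0, dtype=float) / np.linalg.norm(n0)
    A, rhs = [], []
    for (xo, yo, zo), (xr, yr, zr) in zip(real_pts, virt_pts):
        A += [2 * a, 2 * b, 2 * c]
        rhs += [xo - ((1 - 2 * a * a) * xr - 2 * a * b * yr - 2 * a * c * zr),
                yo - (-2 * a * b * xr + (1 - 2 * b * b) * yr - 2 * b * c * zr),
                zo - (-2 * a * c * xr - 2 * b * c * yr + (1 - 2 * c * c) * zr)]
    d, *_ = np.linalg.lstsq(np.array(A)[:, None], np.array(rhs), rcond=None)
    return float(d[0])

# Synthetic pairs mirrored across the plane x = 1 (normal (1, 0, 0), d_w = 1).
rng = np.random.default_rng(0)
real = rng.uniform(-1, 1, size=(10, 3))
virt = real.copy()
virt[:, 0] = 2.0 - real[:, 0]
d_w = fit_mirror_distance([1.0, 0.0, 0.0], real, virt)
print(d_w)   # ≈ 1.0
```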
\[ \sum_{n=1}^{N} g_1^2(G) + g_2^2(G) + g_3^2(G), \tag{20} \]
\[ g_1(G) = [1 - 2(a^r)^2] x^r - 2 a^r b^r y^r - 2 a^r c^r z^r + 2 a^r d_w^r - x^o, \tag{21} \]
\[ g_2(G) = -2 a^r b^r x^r + [1 - 2(b^r)^2] y^r - 2 b^r c^r z^r + 2 b^r d_w^r - y^o, \tag{22} \]
\[ g_3(G) = -2 a^r c^r x^r - 2 b^r c^r y^r + [1 - 2(c^r)^2] z^r + 2 c^r d_w^r - z^o, \tag{23} \]
\[ \sum_{n=1}^{N} g_1^2(G, X_n^r) + g_2^2(G, X_n^r) + g_3^2(G, X_n^r). \tag{24} \]
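Refining all mirror parameters \(G = (a^r, b^r, c^r, d_w^r)\) jointly over the point pairs is a small nonlinear least-squares problem. One plausible implementation uses scipy.optimize.least_squares on the residuals \(g_1, g_2, g_3\); this is a sketch of the optimization step, not the authors' exact solver:

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(G, real_pts, virt_pts):
    """Stack the three reflection residuals for every feature point pair,
    with G = (a, b, c, d) the mirror parameters being refined jointly."""
    n = np.asarray(G[:3], dtype=float)
    a, b, c = n / np.linalg.norm(n)   # the reflection model assumes a unit normal
    d = G[3]
    res = []
    for (xo, yo, zo), (xr, yr, zr) in zip(real_pts, virt_pts):
        res.append((1 - 2*a*a)*xr - 2*a*b*yr - 2*a*c*zr + 2*a*d - xo)
        res.append(-2*a*b*xr + (1 - 2*b*b)*yr - 2*b*c*zr + 2*b*d - yo)
        res.append(-2*a*c*xr - 2*b*c*yr + (1 - 2*c*c)*zr + 2*c*d - zo)
    return np.asarray(res)

# Synthetic pairs mirrored across x = 1; start from a deliberately rough guess.
rng = np.random.default_rng(1)
real = rng.uniform(-1, 1, size=(15, 3))
virt = real.copy()
virt[:, 0] = 2.0 - real[:, 0]
sol = least_squares(residuals, x0=[0.9, 0.1, -0.05, 0.8], args=(real, virt))
n_hat = sol.x[:3] / np.linalg.norm(sol.x[:3])
print(n_hat, sol.x[3])   # ≈ [1, 0, 0] and d ≈ 1
```

Note the sign ambiguity \((\mathbf{n}^r, d_w^r) \to (-\mathbf{n}^r, -d_w^r)\): both parameterizations describe the same mirror, so the solver converges to whichever branch is nearer the initial guess.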

Metrics