Abstract

High-resolution, real-time three-dimensional (3D) measurement plays an important role in many fields. In this paper, a multi-directional dynamic real-time phase measurement profilometry based on improved optical flow is proposed. In five-step phase-shifting dynamic measurement, pixel matching is needed so that the pixels in the five patterns correspond one-to-one. In the pixel matching method most frequently used at present, however, obtaining the motion information of the measured object requires computing a correlation and traversing the whole deformed pattern. The huge amount of computation caused by the correlation takes up most of the time of the entire 3D reconstruction, so it cannot meet the requirement of real-time dynamic measurement. To solve this problem, an improved optical flow algorithm is introduced to replace the correlation calculation in pixel matching. In one measurement, five captured patterns are processed and the optical flow between each pair of adjacent frames is calculated, yielding four two-dimensional vector matrices that contain the complete motion information of the measured object. Experiments and simulations show that, while preserving accuracy, this method improves the efficiency of pixel matching by 42 times and of 3D reconstruction by 32 times.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

With the development of industry and technology, high resolution and real-time performance have become the trend in 3D measurement. Among the many measurement methods, optical ones, including Fourier transform profilometry (FTP) [1–3] and phase measurement profilometry (PMP) [4–10], are highly valued and widely studied for their efficiency and precision. Both reconstruct 3D shape by projecting, acquiring and processing deformed fringe patterns. Compared with the single-frame method (FTP), the multi-frame method (PMP) does not need filtering during 3D reconstruction, and therefore offers better anti-noise performance and higher precision.

In FTP, there is no difference between reconstruction in static and in dynamic measurement, as only a single frame needs to be projected and captured. However, because filtering is necessary in FTP, the loss of some effective information about the measured object is inevitable. In addition, when processing the spectral information of the measured object, spectrum aliasing is another factor that directly affects the accuracy of the reconstruction.

Compared to FTP, PMP has the advantage of higher accuracy in 3D measurement under low-noise conditions. However, at least three deformed patterns in which the pixels correspond one-to-one must be captured in PMP. Movement of the measured object misaligns the pixels across the deformed patterns, causing 3D reconstruction to fail. Therefore, pixel matching based on correlation calculation is carried out to bring the pixels into one-to-one correspondence. Correlation calculation compares the selected template pixel by pixel with every same-sized area in each deformed pattern. Due to the huge amount of calculation involved, the speed of 3D reconstruction is affected immensely.

To solve this problem, an online 3D measurement method based on the characteristics of low-modulation regions was proposed [11]. Because a low-modulation region has obvious gray-scale characteristics and moves with the measured object, it can be used as a feature template for pixel matching and reconstruction. This method not only preserves measurement accuracy but also improves measurement speed; however, it is still not fast enough for dynamic real-time measurement. Zhang et al. put forward methods such as using a GPU to accelerate the calculation [12]. This solves the real-time requirement, but the GPU increases hardware cost. Another method, based on feature tags, places a marker with noticeable features beside the measured object [13]. Because the conveyor belt moves during online measurement, the feature tag moves along with the measured object, and the marker's distinctive appearance makes it convenient to locate the object's displacement. Although adding artificial feature markers achieves high precision, making the markers is cumbersome and the approach lacks universality. In general, the existing dynamic measurement methods above suffer from excessive computation or additional hardware, which makes them impractical or raises measurement cost.

To overcome the above defects of dynamic PMP [14–16], the pixel matching algorithm must be improved and optimized. Real-time optical flow [17–22] is a method for real-time dynamic recognition in the deep learning field. Because of the harsh application conditions in the driverless field [23–27], it cannot achieve ultra-high precision (one pixel) there, although it satisfies the needs of that field. In dynamic real-time PMP, the accuracy of pixel matching is required to be within one pixel; however, the conditions of a 3D measurement platform are much better than road conditions in intelligent driving, and are very close to the ideal conditions for applying optical flow. In addition, a pattern enhancement algorithm and statistical processing are applied to improve the accuracy of the optical flow method. The basic principle is that the brightness in an optical flow window moves synchronously with the movement of the object. In this paper, an improved optical flow algorithm is proposed to replace the correlation calculation in pixel matching. A plaster face mask and a coin are measured in the experiment to prove that the method has good accuracy and detail recovery ability.

The remainder of this paper is organized as follows. In Section 2, we derive the complete optical flow algorithm. Simulation and experimental results of dynamic PMP based on the improved optical flow method are presented in Sections 3 and 4, respectively. In Section 5, the measurement accuracy and efficiency of the improved optical flow method are compared and discussed. Finally, conclusions are drawn in Section 6.

2. Principle

2.1 Principle of PMP

Figure 1 shows the 3D measurement system. The image acquisition system uses a charge-coupled device (CCD) camera, and the projection system uses a digital light processing (DLP) projector. The projected structured light consists of sinusoidal fringe patterns, and a five-step phase-shifting algorithm is used in this paper.


Fig. 1. 3D measurement system.


Without the measured object on the reference surface, the sinusoidal fringes projected by the DLP are modulated only by the reference plane. The phase shift of the n-th sinusoidal fringe pattern relative to the first frame is ${\delta _n}$. The acquired pattern is:

$${I_{0n}}({x,y} )= {R_0}({x,y} )[{A({x,y} )+ B({x,y} )\cdot \cos ({{\varphi_0}({x,y} )+ {\delta_n}} )} ],n = 1,2,3,4,5$$
In which:
$${\varphi _0}({x,y} )= 2\pi {f_0}x + {\emptyset _0}({x,y} )$$

where $({x,y})$ is the pixel coordinate on the CCD, $A({x,y})$ represents the background light intensity, and $B({x,y})$ represents the contrast of the sinusoidal pattern. ${R_0}({x,y} )$ is the reflectance distribution of the reference plane. ${\varphi _0}({x,y} )$ is the phase information in the first frame, and ${\emptyset _0}({x,y} )$ is the phase modulated by the reference plane. ${f_0}$ is the frequency of the sinusoidal fringes. The phase shift of the n-th deformed pattern relative to the first one is:

$${\delta _n} = \frac{{2\pi ({n - 1} )}}{5},n = 1,2,3,4,5$$

When the DLP projects sinusoidal fringes onto the measured object, the height of the measured object modulates the phase. The reflectivity varies from pixel to pixel in the deformed pattern, and the light intensity distribution of the deformed pattern ${I_n}({x,y} ),n = 1,2,3,4,5$ can be written as:

$${I_n}({x,y} )= R({x,y} )[{A({x,y} )+ B({x,y} )\cdot \cos ({\varphi ({x,y} )+ {\delta_n}} )} ],n = 1,2,3,4,5$$
In Eq. (4):
$$\varphi ({x,y} )= 2\pi {f_0}x + \emptyset ({x,y} )$$

Here $R({x,y})$ represents the reflectance distribution of the reference surface and the object surface. Equation (5) is the phase distribution of the deformed pattern, containing the phase information of both the measured object and the reference plane. $\emptyset ({x,y} )$ represents the phase modulated by the measured object.
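As an illustration, the deformed-pattern model of Eqs. (3)–(5) can be simulated directly. This is a minimal sketch: the parameter values (A, B, f0) and the function name are illustrative assumptions, not values from the paper.

```python
import numpy as np

def deformed_patterns(phi, R, A=0.5, B=0.5, f0=1/8, steps=5):
    """Simulate the phase-shifted deformed patterns of Eq. (4).

    phi : (H, W) phase modulated by the object (the second term of Eq. (5)), radians.
    R   : (H, W) surface reflectance distribution.
    """
    H, W = phi.shape
    x = np.arange(W)[None, :]                      # pixel column index
    phase = 2 * np.pi * f0 * x + phi               # Eq. (5)
    deltas = 2 * np.pi * np.arange(steps) / steps  # Eq. (3)
    return [R * (A + B * np.cos(phase + d)) for d in deltas]
```

With a flat phase and unit reflectance, the first pattern is simply $A + B\cos(2\pi f_0 x)$, which is a convenient sanity check.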

As the pixels of the measured object in the deformed patterns ${I_n}({x,y} ),n = 1,2,3,4,5$ do not correspond, pixel matching is essential before the phase calculation. Deformed patterns $I_n^{\prime}({x,y} ),n = 1,2,3,4,5$, in which the pixels correspond one-to-one, can be cut from ${I_n}({x,y} ),n = 1,2,3,4,5$. The phase is then computed by the following equation:

$$\varphi ({x,y} )= \arctan \left[ {\frac{{ - \mathop \sum \nolimits_{n = 1}^5 I_n^{\prime}({x,y} )\sin \frac{{2\pi ({n - 1} )}}{5}}}{{\mathop \sum \nolimits_{n = 1}^5 I_n^{\prime}({x,y} )\cos \frac{{2\pi ({n - 1} )}}{5}}}} \right]$$

Similarly, the phase information ${\varphi _0}({x,y} )$ of the reference plane can be obtained, and the phase change caused by the height of the measured object $\Delta \mathrm{\varphi}({x,y} )$ can be calculated by $ \varphi ({x,y} )- {\varphi _0}({x,y} )$. After phase unwrapping and phase-height mapping, the measured object can be reconstructed.
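A minimal numpy sketch of the five-step phase retrieval (using `arctan2` to resolve the quadrant; the sign of the numerator follows from substituting the pattern model $I_n = R[A + B\cos(\varphi + \delta_n)]$ into the sums):

```python
import numpy as np

def five_step_phase(patterns):
    """Wrapped phase from five phase-shifted patterns."""
    deltas = [2 * np.pi * (n - 1) / 5 for n in range(1, 6)]
    num = sum(I * np.sin(d) for I, d in zip(patterns, deltas))
    den = sum(I * np.cos(d) for I, d in zip(patterns, deltas))
    # Substituting I_n = R[A + B cos(phi + delta_n)] gives
    # num = -(5/2) R B sin(phi) and den = (5/2) R B cos(phi).
    return np.arctan2(-num, den)
```

The wrapped phase returned here still has to be unwrapped and mapped to height, as described above.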

2.2 Pixel matching based on improved optical flow

In pixel matching, the spectra of the deformed patterns are obtained by Fourier transform. To show the universality of this method, a Gaussian filter is used; the filtering window only needs to contain the main information of the fundamental frequency to meet the requirements of the method proposed in this paper. By filtering out the +1 spectral component and applying the inverse Fourier transform, we obtain ${P_n}({x,y} ),n = 1,2,3,4,5$, and the modulation patterns ${M_n}({x,y} ),n = 1,2,3,4,5$ follow from Eq. (7):

$${M_n}({x,y} )= abs[{{P_n}({x,y} )} ]= R({x,y} )B({x,y} ),n = 1,2,3,4,5$$

where $abs[\cdot]$ denotes the modulus, $R({x,y} )$ represents the surface reflectance of the measured object and the plane, and $B({x,y} )$ is the contrast of the projected grating. Usually $B({x,y} )$ is uniform and can be regarded as a constant, so the modulation ${M_n}({x,y} ),n = 1,2,3,4,5$ is linearly proportional to $R({x,y} )$, and the contrast of the measured object is reflected by the modulation patterns. As a result, the modulation information moves synchronously with the measured object, which is the basis of pixel matching based on modulation patterns.
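The modulation extraction described above can be sketched with a Gaussian window around the +1 component. The fringe frequency `f0_px` and filter width `sigma` below are illustrative assumptions; the filter only needs to pass the main lobe of the fundamental:

```python
import numpy as np

def modulation_pattern(I, f0_px, sigma=0.02):
    """Extract the modulation M = R*B of Eq. (7): FFT, Gaussian filter
    around the +1 fundamental, inverse FFT, modulus.

    f0_px : fringe frequency in cycles per pixel along x (assumed known).
    sigma : Gaussian filter width in normalized frequency (a free choice).
    """
    H, W = I.shape
    F = np.fft.fft2(I)
    fx = np.fft.fftfreq(W)[None, :]
    fy = np.fft.fftfreq(H)[:, None]
    # Gaussian window centred on the +1 component at (fx, fy) = (f0, 0)
    G = np.exp(-((fx - f0_px) ** 2 + fy ** 2) / (2 * sigma ** 2))
    P = np.fft.ifft2(F * G)
    return 2 * np.abs(P)   # factor 2: half the fringe energy sits in the -1 lobe
```

For a fringe $R[A + B\cos(2\pi f_0 x)]$ the +1 component carries amplitude $RB/2$, so the returned map is $RB$, consistent with Eq. (7).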

Because the gray values in ${M_n}({x,y} ),n = 1,2,3,4,5$ are concentrated in a relatively narrow range, which degrades the accuracy of the optical flow algorithm, the adaptive histogram equalization enhancement algorithm [28] is applied to spread them uniformly over the whole gray range. The resulting $M_n^{\prime}({x,y} ),n = 1,2,3,4,5$ benefits the subsequent optical flow calculation.

The Lucas-Kanade (LK) optical flow algorithm [29] is a two-frame difference optical flow estimation algorithm proposed by Bruce D. Lucas and Takeo Kanade.

In the LK optical flow algorithm, three assumptions should be satisfied approximately:

  • 1. Constant brightness: the brightness value (pixel gray value) of each pixel remains constant over time.
  • 2. Small displacement: the measured object does not move significantly, so the gray changing rate calculated by partial derivatives can reflect the movement of each pixel between the two adjacent frames.
  • 3. Spatial consistency: pixels adjacent in the previous frame are also adjacent in the next frame, so enough equations can be established for the pixel velocity in the x and y directions.

Suppose the previous frame is captured at time $t$ and the next frame at time $t + dt$. Then the intensities of a pixel in the previous and next frames are $I({x,y,t} )$ and $I({x + dx,y + dy,t + dt} )$, respectively.

According to the assumption of constant brightness:

$$I({x,y,t} )= I({x + dx,y + dy,t + dt} )$$
According to the hypothesis of small motion, the right side of the above formula is expanded by the Taylor series:
$$I({x + dx,y + dy,t + dt} )= I({x,y,t} )+ \frac{{dI}}{{dx}}dx + \frac{{dI}}{{dy}}dy + \frac{{dI}}{{dt}}dt + H.O.T.$$

$H.O.T.$ denotes the higher-order terms of the Taylor expansion, which can be neglected.

By combining the above two formulas, we can get:

$$\frac{{dI}}{{dx}}dx + \frac{{dI}}{{dy}}dy + \frac{{dI}}{{dt}}dt = 0$$

Dividing both sides of the equation by $dt$:

$$\frac{{dI}}{{dx}}\frac{{dx}}{{dt}} + \frac{{dI}}{{dy}}\frac{{dy}}{{dt}} = - \frac{{dI}}{{dt}}$$

$\frac{{dx}}{{dt}}$ and $\frac{{dy}}{{dt}}$ represent the moving speed of pixel in x and y directions in each two adjacent frames, respectively.

$\frac{{dI}}{{dx}}$ and $\frac{{dI}}{{dy}}$ represent the gray changing rate of each two adjacent pixels in x and y directions in one frame.

$\frac{{dI}}{{dt}}$ represents the change of the gray value at the time of t and $t\; + \; dt$.

For a single pixel, only one equation is available, which is not enough to solve for $\frac{{dx}}{{dt}}$ and $\frac{{dy}}{{dt}}$. However, the motion speed of pixels in a small area (the optical flow window) can be regarded as the same. Letting the number of pixels in one optical flow window be $n$ ($n = k \times k$), $n$ equations can be established in matrix form:

$$\left[ {\begin{array}{cc} {{I_{x1}}}&{{I_{y1}}}\\ {{I_{x2}}}&{{I_{y2}}}\\ {{I_{x3}}}&{{I_{y3}}}\\ \vdots & \vdots \\ {{I_{xn}}}&{{I_{yn}}} \end{array}} \right]\left[ {\begin{array}{c} {{V_x}}\\ {{V_y}} \end{array}} \right] = \left[ {\begin{array}{c} { - {I_{t1}}}\\ { - {I_{t2}}}\\ { - {I_{t3}}}\\ \vdots \\ { - {I_{tn}}} \end{array}} \right]$$

In which, ${I_x} = \frac{{dI}}{{dx}}$, ${I_y} = \frac{{dI}}{{dy}}$, ${V_x} = \frac{{dx}}{{dt}}$, ${V_y} = \frac{{dy}}{{dt}}$, ${I_t} = \frac{{dI}}{{dt}}$.

Written compactly:

$$A\vec{v} = - b$$

It can be solved by the least square method:

$$\vec{v} = {({{A^T}A} )^{ - 1}}{A^T}({ - b} )$$

The result is:

$$\left[ {\begin{array}{c} {{V_x}}\\ {{V_y}} \end{array}} \right] = {\left[ {\begin{array}{cc} {\sum I_{xi}^2}&{\sum {I_{xi}}{I_{yi}}}\\ {\sum {I_{xi}}{I_{yi}}}&{\sum I_{yi}^2} \end{array}} \right]^{ - 1}}\left[ {\begin{array}{c} { - \sum {I_{xi}}{I_{ti}}}\\ { - \sum {I_{yi}}{I_{ti}}} \end{array}} \right]$$
The displacement of each optical flow window in the x and y directions between two adjacent frames can thus be obtained, and the x-direction and y-direction optical flow calculated from frames N−1 and N are combined into a two-dimensional vector matrix denoted $I_{lk}^n({x,y} ),n = 1,2,3, \ldots ,N - 1$. Finally, pixel matching is completed through statistical analysis and the corresponding data processing. The efficiency of the method changes with the size of the selected optical flow window: the larger the window, the higher the acceptable object velocity, but at the cost of lower accuracy and more computation. The window size and the moving speed of the object should therefore be balanced. Note that an optical flow window yields the correct motion information as long as the same part of the measured object appears in the same window in two adjacent enhanced frames.
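A per-window least-squares solve of Eq. (15) can be sketched as follows. Gradients are taken with finite differences; a real implementation should also check that $A^TA$ is well conditioned (i.e. that the window is textured), and the function name is illustrative:

```python
import numpy as np

def lk_window_flow(I1, I2):
    """Least-squares LK flow (Eq. (15)) for one optical-flow window.

    I1, I2 : two k-by-k windows cut from consecutive enhanced modulation
             patterns; returns (Vx, Vy) in pixels per frame.
    """
    Ix = np.gradient(I1, axis=1).ravel()   # dI/dx within the window
    Iy = np.gradient(I1, axis=0).ravel()   # dI/dy within the window
    It = (I2 - I1).ravel()                 # dI/dt between the frames
    A = np.stack([Ix, Iy], axis=1)         # n-by-2 system of Eq. (12)
    v = np.linalg.solve(A.T @ A, -A.T @ It)
    return v[0], v[1]
```

For sub-pixel shifts of a smooth textured patch, the recovered velocity closely matches the true displacement, which is the small-displacement regime the three assumptions describe.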

3. Simulation

To verify the validity of the proposed method, the peaks function shown in Fig. 2(a) is used as the model of the measured object. To make the simulation closer to the real situation, noise with a variance of 0.005 mm and a signal-to-noise ratio (SNR) of 14.7747 dB is added. The resolution of the measured area is 768 × 576 pixels, and the object moves along the x-axis and y-axis simultaneously. Five deformed patterns are captured, and the first two of them, ${I_1}({x,y} )$ and ${I_2}({x,y} )$, are shown in Figs. 2(b) and 2(c).


Fig. 2. The measured object and the deformed patterns. (a) Measured object. (b) The first deformed pattern ${I_1}({x,y} )$. (c) The second deformed pattern ${I_2}({x,y} )$.


As shown in Fig. 3, the contrast of Fig. 3(b) is obviously higher than that of Fig. 3(a), which is extremely useful for improving the accuracy of the optical flow.


Fig. 3. Modulation and enhanced images of the first frame deformed pattern. (a) The modulation of the first frame ${M_1}({x,y} )$. (b) The first enhanced frame $M_1^{\prime}({x,y} )$.


The motion trace of the measured object is shown in Fig. 4. The measured object moves 20 pixels in the x direction and 10 pixels in the y direction between each two adjacent captured frames.


Fig. 4. Trace of measured object.


As shown in Fig. 5, four optical flow vector matrices are calculated by the optical flow algorithm. Each vector describes the magnitude and direction of the object's movement. After the motion information of the optical flow windows is obtained, the final motion information of the object follows from data processing.


Fig. 5. Optical flow of the first five frames. (a) Optical flow calculated by the first and second frames $I_{lk}^1({x,y} )$. (b) Optical flow calculated by the second and third frames $I_{lk}^2({x,y} )$. (c) Optical flow calculated by the third and fourth frames $I_{lk}^3({x,y} )$. (d) Optical flow calculated by the fourth and fifth frames $I_{lk}^4({x,y} )$.


Pixel matching is then completed, and the deformed patterns of the first and second frames after cropping are shown in Figs. 6(a) and 6(b). The points in these two patterns correspond one-to-one.


Fig. 6. The intercepted deformation pattern after pixel matching. (a) Deformed pattern is intercepted from the first frame image $I_1^{\prime}({x,y} )$. (b) Deformed pattern is intercepted from the second frame image $I_2^{\prime}({x,y} )$.


The object recovered after phase unwrapping and height mapping is shown in Fig. 7(a), and the error between the reconstructed and the original object is shown in Fig. 7(b). The maximum height error is less than 0.05 mm, and the root-mean-square (RMS) error is 0.032 mm. The simulation results show that the proposed method can complete 3D reconstruction with high precision.


Fig. 7. Recovered measured objects and errors. (a) The recovered measured objects. (b) The error with the original object.


To find out whether the proposed method keeps satisfactory accuracy in different noise environments, noise of different intensities is added to the simulation. Table 1 shows that the displacement obtained by pixel matching differs little from the standard displacement under the different noise conditions. Because the displacement used for the final pixel matching must be an integer, the measured displacement is rounded at the end. When the SNR is 11.9491 dB, 14.7747 dB and 21.46912 dB, the final displacement used for pixel matching is 20 pixels in every case. As the pixel matching results are identical, the accuracy of the final result is also exactly the same (RMS error with respect to the standard 3D data: 0.032 mm).


Table 1. Final displacement of the object and RMS error with standard 3D data under different noises

4. Experiment

To further verify the practicability of the proposed method, the experimental platform shown in Fig. 8 was constructed to perform dynamic 3D measurement of the human head shown in Fig. 9(a). The work plane is driven by a displacement platform (TSA300-B) and a stepping motor control box (SC300-1A). A sinusoidal fringe pattern is projected by the DLP projector (HCP-75X), and the five deformed patterns, in which the measured object appears at constant intervals, are captured by the CCD camera (GEV-B1610-SC000). The processor of the computer is a Core i5-8500. After the DLP projects the fringe pattern onto the fixed area, Figs. 9(b) and 9(c) show the first two frames ${I_1}({x,y} )$ and ${I_2}({x,y} )$ of the five deformed patterns captured by the CCD.


Fig. 8. The experimental platform.



Fig. 9. The measured object and deformed patterns. (a) Measured object. (b) The first deformed pattern ${I_1}({x,y} )$. (c) The second deformed patterns ${I_2}({x,y} )$.


The modulation patterns ${M_n}({x,y} ),n = 1,2,3,4,5$ are collected and enhanced. Figure 10(a) shows the gray-scale image before the enhancement, and Fig. 10(b) shows the image after the enhancement.


Fig. 10. Modulation and enhanced images of the first frame deformed pattern. (a) Modulation of the first frame ${M_1}({x,y} )$. (b) The first enhanced frame $M_1^{\prime}({x,y} )$.


The motion trace of the measured object is shown in Fig. 11. The measured object moves 25 pixels in the x direction and 10 pixels in the y direction between each two adjacent captured frames.


Fig. 11. Trace of measured object.


As shown in Fig. 12, four optical flow vector matrices are obtained by calculating the optical flow between each two adjacent frames of the five captured patterns. After the motion information of the optical flow windows is obtained, pixel matching can be completed by data processing.


Fig. 12. Optical flow of the first five frames. (a) Optical flow calculated by the first and second frames $I_{lk}^1({x,y} )$. (b) Optical flow calculated by the second and third frames $I_{lk}^2({x,y} )$. (c) Optical flow calculated by the third and fourth frames $I_{lk}^3({x,y} )$. (d) Optical flow calculated by the fourth and fifth frames $I_{lk}^4({x,y} )$.


An image including the measured object is intercepted from each frame of the deformed patterns after pixel matching. The first and the second one are shown in Figs. 13(a) and 13(b). The points in these two patterns are one-to-one corresponding. Figure 14 shows the recovered results of the object surface information in Fig. 9(a).


Fig. 13. The intercepted deformation pattern after pixel matching. (a) Deformed pattern is intercepted from the first frame image $I_1^{\prime}({x,y} )$. (b) Deformed pattern is intercepted from the second frame image $I_2^{\prime}({x,y} )$.



Fig. 14. Measured object.


To further verify that the proposed method retains most of the high-frequency information of the object, a coin with rich details, shown in Fig. 15(a), is measured. It moves in the same way as the face mask. Figure 15(b) shows the first frame of the deformed patterns.


Fig. 15. The measured coin and the first deformed fringe.


Figure 16 shows the 3D information of the recovered coin. It can be seen that the proposed method has high precision, and the result proves that it has satisfactory practicability and accuracy.


Fig. 16. The coin after 3D reconstruction.


5. Comparison and discussion

5.1 Accuracy

The proposed method can be divided into two parts: PMP for reconstructing the static 3D object, and pixel matching for converting the dynamic object into a static one. In PMP, five deformed patterns are captured and the object is reconstructed. The method proposed in this paper only improves pixel matching; the 3D reconstruction step is unchanged. The purpose of pixel matching is to obtain the motion information of the object so that the pixels of the object in the five frames correspond one-to-one; the 3D object is then reconstructed by PMP. Thus, the 3D object can be reconstructed accurately as long as accurate displacement information is obtained, which means the obtained motion information can be used to assess the accuracy of the method.

Since the error caused by pixel matching needs to be less than one pixel, it is necessary to process the optical flow data shown in Fig. 17 to obtain accurate object displacement.


Fig. 17. Optical flow statistics of numerical simulation. (a) Numerical simulation of optical flow displacement statistical distribution. (b) Numerical simulation of optical flow displacement cumulative distribution.


As shown in Fig. 17, the optical flow values in the $x$-axis direction calculated from the first and second frame images $I_{lk}^1({x,y} )$ in the numerical simulation are counted. It can be seen from Fig. 17(a) that a large amount of displacement data is concentrated near the standard value of 20 pixels, and far less data have an error greater than one pixel than within one pixel. However, the conditions of the numerical simulation are better than those of the experiment: owing to changes of background light intensity and environmental noise, the final optical flow values in the experiments are not as accurate as those in the simulations.

Figure 18 shows the optical flow values in the x-axis direction calculated from the first and second frame images in the experiment. Similar to the simulation results, which concentrate around the standard value of 20 pixels, the experimental results concentrate around 25 pixels in Fig. 18(a). In the simulations, the data are distributed from 18 to 24 pixels, a deviation of 3 pixels; in the experiment, owing to external interference, the data are distributed from 18 to 32 pixels, a deviation of 7 pixels. The distribution range of the data thus increases in the experiment.


Fig. 18. Optical flow statistics of experiment. (a) Statistical distribution of optical flow displacement in experiment. (b) Cumulative distribution of optical flow displacement in experiment.


This phenomenon has a certain impact on the accuracy of the final displacement. Therefore, two methods are used to process the data. One is to calculate the average of the global data directly. The other is to select the interval with the largest number of points together with its two adjacent intervals and average the data in these three intervals (the three-intervals principle). For example, the data in Fig. 17(a) select the three intervals from 19 to 21 pixels, and the data in Fig. 18(a) select those from 24 to 26 pixels. The rationale for the three-intervals principle is:

  • (a) In theory, all the data in this method should be equal.
  • (b) The optical flow field represents a movement trend, and the interval with the most concentrated points can be considered the mainstream trend, that is, the real movement of the measured object.
  • (c) As the pixel matching error needs to be within ±1 pixel, the interval with the most points and its two adjacent intervals are selected.
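The three-intervals principle above can be sketched with one-pixel histogram bins; the function name and bin handling are illustrative assumptions, and the final rounding mirrors the integer-pixel requirement of the matching step:

```python
import numpy as np

def three_interval_displacement(flow_values, bin_width=1.0):
    """Estimate the object displacement from per-window flow values.

    Histogram the flow, take the fullest bin plus its two neighbours
    (the three-intervals principle), average the samples falling in
    them, and round to the nearest integer pixel.
    """
    v = np.asarray(flow_values, dtype=float)
    edges = np.arange(np.floor(v.min()),
                      np.ceil(v.max()) + 2 * bin_width, bin_width)
    hist, edges = np.histogram(v, bins=edges)
    k = np.argmax(hist)                       # fullest interval
    lo = edges[max(k - 1, 0)]                 # left neighbour's lower edge
    hi = edges[min(k + 2, len(edges) - 1)]    # right neighbour's upper edge
    picked = v[(v >= lo) & (v < hi)]
    return int(round(picked.mean()))
```

Outliers far from the mainstream trend (the interference data mentioned above) fall outside the three selected intervals and so do not bias the average, unlike a global mean.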
To verify the accuracy of the three-intervals principle, three groups of comparative experiments are shown in Table 2. The movements of the measured object calculated by the three methods are all around 20 pixels, and all three work well owing to the ideal environment in the simulation. In the experiment, the results of the three-intervals principle and of the correlation calculation are similar, while the global-average result deviates from both, because the distribution range of the intervals, and hence the error, grows as the interference data increase in the experiment.


Table 2. The displacement of the objects in the first and second frames of the simulation and experiment in the x-axis direction calculated by the principle of global average and three-intervals.

As shown in Table 3, four sets of displacements are obtained by the different pixel matching methods. The displacement data and 3D reconstruction data from pixel matching based on correlation calculation are taken as the standard because of its high precision, despite its low efficiency. The results of pixel matching with correlation calculation and with the improved optical flow are basically consistent, and the RMS error of the 3D data recovered with the improved optical flow is only 0.043 mm.


Table 3. Displacement of objects in the first and second frames in the x-axis direction and y-axis direction, and the RMS error of 3D data reconstructed by the method of correlation calculation.

To show the high precision of the proposed method intuitively, the height data of the experimental object at x = 300 pixels, obtained with the improved optical flow method and with the correlation calculation method, are extracted, as shown in Fig. 19. The two height curves basically coincide, which proves that the proposed method improves measurement efficiency while preserving accuracy.


Fig. 19. The height of the 300th column of the measured mask, with improved optical flow method and correlation calculation method.


5.2 Efficiency

Many experiments have been conducted, and three of them are shown in Table 4. In pixel matching based on correlation calculation, the consumed times are 1.2679 s, 1.2781 s and 1.2683 s, averaging 1.2714 s. In pixel matching based on the improved optical flow, the consumed times are 0.0286 s, 0.0294 s and 0.0298 s, averaging 0.0293 s, which is around 42 times faster than correlation calculation.


Table 4. Time to complete pixel matching for once.

Table 5 shows the total time required for the two methods to complete 25 fps PMP. In both the correlation calculation and the optical flow calculation, a matching template or optical flow window of a certain size must be selected.


Table 5. Time spent on 3D reconstruction of 25fps.

Assume the resolution of the captured deformed pattern is $x \times y$, and the size of the selected correlation template and of the optical flow window is $m \times n$ ($m < x, n < y$). By experiment, the times required for one correlation operation and for one optical flow window operation are ${t_1}$ and ${t_2}$, respectively, with ${t_1} < {t_2}$. The total number of correlation calculations is ${N_1} = ({x - m} )\times ({y - n} )$, while the number of optical flow calculations is ${N_2} = ({x/m} )\times ({y/n} )$, and $({x - m} )\times ({y - n} )\gg ({x/m} )\times ({y/n} )$. Therefore the total times satisfy ${T_1} = {t_1}{N_1} > {T_2} = {t_2}{N_2}$. As a concrete case, in one experiment the resolution is 795 × 743 pixels, the template size is 300 × 300 and the optical flow window size is 24 × 24, so 219285 correlation operations but only 990 optical flow operations must be calculated.
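The operation counts quoted above can be reproduced directly from the figures given in the text (the non-overlapping-window count uses integer division):

```python
# Operation counts for the example quoted in the text.
x, y = 795, 743      # resolution of the captured deformed pattern
m = n = 300          # correlation template size
w = 24               # optical-flow window size

n_corr = (x - m) * (y - n)    # template slides pixel by pixel
n_flow = (x // w) * (y // w)  # one solve per non-overlapping window

print(n_corr, n_flow)         # 219285 990
```

The ratio of the two counts, not the per-operation cost, is what dominates the overall speed-up.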

In order to meet the requirements of real-time PMP, at least 24 frames per second should be projected and processed. As Table 5 shows, the common dynamic PMP based on correlation calculation consumes 31.2770 seconds, which cannot meet this requirement. The real-time dynamic PMP based on optical flow proposed in this paper needs only 0.9534 seconds, which satisfies the real-time requirement on the premise of ensuring accuracy.
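
Using the totals in Table 5, the real-time criterion of at least 24 frames per second can be checked directly (a sketch; the variable names are ours):

```python
FRAMES = 25  # frames in one 25 fps measurement sequence

t_corr = 31.2770  # seconds for 3D reconstruction, correlation method (Table 5)
t_flow = 0.9534   # seconds for 3D reconstruction, optical flow method (Table 5)

fps_corr = FRAMES / t_corr  # about 0.8 fps, far below real time
fps_flow = FRAMES / t_flow  # about 26.2 fps, above the 24 fps threshold
speedup = t_corr / t_flow   # around 32 times overall
```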

6. Summary

In this paper, an improved optical flow algorithm for dynamic phase measurement profilometry is introduced to speed up 3D reconstruction. Because pixel matching is needed in dynamic PMP, the optical flow algorithm used for motion recognition in the field of deep learning is adopted and optimized. The collected images are preprocessed, and the displacement of each optical flow window is calculated between every two adjacent deformed patterns. Finally, the motion information of the measured object is obtained by applying the three-intervals principle to the displacements of all optical flow windows. Compared with pixel matching based on correlation calculation, the efficiency of 3D reconstruction is enormously improved: on the premise of guaranteeing accuracy, the proposed method improves pixel matching efficiency by 42 times and 3D reconstruction efficiency by 32 times. Both simulation and experiment prove that the method is practical and feasible.
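
The per-window displacement step summarized above can be sketched with the standard Lucas-Kanade least-squares solve [29]; this NumPy version is illustrative rather than the authors' implementation, and the modulation preprocessing and three-intervals aggregation are omitted:

```python
import numpy as np

def lk_window_displacement(w1, w2):
    """Least-squares solve of Ix*vx + Iy*vy = -It over one optical flow window.

    w1, w2: the same window cut from two adjacent deformed patterns.
    Returns the estimated displacement (vx, vy) in pixels.
    """
    w1 = w1.astype(float)
    w2 = w2.astype(float)
    iy, ix = np.gradient(w1)  # spatial gradients (axis 0 -> y, axis 1 -> x)
    it = w2 - w1              # temporal difference between the two frames
    a = np.stack([ix.ravel(), iy.ravel()], axis=1)
    b = -it.ravel()
    v, *_ = np.linalg.lstsq(a, b, rcond=None)  # v = (A^T A)^(-1) A^T b
    return v[0], v[1]

# Synthetic check: a fringe-like pattern shifted by one pixel along x.
xx, yy = np.meshgrid(np.arange(64), np.arange(64))
w1 = np.sin(2 * np.pi * xx / 16) + 0.5 * np.sin(2 * np.pi * yy / 20)
w2 = np.sin(2 * np.pi * (xx - 1) / 16) + 0.5 * np.sin(2 * np.pi * yy / 20)
vx, vy = lk_window_displacement(w1, w2)  # vx is close to 1, vy close to 0
```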

Funding

The First Batch of Cooperative Education Projects of Production and Learning (201901270002); Hubei Province University Student Innovation and Entrepreneurship Practice Project (s201910512055); National Natural Science Foundation of China (61704050); 863 National Plan Foundation (2007AA01Z333); National Scholarship Fund (201406240033).

Disclosures

The authors declare no conflicts of interest.

References

1. Y. Xu, S. Jia, Q. Bao, H. Chen, and J. Yang, “Recovery of absolute height from wrapped phase maps for fringe projection profilometry,” Opt. Express 22(14), 16819 (2014). [CrossRef]  

2. C. Li, Y. Cao, and C. Chen, “Computer-generated Moiré profilometry,” Opt. Express 25(22), 26815 (2017). [CrossRef]  

3. B. Li, Z. Liu, and S. Zhang, “Motion-induced error reduction by combining Fourier transform profilometry with phase-shifting profilometry,” Opt. Express 24(20), 23289 (2016). [CrossRef]  

4. Lei Lu, Jiangtao Xi, Yanguang Yu, and Qinghua Guo, “New approach to improve the accuracy of 3-d shape measurement of moving object using phase shifting profilometry,” Opt. Express 21(25), 30610–30622 (2013). [CrossRef]  

5. Y. Xu, S. Jia, Q. Bao, H. Chen, and J. Yang, “Recovery of absolute height from wrapped phase maps for fringe projection profilometry,” Opt. Express 22(14), 16819–16828 (2014). [CrossRef]  

6. Zewei Cai, Xiaoli Liu, and Hao Jiang, “Flexible phase error compensation based on Hilbert transform in phase shifting profilometry,” Opt. Express 23(19), 25171 (2015). [CrossRef]  

7. L. Song, Y. Chang, Z. Li, P. Wang, G. Xing, and J. Xi, “Application of global phase filtering method in multi frequency measurement,” Opt. Express 22(11), 13641–13647 (2014). [CrossRef]  

8. Zhongwei Li and Youfu Li, “Gamma-distorted fringe image modeling and accurate gamma correction for fast phase measuring profilometry,” Opt. Lett. 36(2), 154–156 (2011). [CrossRef]  

9. L. Ekstrand and S. Zhang, “Three-dimensional profilometry with nearly focused binary phase-shifting algorithms,” Opt. Lett. 36(23), 4518–4520 (2011). [CrossRef]  

10. K. Peng, Y. Cao, and Y. Wu, “A new pixel matching method using the modulation of shadow areas in online 3D measurement,” Optics and Lasers in Engineering 51(9), 1078–1084 (2013). [CrossRef]  

11. Peng Kuang and Yi-Ping Cao, “On-Line Three-Dimensional Measurement Method Based on Low Modulation Feature,” Chinese journal of lasers 40(7), 0708006 (2013).

12. S. Zhang, D. Royer, and S.-T. Yau, “GPU-assisted high-resolution, real-time 3-D shape measurement,” Opt. Express 14(20), 9120–9129 (2006). [CrossRef]

13. Y. Ruizhi and C. Yiping, “An Online 3D Detection Method for Workpieces Using Phase Measurement Profilometry,” Acta Photonica Sinica 37(6), 1139–1143 (2008).

14. I. Bhattacharya, P. Ghosh, and S. Biswas, “Offline Signature Verification Using Pixel Matching Technique,” Procedia Technology 10(1), 970–977 (2013). [CrossRef]  

15. P. Kuang, Y.-P. Cao, and K. Li, “A new pixel matching method using the entire modulation of the measured object in online PMP,” Optik 125(1), 137–140 (2014). [CrossRef]

16. Wei-Hung Su, “Projected fringe profilometry using the area-encoded algorithm for spatially isolated and dynamic objects,” Opt. Express 16(4), 2590–2596 (2008). [CrossRef]  

17. Qi Zhao, Fangmin Li, and Xinhua Liu, “Real-Time Visual Odometry Based on Optical Flow and Depth Learning,” 2018 10th International Conference on Measuring Technology and Mechatronics Automation (ICMTMA). IEEE, 2018.

18. Menandro Roxas and Takeshi Oishi, “Real-Time Simultaneous 3D Reconstruction and Optical Flow Estimation,” 2018 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE Computer Society, 2018.

19. René Schuster, Oliver Wasenmüller, and Georg Kuschk, “SceneFlowFields: Dense Interpolation of Sparse Scene Flow Correspondences,” IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 2018.

20. J. Diaz, E. Ros, and F. Pelayo, “FPGA-based real-time optical-flow system,” IEEE Trans. Circuits Syst. Video Technol. 16(2), 274–279 (2006). [CrossRef]  

21. L. Sirovich and M. Kirby, “Low-Dimensional Procedure for the Characterization of Human Faces,” J. Opt. Soc. Am. A 4(3), 519–524 (1987). [CrossRef]  

22. B. K. P. Horn and B. G. Schunck, “Determining optical flow,” Artificial Intelligence 17(1-3), 185–203 (1981). [CrossRef]  

23. J. Janai, F. Güney, and A. Behl, “Computer Vision for Autonomous Vehicles: Problems, Datasets and State of the Art,” (2017).

24. P. Zhang, W. Ouyang, and P. Zhang, “SR-LSTM: State Refinement for LSTM towards Pedestrian Trajectory Prediction,” (2019).

25. E. Yurtsever, J. Lambert, and A. Carballo, “A Survey of Autonomous Driving: Common Practices and Emerging Technologies,” IEEE Access 8, 58443 (2020). [CrossRef]

26. B. Paden, M. Cap, and S. Z. Yong, “A Survey of Motion Planning and Control Techniques for Self-driving Urban Vehicles,” IEEE Trans. Intell. Veh. 1(1), 33–55 (2016). [CrossRef]  

27. D. Gonzalez, J. Perez, and V. Milanes, “A Review of Motion Planning Techniques for Automated Vehicles,” IEEE Trans. Intell. Transport. Syst. 17(4), 1135–1145 (2016). [CrossRef]  

28. Karel Zuiderveld, “Contrast Limited Adaptive Histogram Equalization,” in Graphics Gems IV (Academic Press Professional, San Diego, 1994), pp. 474–485.

29. B. Lucas and T. Kanade, “An Iterative Image Registration Technique with an Application to Stereo Vision,” Proc. of 7th International Joint Conference on Artificial Intelligence (IJCAI), pp. 674–679 (1981).
