Optica Publishing Group

Automated approach for the surface profile measurement of moving objects based on PSP

Open Access

Abstract

Phase shifting profilometry (PSP) can achieve high accuracy in the 3D shape measurement of static objects, but errors are introduced when the object moves during the measurement. The fundamental cause is that PSP requires multiple fringe patterns while its reconstruction model does not account for object movement. This paper proposes a new method to automatically measure the 3D shape of a rigid object undergoing arbitrary 2D movement. First, the object movement is tracked by the SIFT algorithm, and the rotation matrix and translation vector describing the movement are estimated. Then, with a reconstruction model that incorporates the movement information, a least-squares algorithm is applied to retrieve the correct phase value. The proposed method significantly reduces the errors caused by object movement. The whole reconstruction process requires no human intervention, so the method has high potential for industrial applications. Experiments are presented to verify its effectiveness.

© 2017 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Automated three-dimensional (3D) surface profile measurement of moving objects is an essential requirement in many scenarios (e.g., assembly-line product inspection). Fringe pattern profilometry (FPP) is one of the most widely used technologies for 3D surface profile measurement because of its non-contact operation, high speed and high accuracy [1–6]. Fourier transform profilometry (FTP) and phase shifting profilometry (PSP) are the typical and popular methods implementing FPP. FTP uses only a single fringe pattern to probe the object surface [7], and is therefore suitable for moving object measurement. However, the reconstruction accuracy of FTP is easily affected by overlap between the zero-order and fundamental components of the captured fringe pattern. On the other hand, PSP requires multiple fringe patterns (normally at least three) to reconstruct the 3D profile of the object [8]. Since multiple phase-shifted fringe patterns are used, PSP offers high accuracy and robustness against ambient light and reflectivity variations. However, the object must remain static during the projection and capture of the multiple fringe patterns. If the object moves, errors are introduced into the reconstructed result.

In order to measure the 3D shape of a moving object with high accuracy, Hu and He [9] proposed an improved π phase-shifting FTP algorithm. The method can measure an object moving at a constant speed in a fixed direction. During the measurement, only one fringe pattern is projected onto the object. The fringe pattern comprises two regions with a π phase shift between them, and two line-scan cameras capture the deformed fringe pattern in the two regions respectively. In order to find the corresponding points between the two regions, the object must move at a constant velocity in a direction perpendicular to the line-scan direction. Finally, the π phase-shifting FTP algorithm is used to reconstruct the object. As two images are used in the reconstruction, the accuracy is improved compared with the traditional FTP algorithm. However, the application is limited because the movement speed and direction must be known a priori. Zhang and Yau proposed a two-plus-one phase-shifting algorithm to measure moving objects [10]. Two sinusoidal fringe patterns and a uniform flat image are employed to reconstruct the object. As only the two sinusoidal fringe patterns contain the object profile information, the errors caused by movement are smaller than in traditional PSP. However, errors still occur when the object moves between the two sinusoidal fringe patterns. Chen, Cao et al. proposed a method to measure fast rotating objects [11]. Circular binary grating patterns are employed during the measurement and high-resolution, high-accuracy reconstruction results are achieved, but this method can only measure objects with two-dimensional (2D) rotational movement. High-speed cameras and projectors have also been used for moving object measurement [12]; however, the cost increases significantly.
In our recent paper, the moving object is reconstructed by analyzing the influence of the movement on the fringe patterns [13]. High-accuracy results are achieved when the object has only 2D movement. However, the method requires at least three markers to be placed on the object surface in advance, which is not suitable for an automated 3D profile measurement system. Therefore, it is highly desirable to develop an automated approach for the 3D measurement of moving objects.

In this paper, an automated approach based on PSP is proposed to reconstruct the 3D profile of a rigid object with arbitrary 2D movement. The proposed method not only inherits the advantages of PSP (such as high accuracy and robustness), but is also immune to the errors caused by object movement. Based on the scale-invariant feature transform (SIFT) algorithm, an automated approach is proposed to track the object movement among the multiple captured fringe patterns. A method for selecting the feature points obtained from the SIFT algorithm is then given. The selected feature points are used to calculate the rotation matrix and translation vector that describe the object movement mathematically. Then, with a reconstruction model that includes the movement information, a least-squares algorithm is proposed to retrieve the correct phase value. Finally, the errors caused by object movement are remedied and the object is reconstructed automatically with high accuracy.

This paper is organized as follows. Section 2 analyzes the limitation of traditional PSP. In Section 3, an automated approach for tracking the object movement is described; in order to obtain the pure object images used in the SIFT algorithm, two methods are given and compared, and a strategy for selecting the feature points is also described. In Section 4, a least-squares algorithm is described to retrieve the correct phase value. In Section 5, experimental results are given to verify the effectiveness of the proposed algorithm. Section 6 concludes this paper.

2. The limitation of PSP

The structure of a measurement system employing PSP is shown in Fig. 1. It includes one camera, one projector and one reference plane. During the measurement, the projector projects a set of sinusoidal fringe patterns (normally at least three) onto the reference plane, and they are captured by the camera. Then the reference plane is removed, and the same set of sinusoidal fringe patterns is projected onto the object surface and again acquired by the camera [14]. Because of the height of the object, the fringe patterns on the object are distorted compared with those on the reference plane. The height information can be retrieved from the phase difference between the object and the reference plane.

Fig. 1 The structure of the PSP system.

The details of the principle of PSP can be found in [13]. High-accuracy results can be obtained when the object is static. However, traditional PSP does not include any information about object movement; the reconstruction model only describes the fringe patterns when the object is static. This is the fundamental reason for the errors occurring in moving object measurement. Object movement during the measurement causes two violations in PSP: (1) the object positions are mismatched among the different fringe patterns, while PSP assumes a static object; (2) the phase shift values among the captured fringe patterns of a moving object are uneven, while PSP requires equal phase shifts.
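As a concrete illustration (a minimal NumPy sketch, not code from the paper), the standard N-step phase retrieval below recovers the wrapped phase exactly for a static object, while moving the object between two exposures corrupts the result — precisely the error source described above:

```python
import numpy as np

def psp_phase(frames):
    """Standard N-step PSP wrapped-phase retrieval.  Assumes a static
    object and equal phase shifts of 2*pi*(n-1)/N between the frames."""
    N = frames.shape[0]
    delta = (2 * np.pi * np.arange(N) / N).reshape(-1, *([1] * (frames.ndim - 1)))
    # For d_n = a + b*cos(phi + delta_n):
    #   sum_n d_n*sin(delta_n) = -(N*b/2)*sin(phi)
    #   sum_n d_n*cos(delta_n) =  (N*b/2)*cos(phi)
    return np.arctan2(-(frames * np.sin(delta)).sum(0),
                      (frames * np.cos(delta)).sum(0))

# Simulated 3-step capture of a 1-D phase profile.
phi = np.linspace(0.2, 2.0, 64)
frames = np.stack([100 + 50 * np.cos(phi + 2 * np.pi * n / 3) for n in range(3)])
static_err = np.abs(psp_phase(frames) - phi).max()   # essentially zero

# "Move" the object by 4 pixels between the second and third exposures.
moved = frames.copy()
moved[2] = np.roll(moved[2], 4)
moving_err = np.abs(psp_phase(moved) - phi).max()    # a substantial phase error
```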

3. Movement tracking and mathematical description

3.1 Movement tracking

In order to remove the influence of object movement on the reconstructed result, the movement information must first be obtained. In our recent research, the movement is tracked by three markers placed on the object surface in advance [13]. This method cannot be used in an automated 3D reconstruction system because it requires human intervention. This paper proposes to use the SIFT algorithm to track the object movement automatically. SIFT is a popular algorithm for detecting and describing local features in images. The detected object features are invariant to image translation, scaling and rotation [15]. However, the captured fringe patterns cannot be used directly in the SIFT algorithm. As the intensity values are used to retrieve the features, the fringes on the object surface act as “noise” for the SIFT algorithm. Furthermore, it should be noted that during the measurement, not only does the object move between the fringe patterns, but the fringe pattern itself is also shifted.

Two methods are proposed to remove the “fringe noise” in the captured fringe patterns. Inspired by FTP, the first method employs a filter to remove the fringes. The captured fringe pattern is first processed by the fast Fourier transform. Then a filter is used to remove the fundamental component in the frequency domain, leaving only the zero-order component. The zero-order component represents the slowly varying background light, and the fundamental component represents the fringes in the image [16]. The fringes are thus removed by the filter and the background light reflected from the object surface is retained. Finally, the inverse fast Fourier transform is applied to the zero-order component to obtain an object image without fringe patterns. However, because of spectral leakage, the fringes cannot be removed completely; residuals are left in the object image, as shown in Fig. 2.
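The first method can be sketched as follows (a minimal NumPy sketch; the `cutoff` band half-width is an assumed tuning parameter, not a value from the paper):

```python
import numpy as np

def remove_fringes_fft(image, cutoff=10):
    """Suppress the fringe (fundamental) component of a fringe-pattern
    image by keeping only a low-pass band around the zero-order term.
    `cutoff` is the half-width, in frequency bins, of the retained band."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = image.shape
    cy, cx = rows // 2, cols // 2
    # Low-pass mask: keep only the zero-order (background) component.
    mask = np.zeros_like(spectrum, dtype=float)
    mask[cy - cutoff:cy + cutoff + 1, cx - cutoff:cx + cutoff + 1] = 1.0
    background = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
    return np.real(background)

# Synthetic capture: uniform background plus vertical sinusoidal fringes.
x = np.arange(256)
img = np.tile(100 + 50 * np.cos(2 * np.pi * x / 16), (256, 1))  # 16 px period
clean = remove_fringes_fft(img, cutoff=5)  # fringe modulation removed
```

In practice the leakage mentioned above appears when the fringe frequency is not an exact bin or the filter edge clips the fundamental lobe.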

Fig. 2 Removing the fringes with a filter. (a) The original fringe pattern image; (b) The result after removing the fringes with the filter.

The second method employs a color camera to obtain a pure object image. It is well known that a color image includes three components: red, green and blue. The fringe patterns are projected in one of the three colors, in this paper red. The color camera then captures the fringe patterns reflected from the object surface. The red component of the captured image contains the fringe pattern information, which can be used to calculate the phase value. In the other two components, the red fringe patterns are “filtered out” and only the ambient light reflected from the object surface is captured. As the ambient light contains blue and green light, the pure object image required by the SIFT algorithm can be found in the blue or green component, as shown in Fig. 3.
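In code, this channel separation is a simple slicing operation; the sketch below assumes an (H, W, 3) array in OpenCV-style BGR channel order (an assumption of this sketch, not stated in the paper):

```python
import numpy as np

def split_fringe_and_texture(bgr_image):
    """Split a colour capture of red-projected fringes into the fringe
    channel (red, for phase retrieval) and a fringe-free texture channel
    (blue, for SIFT tracking).  Assumes BGR channel ordering."""
    blue = bgr_image[..., 0]   # ambient only: input to SIFT tracking
    red = bgr_image[..., 2]    # fringe pattern: input to phase retrieval
    return red, blue

# Synthetic capture: red carries fringes, blue carries smooth texture.
h, w = 8, 16
fringe = np.tile(100 + 50 * np.cos(2 * np.pi * np.arange(w) / 8), (h, 1))
texture = np.full((h, w), 60.0)
capture = np.stack([texture, np.zeros((h, w)), fringe], axis=-1)
red, blue = split_fringe_and_texture(capture)
```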

Fig. 3 The fringe pattern image in different components. (a) The captured object image with red fringe patterns; (b) The red component of Fig. 3(a); (c) The blue component of Fig. 3(a).

This paper employs the second method to obtain the pure object image, and the SIFT algorithm is applied to track the object movement. The SIFT algorithm extracts distinctive local image features that are invariant to image scale and rotation. It is also robust to noise, illumination variation, distortion and viewpoint changes [17]. Therefore, the SIFT algorithm can satisfy the requirements of 2D object movement tracking in PSP. Using the fringe-free object images, the feature points on the object are first detected, and then the correspondence of the feature points across the different object images is found. Figure 4 shows the detected feature points between different PSP images when the object undergoes rotational movement. The connecting lines show the corresponding relationships.

Fig. 4 The detected feature points and the corresponding relationship.

3.2 Feature point selection

In order to introduce the movement information into the reconstruction model, the object movement needs to be described mathematically. As only 2D movement is analyzed in this paper, a rotation matrix and a translation vector are used to describe the object movement. With the help of the feature points on the object surface in different fringe patterns, the SVD method is employed to calculate the rotation matrix and translation vector [18].

Assume two sets of corresponding points P = {p_i | i = 1, ..., N} and Q = {q_i | i = 1, ..., N}, where p_i are the feature points on the object before movement and q_i are the corresponding points after movement; N is the number of corresponding point pairs and i is the index. With the rotation matrix and translation vector, we have:

$$q_i = R p_i + T + \gamma_i \tag{1}$$
where R is a 2×2 rotation matrix, T is a translation vector (2×1 column matrix) and γ_i is a noise vector. In order to obtain the rotation matrix R and translation vector T, we need to minimize
$$\Sigma^2 = \sum_{i=1}^{N} \left\| q_i - (R p_i + T) \right\|^2 \tag{2}$$
Let us define
$$\bar{p} = \frac{1}{N}\sum_{i=1}^{N} p_i, \qquad p_i' = p_i - \bar{p}, \tag{3}$$
$$\bar{q} = \frac{1}{N}\sum_{i=1}^{N} q_i, \qquad q_i' = q_i - \bar{q}. \tag{4}$$
Then we have

$$\Sigma^2 = \sum_{i=1}^{N} \left\| q_i' - R p_i' \right\|^2 \tag{5}$$

Therefore, the estimates of the rotation matrix and translation vector (R̂, T̂) can be obtained in two steps: (1) find R̂ that minimizes Σ² in Eq. (5); (2) compute the translation vector as T̂ = q̄ − R̂p̄.

Define the 2×2 matrix H as

$$H = \sum_{i=1}^{N} p_i' \, q_i'^{\,T} \tag{6}$$

Computing the singular value decomposition H = UΛVᵀ, the optimal rotation matrix R̂ is obtained as

$$\hat{R} = V U^T \tag{7}$$
and the translation vector is determined as

$$\hat{T} = \bar{q} - \hat{R}\,\bar{p} \tag{8}$$
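Equations (1)–(8) translate directly into a short NumPy routine (a sketch of the SVD method of Arun et al. [18]; the reflection guard on det(R) is a standard safeguard, not spelled out in the text above):

```python
import numpy as np

def estimate_rigid_2d(P, Q):
    """Estimate R (2x2 rotation) and T (2-vector) such that Q_i ~ R @ P_i + T,
    following Eqs. (1)-(8).  P, Q: (N, 2) arrays of corresponding points."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)   # centroids (Eqs. 3-4)
    Pc, Qc = P - p_bar, Q - q_bar                   # centred sets p_i', q_i'
    H = Pc.T @ Qc                                   # H = sum_i p_i' q_i'^T (Eq. 6)
    U, S, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T                                  # R_hat = V U^T (Eq. 7)
    if np.linalg.det(R) < 0:                        # reject a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    T = q_bar - R @ p_bar                           # T_hat (Eq. 8)
    return R, T

# Recover a known pi/90 rotation plus translation from three point pairs.
theta = np.pi / 90
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
T_true = np.array([3.0, -4.0])
P = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
Q = P @ R_true.T + T_true
R_hat, T_hat = estimate_rigid_2d(P, Q)
```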

As only 2D movement is considered in this paper and the object is rigid, three pairs of corresponding points are sufficient to obtain the rotation matrix and translation vector. However, the SIFT algorithm retrieves all the feature points on the object, and for most objects the number of obtained feature points is much larger than three pairs (e.g., 31 pairs in Fig. 4).

The redundant feature points increase the computational cost of the SVD algorithm. For efficiency, a method is proposed to select three pairs of candidates from the feature points obtained by SIFT. The selection method is inspired by the maximum-of-minimum-distance approach described in [19]. The details are given in Fig. 5.

Fig. 5 The flow chart of feature point selection.

Step 1: Obtain the feature points sn={s1,s2,...,sN} with the SIFT algorithm;

Step 2: Find two points D1 and D2 in sn, where D1 is the left-most point and D2 is the right-most point;

Step 3: From the remaining points of sn, find the point D3 that has the largest total distance to D1 and D2;

Step 4: Based on the SVD method, calculate the rotation matrix and translation vector from {D1, D2, D3} and their corresponding points.
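Steps 1–3 above can be sketched as follows (a minimal sketch; the summed distance to D1 and D2 is used as the "largest distance" criterion of Step 3):

```python
import numpy as np

def select_three_points(points):
    """Select three well-spread feature points following Fig. 5:
    D1 = left-most point, D2 = right-most point, and D3 = the remaining
    point with the largest total distance to D1 and D2."""
    points = np.asarray(points, float)
    i1 = int(np.argmin(points[:, 0]))          # D1: left-most point
    i2 = int(np.argmax(points[:, 0]))          # D2: right-most point
    rest = [i for i in range(len(points)) if i not in (i1, i2)]
    totals = [np.linalg.norm(points[i] - points[i1]) +
              np.linalg.norm(points[i] - points[i2]) for i in rest]
    i3 = rest[int(np.argmax(totals))]          # D3: largest summed distance
    return i1, i2, i3

# Four candidate feature points; the well-separated one is chosen as D3.
pts = [[0.0, 0.0], [10.0, 0.0], [5.0, 8.0], [5.0, 1.0]]
d1, d2, d3 = select_three_points(pts)
```

Keeping the three points far apart improves the conditioning of the subsequent SVD fit.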

4. Reconstruction

The movement information described by the rotation matrix and translation vector can be introduced into the reconstruction model. By analyzing the influence of the movement on the fringe patterns, a fringe pattern description including the movement information was proposed in [13]. With this new fringe pattern description, a least-squares algorithm is employed in this paper to retrieve the correct phase value. The details are described as follows.

For N-step PSP, the fringe patterns including the movement information can be described by Eq. (9); the derivation can be found in [13]:

$$\tilde{d}_n(x,y) = a + b\cos\{\phi[f_n(x,y), g_n(x,y)] + \Phi(x,y) + 2\pi(n-1)/N\} \tag{9}$$
where d̃_n(x,y), n = 1, 2, ..., N, are the captured fringe patterns with the movement information; a is the ambient light intensity; b is the amplitude of the sinusoidal fringe patterns; ϕ(⋅) is the phase value on the reference plane; f_n(x,y) and g_n(x,y) are functions related to the rotation matrix and translation vector; and Φ(x,y) is the phase value caused by the height information. In Eq. (9), d̃_n(x,y) and ϕ[f_n(x,y), g_n(x,y)] are known, while a, b and Φ(x,y) are unknown. When N ≥ 3, Φ(x,y) can be obtained by the least-squares method as follows:

Rewrite Eq. (9) as

$$\tilde{d}_n(x,y) = a + B(x,y)\cos\delta + C(x,y)\sin\delta \tag{10}$$
In Eq. (10), the three new parameters are B(x,y) = b cos Φ(x,y), C(x,y) = b sin Φ(x,y) and δ = ϕ[f_n(x,y), g_n(x,y)] + 2π(n−1)/N. Denoting the measured fringe pattern as d_n^m(x,y), the sum of squared errors at each pixel is
$$S(x,y) = \sum_{n=1}^{N} \left[\tilde{d}_n(x,y) - d_n^m(x,y)\right]^2 \tag{11}$$
Based on the least-squares criterion, minimizing Eq. (11) gives
$$X(x,y) = A^{-1}(x,y)\, B(x,y) \tag{12}$$
where

$$A(x,y) = \begin{bmatrix} N & \sum_{n=1}^{N}\cos\delta & \sum_{n=1}^{N}\sin\delta \\ \sum_{n=1}^{N}\cos\delta & \sum_{n=1}^{N}\cos^2\delta & \frac{1}{2}\sum_{n=1}^{N}\sin 2\delta \\ \sum_{n=1}^{N}\sin\delta & \frac{1}{2}\sum_{n=1}^{N}\sin 2\delta & \sum_{n=1}^{N}\sin^2\delta \end{bmatrix}, \tag{13}$$
$$X(x,y) = \begin{bmatrix} a & B(x,y) & C(x,y) \end{bmatrix}^T, \tag{14}$$
$$B(x,y) = \begin{bmatrix} \sum_{n=1}^{N} d_n^m(x,y) & \sum_{n=1}^{N} d_n^m(x,y)\cos\delta & \sum_{n=1}^{N} d_n^m(x,y)\sin\delta \end{bmatrix}^T. \tag{15}$$

According to Eqs. (12)–(15), the unknown parameters a, B(x,y) and C(x,y) can be obtained, and the phase Φ(x,y) is determined by:

$$\Phi(x,y) = \tan^{-1}\left[C(x,y)/B(x,y)\right] \tag{16}$$
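The per-pixel least-squares solve of Eqs. (10)–(16) can be sketched as below (a NumPy sketch; δ is assumed to already contain the movement-corrected phase values ϕ[f_n, g_n] + 2π(n−1)/N, so the uneven phase shifts are absorbed into it):

```python
import numpy as np

def solve_phase_lsq(d_meas, delta):
    """Per-pixel least-squares solve of d_n = a + B*cos(delta_n) + C*sin(delta_n)
    for N >= 3 frames with known (movement-corrected) phases delta_n.
    d_meas, delta: (N, H, W) arrays.  Returns Phi = arctan(C / B)."""
    N = d_meas.shape[0]
    c, s = np.cos(delta), np.sin(delta)
    # Per-pixel entries of the normal matrix A(x,y) of Eq. (13).
    Sc, Ss = c.sum(0), s.sum(0)
    Scc, Sss = (c * c).sum(0), (s * s).sum(0)
    Scs = (c * s).sum(0)                     # = (1/2) * sum sin(2*delta)
    A = np.stack([np.stack([np.full_like(Sc, float(N)), Sc, Ss], -1),
                  np.stack([Sc, Scc, Scs], -1),
                  np.stack([Ss, Scs, Sss], -1)], -2)
    # Right-hand side B(x,y) of Eq. (15).
    rhs = np.stack([d_meas.sum(0), (c * d_meas).sum(0), (s * d_meas).sum(0)], -1)
    X = np.linalg.solve(A, rhs[..., None])[..., 0]   # X = [a, B, C] per pixel
    return np.arctan2(X[..., 2], X[..., 1])          # Eq. (16)

# Simulated 3-step capture with deliberately uneven phase shifts.
h, w = 4, 4
Phi = np.linspace(-1.0, 1.0, h * w).reshape(h, w)
shifts = np.array([0.0, 2 * np.pi / 3 + 0.1, 4 * np.pi / 3 - 0.05])
delta = shifts[:, None, None] + np.zeros((3, h, w))
d_meas = 100 + 50 * np.cos(delta - Phi)   # i.e. B = b*cos(Phi), C = b*sin(Phi)
Phi_hat = solve_phase_lsq(d_meas, delta)
```

Because δ carries the movement information, the solve remains exact even though the three shifts are no longer equally spaced.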

Based on the above, the automated 3D shape measurement of a rigid object with 2D movement can be implemented by the steps below:

Step 1: Based on N-step PSP, project N fringe patterns in red (or blue or green) onto the object surface and capture them with a color camera;

Step 2: Apply the SIFT algorithm to the blue component (or another component different from the projection color) of the captured object images; the feature points on the object and their correspondences across the images are obtained;

Step 3: Select three pairs of corresponding points according to the feature point selection method;

Step 4: Determine the rotation matrix and translation vector by the SVD algorithm;

Step 5: Calculate Φ(x,y) from the movement information and the red (projection color) component of the captured images by Eq. (16);

Step 6: Reconstruct the 3D information of the object.

5. Experiments

The experimental system includes a camera (Allied Vision Manta 504C) with a resolution of 2452 × 2056 and a projector (Wintech DLP PRO 4500) with a resolution of 912 × 1140. As the proposed algorithm separates the fringe pattern and the object image by means of the color camera, the wavelength of the projector and the spectral response of the camera are critical factors in the measurement system. In this paper, the wavelength of the projector is 613 nm; around this wavelength, the spectral response of the camera reaches its peak for red and its minimum for blue. Therefore, in the captured red fringe pattern image, the blue light is “filtered out”. Similarly, for the blue object image, the spectral response is minimal for the red fringe pattern and maximal for the blue ambient light, leaving a clear blue object image.

A plastic mask shown in Fig. 6(a) is used to verify the effectiveness of the proposed method. 3-step PSP is used and the object is moved randomly in two dimensions (including rotational and translational movement). During the experiment, three shifted fringe patterns are projected in red, as shown in Figs. 6(b)–6(d). From the first step to the second step, the object is rotated clockwise around its lower-left corner by approximately π/90 rad; from the second step to the third step, the object is moved downward by about 4 mm.

Fig. 6 Plastic mask used in the experiment. (a) The plastic mask used in the experiment; (b)–(d) The captured object fringe patterns for 3-step PSP with object movement.

The color camera captures the fringe patterns as shown in Fig. 7(a). The color image is then separated into its red, green and blue components. In the red component of the captured image, the fringe pattern information can be seen clearly, as shown in Fig. 7(b). Besides the projected fringe patterns, the ambient light reflected from the object surface is also captured by the camera. As the ambient light includes blue and green light, an object image without fringe patterns can be obtained from the other two components (blue and green). Figure 7(c) shows the object image obtained from the blue component. It should be noted that when the ambient light is weak, background light in blue or green should be added to the projected fringe patterns.

Fig. 7 Images in different components. (a) The captured image of the object; (b) The image of Fig. 7(a) in the red component; (c) The image of Fig. 7(a) in the blue component.

With the clear object images, the SIFT algorithm is applied to track the object movement. The object images obtained from the blue component are used in this experiment. The object position in the first step of PSP is used as the reference. The feature points are detected and tracked by the SIFT algorithm between the first step and each of the other two steps. That is, the feature points on the object images before and after movement are detected and their correspondence is found. Figure 8(a) shows the obtained feature points and the corresponding relationships. Then, using the selection method and SVD algorithm described in Section 3.2, the rotation matrix and translation vector are obtained.

Fig. 8 The result of the SIFT algorithm. (a) The feature points obtained by the SIFT algorithm and the corresponding relationship; (b) The mosaic result for the images in Fig. 8(a).

In order to verify the accuracy of the rotation matrix and translation vector, the object images from different steps are mosaicked using the rotation matrix and translation vector, as shown in Fig. 8(b). In Fig. 8(b), the object images from the first and second steps are mosaicked and the object parts match accurately. Please note that for objects with symmetric structures or feature-less surfaces, the SIFT algorithm may fail to retrieve the feature points correctly. In such cases, the proposed approach can employ other feature extraction algorithms to solve the object tracking issue.

The object is then reconstructed with the traditional PSP algorithm and the proposed algorithm, respectively. The results are shown in Fig. 9. The results obtained by the traditional PSP algorithm are shown in Figs. 9(a)–9(b); it is apparent that significant errors are introduced by the movement. Figures 9(c)–9(d) show the results of the proposed approach: the object is reconstructed well and the errors caused by the movement are removed.

Fig. 9 The reconstructed results with the traditional PSP and the proposed algorithm. (a) The front view of the result with the traditional PSP; (b) The mesh display of Fig. 9(a); (c) The front view of the result with the proposed algorithm; (d) The mesh display of Fig. 9(c).

As FTP requires only one fringe pattern to reconstruct the object, it is one of the most widely used algorithms for moving object measurement. We compared the performance of the proposed algorithm with that of the FTP algorithm; the results are presented in Fig. 10. The first captured fringe pattern in Fig. 6(b) is used in the FTP algorithm, and the reconstructed result is shown in Figs. 10(a)–10(b). Compared with the result of the proposed algorithm shown in Figs. 9(c)–9(d), the reconstructed result of the FTP algorithm has significant errors on the object surface. For a detailed inspection, Figs. 10(c)–10(d) present the cross sections obtained from the FTP algorithm and the proposed algorithm. The cross section of the proposed algorithm is clearly smoother than that obtained by the FTP algorithm. Please note that PSP is also more robust when ambient light leaks into the fringe patterns.

Fig. 10 The comparison result between the proposed algorithm and the FTP algorithm. (a) The front view of the result with FTP; (b) The mesh display of Fig. 10(a); (c) The cross section of the dashed line in Fig. 10(a) where x = 135; (d) The cross section of Fig. 9(c) where x = 135.

The accuracy of the proposed algorithm is evaluated by calculating the RMS (root mean square) measurement error for the above experiment. The traditional PSP algorithm applied to the static object provides the reference reconstruction. For the results shown in Figs. 9(a)–9(b), the RMS error is 65.961 mm; for the results shown in Figs. 9(c)–9(d), it is 0.0878 mm; and for the results shown in Figs. 10(a)–10(b), it is 1.273 mm. The proposed algorithm thus reduces the RMS error significantly in the 3D reconstruction of the moving object.

6. Conclusion

This paper proposes a new approach to automatically measure the 3D shape of a moving object based on PSP. The whole measurement process requires no human intervention: the object is tracked automatically during the measurement and the movement information is utilized in the reconstruction. During the measurement, the projector first projects the fringe patterns onto the object surface in a specific color (red, blue or green). A color camera then captures the fringe patterns reflected from the object surface. As the captured image has three components and the fringe pattern occupies one specific color, the fringe pattern information and the object surface information are separated into different components. From the object surface images, the object movement is tracked automatically by the SIFT algorithm. A new method to select the feature points obtained by SIFT is also proposed, which improves the efficiency of the rotation matrix and translation vector calculation. Finally, with the help of the movement information, a least-squares algorithm is proposed to reconstruct the object with high accuracy.

Funding

Natural Science Foundation of China (NSFC) (61705060, 61471178, 61405122).

References and links

1. M. Zhang, Q. Chen, T. Tao, S. Feng, Y. Hu, H. Li, and C. Zuo, “Robust and efficient multi-frequency temporal phase unwrapping: optimal fringe frequency and pattern sequence selection,” Opt. Express 25(17), 20381–20400 (2017). [CrossRef]   [PubMed]  

2. S. Zhang, D. Van Der Weide, and J. Oliver, “Superfast phase-shifting method for 3-D shape measurement,” Opt. Express 18(9), 9684–9689 (2010). [CrossRef]   [PubMed]  

3. Y. Xing, C. Quan, and C. Tay, “A modified phase-coding method for absolute phase retrieval,” Opt. Lasers Eng. 87(1), 97–102 (2016). [CrossRef]  

4. H. Cui, W. Liao, N. Dai, and X. Cheng, “Reliability-guided phase-unwrapping algorithm for the measurement of discontinuous three-dimensional objects,” Opt. Eng. 50(6), 063602 (2011). [CrossRef]  

5. Y. Ding, K. Peng, L. Lu, K. Zhong, and Z. Zhu, “Simplified fringe order correction for absolute phase maps recovered with multiple-spatial-frequency fringe projections,” Meas. Sci. Technol. 28(2), 025203 (2017). [CrossRef]  

6. Z. Zhang, S. Huang, S. Meng, F. Gao, and X. Jiang, “A simple, flexible and automatic 3D calibration method for a phase calculation-based fringe projection imaging system,” Opt. Express 21(10), 12218–12227 (2013). [CrossRef]   [PubMed]  

7. X. Su, W. Chen, Q. Zhang, and Y. Chao, “Dynamic 3-D shape measurement method based on FTP,” Opt. Lasers Eng. 36(1), 49–64 (2001). [CrossRef]  

8. S. Zhang, “Recent progresses on real-time 3-D shape measurement using digital fringe projection techniques,” Opt. Lasers Eng. 48(2), 149–158 (2010). [CrossRef]  

9. E. Hu and Y. He, “Surface profile measurement of moving objects by using an improved π phase-shifting Fourier transform profilometry,” Opt. Lasers Eng. 47(1), 57–61 (2009). [CrossRef]  

10. S. Zhang and S. Yau, “High-speed three-dimensional shape measurement system using a modified two-plus-one phase-shifting algorithm,” Opt. Eng. 46(11), 113603 (2007). [CrossRef]  

11. Y. Chen, Y. Cao, H. Yuan, and Y. Wan, “A stroboscopic online three-dimensional measurement for fast rotating object with binary dithered patterns,” Trans. Inst. Meas. Control, 1–9 (2017).

12. Y. Wang, S. Zhang, and J. H. Oliver, “3D shape measurement technique for multiple rapidly moving objects,” Opt. Express 19(9), 8539–8545 (2011). [CrossRef]   [PubMed]  

13. L. Lu, J. Xi, Y. Yu, and Q. Guo, “New approach to improve the accuracy of 3-D shape measurement of moving object using phase shifting profilometry,” Opt. Express 21(25), 30610–30622 (2013). [CrossRef]   [PubMed]  

14. Y. Hu, J. Xi, J. Chicharo, W. Cheng, and Z. Yang, “Inverse function analysis method for fringe pattern profilometry,” IEEE Trans. Instrum. Meas. 58(9), 3305–3314 (2009). [CrossRef]  

15. D. Lowe, “Distinctive Image Features from Scale-Invariant Keypoints,” Int. J. Comput. Vis. 60(2), 91–110 (2004). [CrossRef]  

16. M. Takeda, H. Ina, and S. Kobayashi, “Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry,” J. Opt. Soc. Am. 72(1), 156–160 (1982). [CrossRef]  

17. X. Hu, Y. Tang, and Z. Zhang, “Video object matching based on SIFT algorithm,” in Proceedings of International Conference on Neural Networks and Signal Processing (Academic, 2008), pp. 412–415.

18. K. S. Arun, T. S. Huang, and S. D. Blostein, “Least-Squares Fitting of Two 3-D Point Sets,” IEEE Trans. Pattern Anal. Mach. Intell. 9(5), 698–700 (1987). [CrossRef]   [PubMed]  

19. K. Wang, B. Cheng, and T. Long, “An Improved SIFT Feature Matching Algorithm Based on Maximizing Minimum Distance Cluster,” in Proceedings of International Conference on Computer Science and Information Technology (Academic, 2011), pp. 255–259.

