
A simple, flexible and automatic 3D calibration method for a phase calculation-based fringe projection imaging system

Open Access

Abstract

An important step of phase calculation-based fringe projection systems is 3D calibration, which builds up the relationship between an absolute phase map and 3D shape data. The existing 3D calibration methods are complicated and hard to implement in practical environments due to the requirement of a precise translating stage or gauge block. This paper presents a 3D calibration method which uses a white plate with discrete markers on the surface. Placing the plate at several random positions can determine the relationship of absolute phase and depth, as well as pixel position and X, Y coordinates. Experimental results and performance evaluations show that the proposed calibration method can easily build up the relationship between absolute phase map and 3D shape data in a simple, flexible and automatic way.

©2013 Optical Society of America

1. Introduction

Phase calculation-based fringe projection techniques are actively studied in academia and widely applied in industry because of their non-contact operation, full-field acquisition, high accuracy, fast data processing and low cost [1–3]. Such techniques project fringe patterns onto the measured object surface. When viewed from a different direction, the fringe patterns appear deformed by the shape of the object surface. By calculating the absolute phase of the deformed fringe pattern, the 3D shape of the object is obtained with high accuracy. Therefore, in order to obtain an accurate 3D shape, an important step is to build up the relationship between the absolute phase map and the 3D shape data, known as 3D calibration [4]. Existing 3D calibration methods are complicated and hard to implement in practical environments because a precise translating stage or a gauge block is required. Building up the relationship between the phase map and 3D shape data in a simple, flexible and automatic way, especially outside a laboratory environment, therefore remains a challenging problem.

The reported 3D calibration methods for phase calculation-based fringe projection systems can be categorized into model-based [4,5], polynomial [6,7] and least-squares [8,9] methods. All of these methods need an accurately translated plate or a standard gauge block. The plate must be accurately positioned using a high-precision translating stage, so the calibration procedure is complicated and mostly implemented on an optical breadboard in the laboratory. When a gauge block is used, its placement must satisfy certain conditions, and it is difficult to carry out the calibration procedure in the field. Therefore, the existing 3D calibration methods have complicated, hard-to-implement procedures that are limited to a laboratory environment.

We previously presented a calibration method for an uneven fringe projection 3D imaging system using a white plate with discrete markers of known separation on its surface [10]. Later, using the same white plate and a checkerboard, a general fringe projection 3D imaging system was accurately calibrated to build up the relationship between absolute phase and depth data [11]. Although the white plate and the checkerboard can be placed randomly in the measuring volume, there are still some disadvantages. Firstly, all the marker center positions were determined by manual operation, making the method labour intensive. Secondly, the relationship between pixel position and the transverse X, Y coordinates is not established, so it is not a full 3D calibration. Thirdly, it needs two calibration plates: a checkerboard and a white plate. Therefore, the existing calibration methods cannot automatically build up the relationship between the phase map and 3D shape in an actual application field.

This paper presents a simple, flexible and automatic 3D calibration method using a white plate with discrete hollow ring markers of known separation on its surface. The center location of each marker on the white plate is extracted automatically, without any interactive manual operation. The internal parameters of a CCD camera can be calibrated from the extracted marker locations. The 3D position of the plate is then calculated from the obtained internal parameters of the CCD camera and the known separation between markers. At each plate position, sinusoidal fringe patterns are projected onto the plate surface to obtain phase information. Therefore, the absolute phase and depth data of each pixel at the different plate positions can be obtained to calculate the coefficients of a polynomial function, which builds up the relationship between absolute phase and depth data. The transverse X and Y coordinates of each pixel position are also calibrated using the internal parameters of the CCD camera and the corresponding depth data. Consequently, the proposed method can easily build up the relationship between the phase map and 3D shape data, covering not only absolute phase and depth but also pixel positions and transverse coordinates, by using a single calibration artifact in a simple, flexible and automatic way.

2. Principle and method

3D calibration of a phase calculation-based fringe projection imaging system includes two conversions, from phase to depth and from pixel to coordinates, called depth calibration and transverse calibration, respectively. Depth calibration builds up the relationship between the absolute phase and the depth data, while transverse calibration establishes the relationship between the pixel positions and the X, Y coordinates. A white plate with a scattering surface was designed and manufactured for 3D calibration. There are 9 × 12 discrete black hollow ring markers on the surface, as illustrated in Fig. 1. The separation of neighboring markers along the row and column directions has the same value of 15 mm, with an accuracy of 1 µm.

Fig. 1 A photo of the manufactured white plate. There are 9 × 12 black hollow rings on the surface with a neighboring separation of 15 mm in the horizontal and vertical directions.

Before 3D calibration of the fringe projection system, eight internal parameters of the CCD camera, including two focal lengths (Fu and Fv), two principal point coordinates (Pu and Pv), and four radial and tangential image distortion coefficients (K1, K2, K3 and K4), need to be determined by capturing the white plate from several random positions. These eight parameters can be calculated from the extracted locations of all the hollow ring markers at the different plate positions [12,13].
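For illustration, this intrinsic calibration step can be reproduced with OpenCV's planar-target routine, which is equivalent in spirit to the Matlab toolbox used here. The sketch below is not the authors' implementation: the function name, the list of extracted ring centres and the image size are assumed inputs, and OpenCV returns the distortion coefficients in its own (k1, k2, p1, p2, k3) ordering rather than the K1–K4 notation above.

```python
import numpy as np
import cv2

# Known geometry of the plate: 9 x 12 ring centres on a 15 mm pitch (plate plane z = 0).
ROWS, COLS, PITCH_MM = 9, 12, 15.0
plate_points = np.array(
    [[c * PITCH_MM, r * PITCH_MM, 0.0] for r in range(ROWS) for c in range(COLS)],
    dtype=np.float32,
)

def calibrate_intrinsics(centre_lists, image_size):
    """centre_lists: one (108, 2) array of ring centres per plate pose,
    ordered to match plate_points. image_size: (width, height) in pixels."""
    obj = [plate_points] * len(centre_lists)
    img = [np.asarray(c, dtype=np.float32) for c in centre_lists]
    # Camera matrix holds (Fu, Fv, Pu, Pv); dist holds (k1, k2, p1, p2, k3) in OpenCV order.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj, img, image_size, None, None)
    return K, dist
```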

2.1 Marker locations

For each captured image of the white plate, the locations of all 9 × 12 markers are extracted automatically; this is repeated for each plate position. First, a threshold value is automatically selected for one captured image of the white plate without a projected fringe pattern, and all the pixels belonging to each marker are found by binarizing the captured image. An eight-neighborhood boundary tracking algorithm then extracts the inner and outer edges of each marker to obtain two rings. In order to determine the center of each marker accurately, an ellipse fitting algorithm is used to calculate the center of the two rings [14,15]. Figure 2 illustrates the obtained center of each marker as red dots.
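As an illustration of this pipeline, the OpenCV sketch below binarizes the plate image, traces the marker boundaries and fits ellipses to estimate the ring centres. It is a simplified stand-in for the procedure described above: Otsu's method replaces the automatic threshold selection, each ring edge is fitted separately rather than pairing the inner and outer edges of one ring, and the area limits are assumed values.

```python
import numpy as np
import cv2

def locate_ring_centres(gray, min_area=50, max_area=5000):
    """Detect hollow-ring markers in an image of the plate captured without fringes."""
    # Automatic threshold selection and binarization (Otsu as a stand-in); rings become white.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Boundary tracking: RETR_CCOMP returns both the outer edges and the inner (hole) edges.
    contours, hierarchy = cv2.findContours(binary, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)
    centres = []
    for cnt in contours:
        area = cv2.contourArea(cnt)
        if not (min_area < area < max_area) or len(cnt) < 5:
            continue  # reject noise and contours too small for an ellipse fit
        # Ellipse fitting gives a sub-pixel estimate of the ring centre.
        (cx, cy), (w, h), angle = cv2.fitEllipse(cnt)
        centres.append((cx, cy))
    return np.array(centres, dtype=np.float32)
```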

Fig. 2 Automatically located centers of all the markers on a captured image of the white plate, represented by red dots.

2.2 Depth calibration

The fringe pattern has a variable period when projected onto the reference plane because the optical axes of the imaging and projection devices are crossed. This effect is illustrated in Fig. 3 [11]. Therefore, the relationship between absolute phase and depth data is a complicated function of the pixel coordinates x and y and can be represented by the following equation [4]:

$$z=\frac{L_0}{\dfrac{2\pi L_0^{2}L\cos\theta}{P_0\,\Delta\phi(x,y)\,(L_0+x\cos\theta\sin\theta)^{2}}-\dfrac{L\cos\theta\sin\theta}{L_0+x\cos\theta\sin\theta}+1},\tag{1}$$
where z is the depth relative to the reference plane M, Δϕ(x,y) is the difference between the unwrapped absolute phase on the measured object and that on the plane M, L is the baseline between the CCD camera and the DLP projector, L0 is the working distance to M, θ is the angle between the optical axes of the projector and the camera, and P0 is the period of the projected fringe pattern on a virtual plane perpendicular to the projection axis. It is difficult and complicated to calibrate the system parameters L, L0, θ and P0 directly. Here, a polynomial-based calibration method is used to build up the relationship between absolute phase and depth data.

Fig. 3 Schematic of even fringe projection at the projector and the uneven fringe pattern on the reference plane M. M: reference plane.

In fact, Eq. (1) can be represented by the following polynomial equation

$$z(x,y)=\sum_{n=0}^{N} a_n(x,y)\,\Delta\phi(x,y)^{\,n},\tag{2}$$
where a0(x,y), a1(x,y), …, aN(x,y) are a coefficient set containing the system parameters. Because Eq. (2) depends on the x and y pixel coordinates, the coefficient set takes different values at different pixel positions. Therefore, a Look-Up Table (LUT) is needed to store the coefficients at each pixel position and build up the relationship between absolute phase and depth data.
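A minimal numpy sketch of this per-pixel fit is given below, assuming the absolute phase and depth values gathered from the plate poses are stacked into arrays; the fifth-order default anticipates the polynomial order chosen in Section 3.2, and the array and function names are assumptions rather than part of the original method's code.

```python
import numpy as np

def fit_depth_lut(phase_stack, depth_stack, order=5):
    """phase_stack, depth_stack: (n_poses, H, W) absolute phase and depth per plate pose.
    Returns an (order + 1, H, W) LUT of polynomial coefficients a_n(x, y)."""
    n_poses, H, W = phase_stack.shape
    lut = np.zeros((order + 1, H, W))
    for y in range(H):
        for x in range(W):
            # Fit z = sum_n a_n * dphi**n at this pixel (Eq. (2)); highest power first.
            lut[:, y, x] = np.polyfit(phase_stack[:, y, x], depth_stack[:, y, x], order)
    return lut

def phase_to_depth(phase_map, lut):
    """Evaluate the stored per-pixel polynomial over an absolute phase map (Horner form)."""
    z = np.zeros_like(phase_map)
    for coeff in lut:
        z = z * phase_map + coeff
    return z
```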

Depth calibration determines the coefficient set of the polynomial equation, which can be realized by placing the white plate at several different positions in the measuring volume. At each position, sinusoidal fringe patterns are projected onto the plate surface to give the absolute phase of each pixel. The center location [u, v] of each marker in the pixel coordinate system is obtained by the marker location method of subsection 2.1, so the pixel coordinates of all the markers at the different positions are known. The external parameters R and T of the white plate with respect to the CCD camera are then calculated from Eq. (3):

$$s\,[\,u \;\; v \;\; 1\,]^{T}=A\,[\,R \;\; T\,]\,[\,x_w \;\; y_w \;\; z_w \;\; 1\,]^{T},\tag{3}$$
where R is a matrix representing the three rotation angles, T = [Tx, Ty, Tz] is a vector representing the three linear translations, [xw, yw, zw] is the coordinate vector of a marker P on the white plate, [u, v] is the coordinate vector of P in the pixel coordinate system, A is the matrix of internal parameters of the CCD camera, s is an arbitrary scaling factor and [·]^T denotes transposition.

The world coordinates of each pixel point on the white plate are obtained by using the external parameters R and T, so the relative depth of each pixel to a reference plane is obtained. Therefore, the relationship between absolute phase and depth data at each pixel can be built up by the polynomial Eq. (2) to give an accurate depth calibration.
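The sketch below illustrates these two steps under simplifying assumptions: the external parameters of Eq. (3) are recovered with a standard perspective-n-point solver, and the plate plane expressed in the camera frame is intersected with each pixel's viewing ray to obtain a depth value. It assumes the detected ring centres are ordered to match the plate model and that pixel coordinates have already been undistorted; relating these camera-frame depths to the chosen reference plane is omitted.

```python
import numpy as np
import cv2

def plate_pose(plate_points_mm, ring_centres_px, K, dist):
    """Solve Eq. (3) for the external parameters R, T of the plate w.r.t. the camera."""
    ok, rvec, tvec = cv2.solvePnP(plate_points_mm, ring_centres_px, K, dist)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec

def plate_depth_map(R, tvec, K, height, width):
    """Camera-frame depth of the plate plane at every pixel (ideal pinhole, no distortion)."""
    normal = R[:, 2]                          # plate normal (the z_w axis) in camera coordinates
    d = float(normal @ tvec.ravel())          # plane equation: normal . X = d
    u, v = np.meshgrid(np.arange(width), np.arange(height))
    rays = np.linalg.inv(K) @ np.stack([u.ravel(), v.ravel(), np.ones(u.size)])
    scale = d / (normal @ rays)               # intersect each viewing ray with the plate plane
    return (scale * rays[2]).reshape(height, width)
```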

2.3 Transverse calibration

Transverse calibration determines the relationship between pixel coordinates and the X, Y coordinates. For an actual imaging system this relationship is nonlinear because of distortion in the imaging and projection lenses. Although the Matlab Camera Calibration Toolbox can correct the distortion of the captured images, some distortion from the projection lens of the DLP projector remains uncorrected. Transverse calibration also depends on the depth information obtained from the projected fringe patterns. Therefore, at each pixel position the following two polynomial equations are used to give a highly accurate relationship:

$$\begin{cases}x_r=a_0(u,v)\,z_r^{2}+b_0(u,v)\,z_r+c_0\\[2pt] y_r=a_1(u,v)\,z_r^{2}+b_1(u,v)\,z_r+c_1,\end{cases}\tag{4}$$
where a0, b0, c0, a1, b1, c1 are coefficient sets containing the system parameters, [u, v] is the coordinate vector of a point in the pixel coordinate system, and (xr, yr, zr) are the coordinates of the same point on the white plate in the reference coordinate system.

The same plate shown in Fig. 1 is used for the transverse calibration of the relationship between pixel positions and the X, Y coordinates. With the known separation between neighboring hollow ring markers and the obtained depth, (xr, yr, zr) of every pixel position in the reference coordinate system can be obtained by the procedure of subsection 2.2. The polynomial coefficients of each pixel can then be determined from the obtained (xr, yr, zr) and the pixel coordinates (u, v) of all the points on the white plate. Because the pixel positions are independent of each other, two LUTs are established to store the coefficients of each pixel and give a highly accurate relationship between pixel positions and the X, Y coordinates.
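A sketch of this per-pixel quadratic fit, under the same assumed array layout as the depth-calibration sketch above, could look as follows; the validity mask handles poses in which a pixel does not see the plate.

```python
import numpy as np

def fit_transverse_luts(xr, yr, zr, valid):
    """xr, yr, zr: (n_poses, H, W) reference coordinates of the plate at each pose.
    valid: boolean mask of pixels covered by the plate in each pose.
    Returns two (3, H, W) LUTs holding the (a, b, c) coefficients of Eq. (4) for x and y."""
    n_poses, H, W = zr.shape
    lut_x = np.full((3, H, W), np.nan)
    lut_y = np.full((3, H, W), np.nan)
    for y in range(H):
        for x in range(W):
            m = valid[:, y, x]
            if m.sum() < 3:        # a quadratic needs at least three plate poses at this pixel
                continue
            lut_x[:, y, x] = np.polyfit(zr[m, y, x], xr[m, y, x], 2)
            lut_y[:, y, x] = np.polyfit(zr[m, y, x], yr[m, y, x], 2)
    return lut_x, lut_y
```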

3. Experiments and results

3.1 Experimental system

The hardware of the fringe projection imaging system includes a portable DLP video projector and a color 3-CCD camera connected to a personal computer (PC) by FireWire (IEEE 1394), as illustrated in Fig. 4. The projector is from BenQ (model CP270) with a one-chip digital micro-mirror device (DMD) and a resolution of up to 1024 × 768 pixels (XGA). The red, green and blue colors are produced by rapidly spinning a color filter wheel in the projector while synchronously modifying the state of the DMD. The 3-CCD camera from Hitachi (model HV-F22F) has a resolution of 1360 × 1024 pixels and a standard zoom lens (Computar) with a focal length from 12 to 36 mm and an adjustable aperture. The personal computer provides system control; its graphics card is set up to drive two monitors, one for the DLP projector and the other for the control software and viewing the captured data. The system can generate red, green, blue or composite color fringe patterns. In the following experiments, green fringe patterns are used to demonstrate the calibration process.

Fig. 4 The hardware setup of the fringe projection imaging system, including a DLP projector, a color 3-CCD camera and a personal computer.

3.2 3D calibration

A white plate with 9 × 12 discrete black hollow ring markers was designed and manufactured by Ti-Times [16], as illustrated in Fig. 1. The separation of neighboring markers along the row and column directions has the same value of 15 mm. Theoretically, calibrating the camera parameters needs at least two captured images from different viewpoints. In order to cover the whole measuring volume and to determine the internal parameters of the CCD camera accurately, the plate was positioned at twenty-four random positions and orientations with a large angle between the imaging axis and the plate's normal. At each position, the CCD camera captured one image containing all 9 × 12 markers. Using the known marker separation in the twenty-four captured images, the internal parameters of the CCD camera were obtained with the Camera Calibration Toolbox for Matlab [13]. The radial and tangential distortions of the subsequently captured images were corrected using the obtained internal parameters.

To calibrate the fringe projection imaging system, the same plate is randomly placed in the measuring volume. Using more plate positions gives better coverage of the measuring volume and therefore a higher accuracy calibration, but takes more time for capturing and processing the fringe pattern image data. Therefore, thirty-nine positions were randomly chosen with the plate nearly perpendicular to the imaging axis of the fringe projection imaging system. At each position, twelve sinusoidal fringe patterns with the optimum fringe numbers of 100, 99 and 90 [17] were projected onto the plate surface for absolute phase calculation, and one image without a projected fringe pattern was captured for marker position determination. Fringes on the black hollow rings have low modulation, so the absolute phase data for those pixels were interpolated from the surrounding pixels.
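For reference, the wrapped phase of each fringe set can be computed with a standard phase-stepping formula before the optimum three-fringe-number method of [17] combines the three wrapped maps into an absolute phase map. The four-step sketch below is an assumption consistent with twelve patterns over three fringe numbers; the exact phase-step sequence used by the system is not stated here.

```python
import numpy as np

def wrapped_phase_4step(I):
    """I: (4, H, W) fringe intensities for phase shifts of 0, pi/2, pi, 3*pi/2.
    Returns the wrapped phase in (-pi, pi] and the fringe modulation."""
    num = I[3] - I[1]                            # 2*B*sin(phi)
    den = I[0] - I[2]                            # 2*B*cos(phi)
    phase = np.arctan2(num, den)                 # wrapped phase
    modulation = 0.5 * np.sqrt(num**2 + den**2)  # B, used to mask low-quality pixels
    return phase, modulation
```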

In principle, a higher polynomial order can provide more accurate depth data. However, a fifth-order fit was found to be sufficient to calibrate the system to the level achievable with the available phase resolution. The coefficient set of the polynomial at each pixel position in Eq. (2) is solved from the absolute phase and depth data. The pixel coordinates and the transverse coordinates on the plate from the thirty-nine images were used to determine the coefficient sets of the other two polynomials using Eq. (4). All the obtained coefficient sets are saved into three LUTs for later 3D measurement.
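Once the three LUTs are saved, a measurement reduces to per-pixel polynomial evaluation. The sketch below assumes the coefficient layouts of the earlier fitting sketches (highest-order coefficient first, as returned by numpy's polyfit); it is illustrative rather than the original implementation.

```python
import numpy as np

def reconstruct_3d(abs_phase, depth_lut, lut_x, lut_y):
    """Convert an absolute phase map into X, Y, Z using the three stored LUTs."""
    # Depth from the fifth-order per-pixel polynomial (Eq. (2)), Horner evaluation.
    z = np.zeros_like(abs_phase)
    for coeff in depth_lut:
        z = z * abs_phase + coeff
    # Transverse coordinates from the per-pixel quadratics (Eq. (4)).
    x = lut_x[0] * z**2 + lut_x[1] * z + lut_x[2]
    y = lut_y[0] * z**2 + lut_y[1] * z + lut_y[2]
    return x, y, z
```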

3.3 Quantitative evaluation

In order to evaluate the calibrated fringe projection imaging system, the same plate was placed on an accurate translating stage with a resolution of 1 µm. The plate was positioned at −18 mm, −6 mm, 6 mm and 18 mm with respect to the reference plane M. At each position, the depth data were calculated using the coefficient sets obtained from Eq. (2). The profile along one row near the middle is illustrated in Fig. 5 for the four positions. The measured average value, the absolute error (the absolute difference between the measured average distance and the distance given by the stage) and the standard deviation along the middle row for the four positions are listed in the 4th, 7th and 10th columns of Table 1, respectively.

Fig. 5 Measured depth along the middle row for the four positions of −18 mm, −6 mm, 6 mm and 18 mm in (a), (b), (c) and (d). The horizontal axis is the pixel position along the row direction (1 to 1024); the vertical axis is the reconstructed depth relative to the reference surface.

Table 1. Experimental results on the accurately positioned plate at −18 mm, −6 mm, 6 mm and 18 mm (unit: mm)

By using the coefficient sets obtained from Eq. (4) and the depth value at each plate position, the transverse X, Y coordinates of all the ring markers on the white plate can be calculated, so the measured distance between neighboring ring markers is obtained. Table 1 lists the average distance between the markers along the X and Y directions in the 2nd and 3rd columns, the absolute error between the measured distance and 15 mm in the 5th and 6th columns, and the standard deviation of the measured distance along the X and Y directions in the 8th and 9th columns. The experimental results at the four plate positions show that the proposed 3D calibration method accurately converts not only absolute phase into depth data but also pixel positions into transverse X and Y coordinates.

Another evaluation experiment was carried out by measuring a ‘step artifact’ with a set of steps of known geometry, illustrated with projected green fringes in Fig. 6(a). Figures 6(b) and 6(c) show the absolute phase map and the measured 3D shape data from the calibrated system, respectively. In order to evaluate the calibrated system quantitatively, the distance between neighboring steps was measured: all the obtained points on one step surface are fitted to a plane, and the measured distance between neighboring steps is the average distance of all the points on the adjacent step surface to the fitted plane. The actual and measured distances between neighboring steps, the absolute error (the absolute difference between the measured average distance and the actual distance) and the standard deviation are listed in Table 2. The maximum absolute error is 0.028 mm. These results again show that the proposed 3D calibration method accurately converts absolute phase into depth data.
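The step-distance evaluation can be reproduced with a least-squares plane fit, as sketched below; the point arrays for the two step surfaces are assumed to come from the reconstructed 3D data, and the function names are illustrative.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an (N, 3) point cloud; returns the unit normal and offset."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                      # direction of smallest variance = plane normal
    return normal, normal @ centroid

def step_height(points_a, points_b):
    """Mean distance (and its standard deviation) of step B's points to the plane of step A."""
    normal, offset = fit_plane(points_a)
    dist = np.abs(points_b @ normal - offset)
    return dist.mean(), dist.std()
```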

Fig. 6 The step artifact and the measured 3D shape data. (a) The step artifact with projected green fringes, (b) absolute phase map, and (c) the measured 3D shape.

Table 2. Experimental results on the measured step artifact (unit: mm)

3.4 Qualitative evaluation

A toy with a freeform surface was measured by the calibrated fringe projection imaging system. Twelve sinusoidal fringe patterns with the optimum fringe numbers of 100, 99 and 90 [17] were projected via the green channel onto the toy’s surface for absolute phase calculation. Figure 7(a) shows one of the captured fringe pattern images. Figure 7(b) illustrates the absolute phase map obtained with the optimum three-fringe-number selection method. During phase calculation, pixels with a modulation of less than 15 grey levels are marked as invalid and shown in black. The 3D shape data of the toy were obtained after converting the absolute phase map into depth and transverse data, as shown in Fig. 7(c).

Fig. 7 Measurement results on a toy with a freeform surface. (a) Photo of the toy with projected green fringes, (b) absolute phase map, and (c) the measured 3D shape.

4. Conclusions

A simple, flexible and automatic 3D calibration method has been developed for a phase calculation-based fringe projection imaging system. A white plate is used whose surface is marked with discrete hollow rings of known separation. By placing the plate at several random positions, the internal parameters of the CCD camera can be determined. The plate also provides absolute phase and depth data throughout the measuring volume, from which their relationship is built up by a polynomial function at each pixel position. At each plate position, the absolute phase of each pixel is calculated by projecting three fringe pattern sets with the optimum fringe numbers onto the plate surface. From the captured images, the locations of all the markers are automatically and accurately extracted without any manual operation, so the relative depth of each pixel to a chosen reference plane can be obtained. The coefficient set of the polynomial function is then calculated from the obtained absolute phase and depth data, and the relationship between pixel positions and X, Y coordinates is established from the parameters of the CCD camera and the obtained depth data. Four known depths of the plate and a ‘step artifact’ were tested, and the experimental results show the validity and flexibility of the proposed 3D calibration method. Therefore, the proposed method has the potential for accurate 3D calibration of phase calculation-based fringe projection imaging systems with minimal operator skill, in a robust, flexible and automatic manner.

In comparison with the existing 3D calibration methods, the proposed one has the following advantages: (1) simplicity: because only a white plate is required during calibration, the method can easily be carried out on a shop floor and is not limited to a laboratory environment; (2) flexibility: the white plate can be placed randomly in the measuring volume, without the need for an accurate translating stage; (3) automation: the locations of all the markers in the captured images are automatically and accurately extracted without any manual operation; (4) high accuracy and reliability: the white plate with hollow ring markers is produced with micrometer precision, so the measured data give highly accurate 3D shape.

Acknowledgments

The authors would like to thank the National Natural Science Foundation of China (61171048), Program for New Century Excellent Talents in University (NO: NECT-11-0932), the Key Project of Chinese Ministry of Education (No: 211016), Specialized Research Fund for the Doctoral Program of Higher Education (“SRFDP”) (No: 20111317120002), the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry (NO: 20101561), Research Project supported by Hebei Education Department (No: ZD2010121). This project is also funded by China Scholarship Council and EPSRC Centre for Innovative Manufacturing in Advanced Metrology.

References and links

1. F. Chen, G. M. Brown, and M. Song, “Overview of three-dimensional shape measurement using optical methods,” Opt. Eng. 39(1), 10–22 (2000).
2. F. Blais, “Review of 20 years of range sensor development,” J. Electron. Imaging 13(1), 231–240 (2004).
3. Z. H. Zhang, “Review of single-shot 3D shape measurement by phase calculation-based fringe projection techniques,” Opt. Lasers Eng. 50(8), 1097–1106 (2012).
4. Z. H. Zhang, D. P. Zhang, and X. Peng, “Performance analysis of a 3-D full-field sensor based on fringe projection,” Opt. Lasers Eng. 42(3), 341–353 (2004).
5. Q. Y. Hu, P. S. Huang, Q. L. Fu, and F. P. Chiang, “Calibration of a three-dimensional shape measurement system,” Opt. Eng. 42(2), 487–493 (2003).
6. P. R. Jia, J. Kofman, and C. English, “Comparison of linear and nonlinear calibration methods for phase-measuring profilometry,” Opt. Eng. 46(4), 043601 (2007).
7. M. Vo, Z. Wang, T. Hoang, and D. Nguyen, “Flexible calibration technique for fringe-projection-based three-dimensional imaging,” Opt. Lett. 35(19), 3192–3194 (2010).
8. L. Huang, P. S. K. Chua, and A. Asundi, “Least-squares calibration method for fringe projection profilometry considering camera lens distortion,” Appl. Opt. 49(9), 1539–1548 (2010).
9. H. Du and Z. Wang, “Three-dimensional shape measurement with an arbitrarily arranged fringe projection profilometry system,” Opt. Lett. 32(16), 2438–2440 (2007).
10. Z. H. Zhang, H. Y. Ma, S. X. Zhang, T. Guo, C. E. Towers, and D. P. Towers, “Simple calibration of a phase-based 3D imaging system based on uneven fringe projection,” Opt. Lett. 36(5), 627–629 (2011).
11. Z. H. Zhang, H. Y. Ma, T. Guo, S. X. Zhang, and J. P. Chen, “Simple, flexible calibration of phase calculation-based three-dimensional imaging system,” Opt. Lett. 36(7), 1257–1259 (2011).
12. Z. Y. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. 22(11), 1330–1334 (2000).
13. J.-Y. Bouguet, “Camera Calibration Toolbox for Matlab,” http://www.vision.caltech.edu/bouguetj/calib_doc/.
14. A. Fitzgibbon, M. Pilu, and R. B. Fisher, “Direct least square fitting of ellipses,” IEEE Trans. Pattern Anal. 21(5), 476–480 (1999).
15. Z. H. Zhang, S. S. Meng, and S. J. Huang, “Simple and flexible calibration method of a 3D imaging system based on fringe projection technique,” in Proceedings of the 16th International Conference on Mechatronics Technology, D. J. Dorantes-Gonzalez, ed. (Tianjin Foreign Language Electronic & Audio-Video Publishing House, Tianjin, China, 2012), pp. 101–105.
16. Ti-Times, http://www.ti-times.com/.
17. Z. H. Zhang, C. E. Towers, and D. P. Towers, “Time efficient color fringe projection system for 3-D shape and colour using optimum 3-frequency selection,” Opt. Express 14(14), 6444–6455 (2006).
