Abstract

Digital projectors are now standard components of fringe projection profilometry systems, projecting structured-light patterns onto the object surface to be measured, and the distortion of the projector lens must be calibrated and compensated accurately to satisfy the accuracy requirements of industrial applications. A novel method is proposed to accurately determine the projector pixel coordinates of the marker points of a calibration target by means of projective transforms. With this method, the projector can be calibrated with sub-pixel accuracy. The method is applicable to calibration targets with either a chessboard pattern or a circle pattern, and the calibration result is independent of the results of camera calibration. Experimental results demonstrate the effectiveness and validity of the proposed method.

© 2017 Optical Society of America

1. Introduction

Fringe projection profilometry (FPP) is an optical 3D scanning technology of interest to both researchers and engineers due to its rapid measurement, high spatial resolution, and high point density [1–3]. Digital light processing (DLP) projectors are low in cost and easy to program, and are now used as standard components of FPP systems to project structured-light patterns onto the object surface to be measured [4]. As the lens of a DLP projector is not ideal in practice, the projected patterns are distorted and additional phase errors are introduced into the absolute phase maps of the images of the projected patterns [5]; projector distortion is thus one of the main sources of uncertainty of an FPP system. Therefore, the lens distortion of the DLP projector must be calibrated and compensated accurately in order to obtain precise 3D profiles. The DLP projector can be considered as the inverse of a camera and described with the same mathematical model. As we cannot capture an image with the DLP projector directly, it must be calibrated with the help of the camera of the FPP system.

Several projector calibration methods have been proposed; they can be classified into two kinds. In the first kind, the projector parameters are determined with a calibrated camera [6,7], so the calibration error of the camera is accumulated and enlarged in the calibration of the projector.

In the second kind, fringe patterns are generated and projected onto a calibration target to determine the projector pixel coordinates of the marked points of the calibration target, i.e., corner points for chessboard patterns or circle centers for circle patterns [5,8–11]. The accuracy of these methods mainly relies on the mapping accuracy between the locations of the marked points in the image plane of the camera and the corresponding projector pixel coordinates, and is independent of the camera calibration results. The mapping is pixel-to-pixel (P2P) in the methods proposed in [5,8,9], so the calibration accuracy is limited. Huang et al. [10] proposed a method to improve the precision of the mapping to sub-pixel level, but it can only be applied with calibration targets of circle patterns. In this method, pixels on the circle edges of the image of the calibration target are extracted and mapped onto the digital micro-mirror device (DMD) of the projector, and the locations of the circle centers on the DMD are computed by least-squares fitting to achieve sub-pixel precision. However, it is well known that the projection of a circular target suffers from an eccentricity error [12]; in other words, the method has a theoretical flaw. Zhang et al. [11] proposed a projector calibration method using a camera optically coaxial with the projector. Similar to the methods proposed in [5,8,9], P2P mapping is utilized; moreover, it is difficult to align the camera and the projector in practice.

Different from the above two kinds of methods, Liu et al. [13] recently proposed a method that generates adaptive fringe patterns to compensate the lens distortion of the projector without projector calibration. As in the first kind of methods, however, the camera must be calibrated in advance, so the compensation result cannot be isolated from the influence of camera calibration errors.

In this paper, a novel method is proposed to improve the accuracy of projector calibration. The principles of projective geometry, including the projective invariance of the cross ratio, are employed to achieve sub-pixel mapping between the location of a marked point in the camera image and its corresponding projector pixel coordinates. The method places no restriction on the pattern type of the calibration target. Moreover, camera calibration is not a prerequisite for projector calibration, so the accuracy of camera calibration does not affect that of the proposed method. Projective invariance of the cross ratio has been employed in calibrating cameras [14,15] but, to the best of our knowledge, not in calibrating projectors.

The rest of the paper is organized as follows. Section 2 introduces the principle and procedure of the proposed calibration method. Experimental results are given in Section 3 to demonstrate the effectiveness of the method. The paper is summarized in Section 4.

2. Principle

In this section, the projector calibration model and the P2P method to determine the projector pixel coordinates of the marked points of the calibration target are introduced briefly, and our sub-pixel method is then presented in detail.

2.1 Projector calibration model

Similar to the camera, the projector can be described with the pinhole model plus radial and tangential lens distortion. The pinhole models of the camera and the projector, along with their coordinate systems and a calibration target, are shown in Fig. 1. Let $P$ be a corner point on the calibration target, let $P_w = (x, y, z, 1)^T$ denote the 3D homogeneous coordinates of the point in the world coordinate system, and let $P_p = (u_p, v_p, 1)^T$ denote the corresponding 2D homogeneous coordinates in the projector pixel coordinate system. The relationship between $P_w$ and $P_p$ can be described as

$$ s \begin{bmatrix} u_p \\ v_p \\ 1 \end{bmatrix} = \begin{bmatrix} f_u & 0 & u_{p0} \\ 0 & f_v & v_{p0} \\ 0 & 0 & 1 \end{bmatrix} [R, T] \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} \quad (1) $$
where $s$ is a scaling factor; $f_u$ and $f_v$ are the focal lengths of the projector measured in pixel widths and heights, respectively; $(u_{p0}, v_{p0})$ are the projector pixel coordinates of the principal point; and $R$ and $T$ are the rotation matrix and translation vector, respectively.
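As a minimal numerical illustration of Eq. (1), the sketch below projects a world point through the pinhole model; the values of $f_u$, $f_v$, $(u_{p0}, v_{p0})$, $R$, and $T$ are invented for illustration, not calibrated data:

```python
import numpy as np

# Hypothetical intrinsic matrix [[f_u, 0, u_p0], [0, f_v, v_p0], [0, 0, 1]]
K = np.array([[1200.0,    0.0, 456.0],
              [   0.0, 1200.0, 570.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                          # rotation matrix (identity for simplicity)
T = np.array([[0.0], [0.0], [500.0]])  # translation vector

def project(Pw, K, R, T):
    """Map a 3D world point to projector pixel coordinates per Eq. (1)."""
    Pc = R @ Pw.reshape(3, 1) + T      # world frame -> projector frame
    uvw = K @ Pc                       # apply intrinsics: s*[u_p, v_p, 1]^T
    return (uvw[:2] / uvw[2]).ravel()  # divide out the scaling factor s

up, vp = project(np.array([10.0, 20.0, 0.0]), K, R, T)
# up = 480.0, vp = 618.0
```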

Fig. 1 The pinhole model of the camera and the projector.

Considering radial and tangential distortion [16], the distorted projector pixel coordinates $P_d = (u_d, v_d, 1)$ can be expressed as

$$ P_d = (1 + k_1 r_p^2 + k_2 r_p^4) P_p + \begin{bmatrix} 2 p_1 u_p v_p + p_2 (r_p^2 + 2 u_p^2) \\ p_1 (r_p^2 + 2 v_p^2) + 2 p_2 u_p v_p \end{bmatrix} \quad (2) $$
where $r_p^2 = u_p^2 + v_p^2$; $k_1$ and $k_2$ are the coefficients of radial distortion; and $p_1$ and $p_2$ are the coefficients of tangential distortion. Thus, the projector calibration model is described by Eqs. (1) and (2). In this paper, only the intrinsic parameters ($f_u$, $f_v$, $u_{p0}$, $v_{p0}$) and the distortion coefficients ($k_1$, $k_2$, $p_1$, $p_2$) are calibrated.
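The distortion model can be sketched as follows, following the standard radial–tangential (Brown) form of Eq. (2); the coefficient and coordinate values are arbitrary illustrative numbers:

```python
def distort(up, vp, k1, k2, p1, p2):
    """Apply radial and tangential distortion to a point, per Eq. (2)."""
    r2 = up**2 + vp**2                  # r_p^2
    radial = 1.0 + k1 * r2 + k2 * r2**2
    ud = radial * up + 2.0 * p1 * up * vp + p2 * (r2 + 2.0 * up**2)
    vd = radial * vp + p1 * (r2 + 2.0 * vp**2) + 2.0 * p2 * up * vp
    return ud, vd

# Radial-only example: r^2 = 0.05, so each coordinate scales by 1.005.
ud, vd = distort(0.1, 0.2, 0.1, 0.0, 0.0, 0.0)   # ud ≈ 0.1005, vd ≈ 0.201
```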

The projector can be calibrated with a planar chessboard calibration target or a calibration target with a circle pattern following Zhang’s method [17]. As mentioned in Section 1, we must calibrate it with the help of a camera, and the camera image coordinates of the marked points of the calibration target are mapped onto the projector DMD to determine the projector pixel coordinates of the marked points. Then the intrinsic parameters and the distortion coefficients of the projector can be determined with Zhang’s method.

2.2 P2P mapping method

In order to obtain the projector pixel coordinates of the marked points, two groups of sinusoidal fringe patterns, one vertical and the other horizontal, are generated with a computer and projected onto the surface of the calibration target. The images of these projected fringe patterns are then captured. The intensity of the captured vertical fringe patterns can be expressed as

$$ I_{Vi}(u_c, v_c) = I'_V + I''_V \cos[\Phi_V(u_c, v_c) + \delta_i], \quad i = 1, 2, \ldots, N \quad (3) $$
and that of the captured horizontal fringe patterns can be expressed as
$$ I_{Hi}(u_c, v_c) = I'_H + I''_H \cos[\Phi_H(u_c, v_c) + \delta_i], \quad i = 1, 2, \ldots, N \quad (4) $$
where $(u_c, v_c)$ are camera image coordinates; $I'_V$ and $I'_H$ denote the vertical and horizontal average intensities, respectively; $I''_V$ and $I''_H$ denote the vertical and horizontal intensity modulations, respectively; $\Phi_V(u_c, v_c)$ and $\Phi_H(u_c, v_c)$ are the phases of the vertical and horizontal fringe patterns, respectively; and $\delta_i = 2\pi(i-1)/N$ is the phase-shifting amount.

The phases $\Phi_V(u_c, v_c)$ and $\Phi_H(u_c, v_c)$ can be retrieved as follows:

$$ \Phi_V(u_c, v_c) = \arctan\left[\frac{-\sum_{i=1}^{N} I_{Vi}(u_c, v_c)\sin\delta_i}{\sum_{i=1}^{N} I_{Vi}(u_c, v_c)\cos\delta_i}\right], \quad \Phi_H(u_c, v_c) = \arctan\left[\frac{-\sum_{i=1}^{N} I_{Hi}(u_c, v_c)\sin\delta_i}{\sum_{i=1}^{N} I_{Hi}(u_c, v_c)\cos\delta_i}\right] \quad (5) $$
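A minimal sketch of N-step phase retrieval at a single pixel; the intensity parameters and ground-truth phase are synthetic values invented for illustration, and `np.arctan2` is used so the wrapped phase lands in the correct quadrant:

```python
import numpy as np

N = 4
delta = 2.0 * np.pi * np.arange(N) / N      # phase shifts delta_i = 2*pi*(i-1)/N
phi_true = 1.234                            # ground-truth phase at one pixel
I = 0.5 + 0.3 * np.cos(phi_true + delta)    # synthetic intensities per Eq. (3)

# N-step least-squares phase retrieval: wrapped phase in (-pi, pi]
phi = np.arctan2(-np.sum(I * np.sin(delta)), np.sum(I * np.cos(delta)))
# phi recovers phi_true for any N >= 3
```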

The image coordinates of the marked points can be extracted from the image of the calibration target. Based on the extracted image coordinates, the absolute phases of the marked points can be obtained from the phase distributions $\Phi_V(u_c, v_c)$ and $\Phi_H(u_c, v_c)$. The projector pixel coordinates of a marked point are then calculated as follows:

$$ u_p = \frac{\Phi_V}{2\pi} T_V, \quad v_p = \frac{\Phi_H}{2\pi} T_H \quad (6) $$
where $\Phi_V$ and $\Phi_H$ are the absolute phases of the marked point in the vertical and horizontal directions, respectively, and $T_V$ and $T_H$ are the fringe widths (in projector pixels) in the two directions.
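For example, Eq. (6) maps an absolute phase to a projector pixel coordinate as follows; the fringe widths and phase values below are assumed for illustration:

```python
import numpy as np

T_V, T_H = 18.0, 18.0                      # fringe widths in projector pixels (assumed)
phi_V, phi_H = 12.0 * np.pi, 8.0 * np.pi   # absolute phases of one marked point

u_p = phi_V / (2.0 * np.pi) * T_V          # Eq. (6): 6 fringe periods * 18 px
v_p = phi_H / (2.0 * np.pi) * T_H          # Eq. (6): 4 fringe periods * 18 px
# u_p ≈ 108, v_p ≈ 72
```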

In Eq. (6), the projector pixel coordinates of the marked point are computed from absolute phases sampled at integer pixel locations in the camera image. Therefore, even though the image coordinates of the marked point can be extracted at sub-pixel level, the projector pixel coordinates are only determined with pixel-level accuracy. This yields inaccurate projector pixel coordinates and ultimately degrades the accuracy of projector calibration.

2.3 A novel sub-pixel mapping method

In this paper, the principles of projective geometry [18] are employed to determine the projector pixel coordinates of the marked point with sub-pixel accuracy. As shown in Fig. 1, the camera image and the projector "image" are both projective transforms of the same calibration target if the distortions of the camera and the projector are ignored. The cross ratio is essentially the only projective invariant of a quadruple of collinear points in projective geometry, and the cross ratio of a quadruple of collinear points $(P_1, P_2, P_3, P_4)$ is defined as

$$ (P_1, P_2; P_3, P_4) = \frac{\overline{P_1P_3}/\overline{P_2P_3}}{\overline{P_1P_4}/\overline{P_2P_4}} \quad (7) $$
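The invariance of Eq. (7) is easy to verify numerically. The sketch below pushes four collinear points through a hypothetical 1D projective map and checks that the cross ratio is preserved; the points and the map are arbitrary illustrative choices:

```python
import numpy as np

def cross_ratio(p1, p2, p3, p4):
    """Cross ratio (P1,P2;P3,P4) of four collinear points, per Eq. (7).
    Euclidean distances suffice here because the points are taken in order."""
    d = lambda a, b: np.linalg.norm(b - a)
    return (d(p1, p3) / d(p2, p3)) / (d(p1, p4) / d(p2, p4))

# Four collinear points in homogeneous coordinates and a 1D homography H.
pts = [np.array([x, 1.0]) for x in (0.0, 1.0, 2.0, 4.0)]
H = np.array([[2.0, 1.0], [1.0, 3.0]])      # arbitrary projective map
imgs = [(H @ p) / (H @ p)[1] for p in pts]  # transform and dehomogenize

cr_before = cross_ratio(*pts)               # 1.5
cr_after = cross_ratio(*imgs)               # also 1.5: unchanged by the map
```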

In Fig. 2(a), the quadruples $(P_1, P_2, P_3, P_4)$, $(P_{c1}, P_{c2}, P_{c3}, P_{c4})$, and $(P_{p1}, P_{p2}, P_{p3}, P_{p4})$ are each collinear, and $(P_{c1}, P_{c2}, P_{c3}, P_{c4})$ and $(P_{p1}, P_{p2}, P_{p3}, P_{p4})$ are the projective transforms of $(P_1, P_2, P_3, P_4)$ on the camera image plane and the projector DMD, respectively. Therefore, $(P_1, P_2; P_3, P_4) = (P_{c1}, P_{c2}; P_{c3}, P_{c4})$ and $(P_1, P_2; P_3, P_4) = (P_{p1}, P_{p2}; P_{p3}, P_{p4})$, so that $(P_{c1}, P_{c2}; P_{c3}, P_{c4}) = (P_{p1}, P_{p2}; P_{p3}, P_{p4})$.

Fig. 2 Projective geometry in FPP: (a) projective invariance of the cross ratio, (b) intersection of two lines.

In projective geometry, the intersection of the projective transforms of two lines is the projective transform of the intersection of the two lines. As shown in Fig. 2(b), two lines intersect at $P$ on the surface of the object to be measured. The camera images of the two lines intersect at $P_c$, and the projective transforms of the two lines on the projector DMD intersect at $P_p$. $P_c$ and $P_p$ are also the projective transforms of $P$ on the camera image plane and the projector DMD, respectively; thus $P_c$ and $P_p$ are corresponding points.

Generally, seven auxiliary points are utilized to obtain the projector pixel coordinates of the marker point in this method. The marker point $P_c$ and the auxiliary points ($A_c$ to $G_c$) in the camera image plane are shown in Fig. 3(a). The auxiliary points $A_c$ to $D_c$ are the four nearest integer pixels surrounding $P_c$, and $E_c$ is the intersection of lines $A_cC_c$ and $B_cD_c$. Lines $A_cC_c$ and $B_cD_c$ divide the area $A_cB_cC_cD_c$ into four subareas, i.e., I-IV. Suppose that $P_c$ lies inside subarea IV or on line $C_cD_c$, but not on line $E_cC_c$ or line $E_cD_c$. Then the coordinates of $F_c$ and $G_c$ can be determined by intersecting lines $A_cC_c$ and $B_cP_c$, and lines $B_cD_c$ and $A_cP_c$, respectively. Since $A_c$, $B_c$, $C_c$, and $D_c$ are integer pixel points, their mapping points onto the projector DMD, i.e., $A_p$, $B_p$, $C_p$, and $D_p$ shown in Fig. 3(b), can be determined by substituting the phases of the four points into Eq. (6).

Fig. 3 Schematic diagram of the sub-pixel mapping: (a) the marker point and auxiliary points in the camera image plane, (b) the mapping points on the projector DMD.

The mapping of a line in the camera image plane onto the projector DMD is distorted, as shown in Fig. 3(b). However, such distortion can be ignored locally if the region is sufficiently small. The region bounded by $A_pB_pC_pD_p$ is indeed quite small, as $A_c$ to $D_c$ are the four nearest integer pixels surrounding $P_c$; therefore, it is considered to be a projective transform of the region bounded by $A_cB_cC_cD_c$, and the intersection of lines $A_pC_p$ and $B_pD_p$, i.e., point $E_p$, is taken as the mapping point of $E_c$. The mapping points of $F_c$ and $G_c$, i.e., $F_p$ and $G_p$, lie on lines $A_pC_p$ and $B_pD_p$, respectively, with $(A_p, E_p; F_p, C_p) = (A_c, E_c; F_c, C_c)$ and $(B_p, E_p; G_p, D_p) = (B_c, E_c; G_c, D_c)$. The intersection of lines $A_pG_p$ and $B_pF_p$ is considered to be the mapping point of $P_c$.

Let $a = A_p$, $b = E_p - A_p$, $\bar{a} = B_p$, and $\bar{b} = E_p - B_p$, where $A_p$ to $G_p$ denote the projector pixel coordinates of the corresponding points. Then $A_p$ to $G_p$ can be described as:

$$ \begin{cases} A_p = a + \lambda_1 b, \; E_p = a + \lambda_2 b, \; F_p = a + \lambda_3 b, \; C_p = a + \lambda_4 b \\ B_p = \bar{a} + \tau_1 \bar{b}, \; E_p = \bar{a} + \tau_2 \bar{b}, \; G_p = \bar{a} + \tau_3 \bar{b}, \; D_p = \bar{a} + \tau_4 \bar{b} \end{cases} \quad (8) $$
where $\lambda_i$ and $\tau_i$ ($i = 1, \ldots, 4$) are coefficients. Apparently, $\lambda_1 = 0$, $\lambda_2 = 1$, $\tau_1 = 0$, and $\tau_2 = 1$. As the projector pixel coordinates of $A_p$ to $E_p$ are known, $\lambda_4$ and $\tau_4$ can be determined.

Substituting Eq. (8) into Eq. (7), the cross ratios can be reformulated as

$$ (A_p, E_p; F_p, C_p) = \frac{(\lambda_1 - \lambda_3)(\lambda_2 - \lambda_4)}{(\lambda_2 - \lambda_3)(\lambda_1 - \lambda_4)}, \quad (B_p, E_p; G_p, D_p) = \frac{(\tau_1 - \tau_3)(\tau_2 - \tau_4)}{(\tau_2 - \tau_3)(\tau_1 - \tau_4)} \quad (9) $$

As $(A_p, E_p; F_p, C_p) = (A_c, E_c; F_c, C_c)$ and $(B_p, E_p; G_p, D_p) = (B_c, E_c; G_c, D_c)$, and the cross ratios $(A_c, E_c; F_c, C_c)$ and $(B_c, E_c; G_c, D_c)$ are known, it is straightforward to determine the coefficients $\lambda_3$ and $\tau_3$ with Eq. (9). The coordinates of $F_p$ and $G_p$ can then be computed with Eq. (8). Finally, the coordinates of $P_p$ are determined as the intersection of lines $A_pG_p$ and $B_pF_p$.
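As a sketch of this computation, the snippet below solves Eq. (9) for $\lambda_3$ with $\lambda_1 = 0$ and $\lambda_2 = 1$, then recovers $F_p$ via Eq. (8); the point coordinates and cross-ratio value are hypothetical, not measured data:

```python
import numpy as np

def solve_lambda3(cr, lam4):
    """Solve Eq. (9) for lambda_3, using lambda_1 = 0 and lambda_2 = 1:
    cr = (0 - l3)(1 - l4) / ((1 - l3)(0 - l4))  =>  l3 = cr*l4 / (1 - l4 + cr*l4)."""
    return cr * lam4 / (1.0 - lam4 + cr * lam4)

Ap = np.array([100.0, 200.0])  # hypothetical mapped points on the DMD
Ep = np.array([100.6, 200.5])
b = Ep - Ap                    # direction vector, so E_p = A_p + 1*b
lam4 = 1.9                     # C_p = A_p + lam4*b, known from the Eq. (6) mapping
cr = 1.4                       # cross ratio (Ac,Ec;Fc,Cc) measured in the camera image
lam3 = solve_lambda3(cr, lam4)
Fp = Ap + lam3 * b             # sub-pixel location of F_p via Eq. (8)
```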

Two kinds of degenerate cases, shown in Fig. 4, must be considered in computing the mapping of $P_c$: (i) $P_c$ lies on $E_cC_c$; (ii) $P_c$ lies on $E_cD_c$. In the first case, the coordinates of $P_p$ can be determined directly from $(A_p, E_p; P_p, C_p) = (A_c, E_c; P_c, C_c)$; in the second case, from $(B_p, E_p; P_p, D_p) = (B_c, E_c; P_c, D_c)$.

Fig. 4 Degenerate cases: (a) case 1 in the camera image plane, (b) case 2 in the camera image plane, (c-d) mapping points on the projector DMD.

If the marked point $P_c$ lies in one of the other subareas, i.e., I-III, its projector pixel coordinates can be determined in a similar way. As the projector pixel coordinates of the marked point are computed from those of the four nearest integer pixels, sub-pixel accuracy is achieved. After the projector pixel coordinates of all marked points are determined, the projector can be calibrated accurately.

Since the projector pixel coordinates of the marked points are computed directly from their image coordinates and the absolute phases, camera calibration is not a prerequisite of the proposed method, as mentioned in Section 1. This means that the calibration accuracy of the proposed method is not influenced by the result of camera calibration.

2.4 Procedures of projector calibration

The complete calibration procedure with the proposed method can be summarized into the following steps:

  • 1) Fix a calibration target with a fixture and take an image using the camera.
  • 2) Fix a white plate in the same position. Then project two sets of fringe patterns, one horizontal and the other vertical, onto the white plate and capture the images of these fringe patterns.
  • 3) Randomly change the pose of the calibration target, and repeat steps 1 and 2 to acquire at least three groups of images.
  • 4) For each group of images, extract the camera image coordinates of the marker points in the calibration target image and calculate the absolute phase maps from the images of the fringe patterns.
  • 5) Compute the projector pixel coordinates of each marker point with the method given in Section 2.3.
  • 6) Estimate the intrinsic parameters and the distortion coefficients of the projector with Zhang’s method.

3. Experiments and results

Several experiments have been carried out to demonstrate the validity of the proposed method. The experimental system mainly consists of a DLP projector (LightCrafter 4500, resolution 912 × 1140 pixels), a CCD camera (Guppy PRO F-125B/C, resolution 1292 × 964 pixels), two planar calibration targets (one with a chessboard pattern and the other with a circle pattern), and a white ceramic plate.

In an arbitrary position, one image of the calibration target with a chessboard pattern and 32 images of the projected fringe patterns, 16 horizontal and 16 vertical, are captured. The image of the calibration target and two images of the fringe patterns, one vertical and the other horizontal, are shown in Figs. 5(a), 5(b), and 5(c), respectively. The corner points in the image of the calibration target are extracted as marked points; 336 marked points are extracted in total. The absolute phase maps of the vertical and horizontal fringe patterns are also computed, and the results are shown in Figs. 5(d) and 5(e). In total, three groups of images of the calibration target and the fringe patterns are captured in three different poses for the projector calibration, and the above computation is repeated for each group.

Fig. 5 Camera images and absolute phase maps: (a) the chessboard calibration target, (b-c) the images of the projected fringe patterns in vertical and horizontal directions, respectively, (d-e) marker points and absolute phase maps in two different directions.

The intrinsic parameters and distortion coefficients of the projector are calibrated with the proposed method and with the P2P mapping based method, respectively. The calibration results are listed in Tables 1 and 2; the standard errors of the calibration results of our method are much smaller than those of the P2P mapping based method.

Table 1. Calibrated intrinsic parameters with standard error (Unit: pixel)

Table 2. Calibrated coefficients of lens distortion with standard errors

In projector calibration, the re-projection error (RPE) is generally adopted to measure the agreement between the calibration data and the calibrated parameters [8–11]. The RPE distributions of the P2P method and the proposed method are shown in Figs. 6(a) and 6(b), respectively, and Fig. 6(c) shows a partial enlargement of Fig. 6(b). Compared with the P2P method, the RPE of the proposed method is reduced from (0.3927, 0.2868) to (0.0550, 0.0344). The experimental results demonstrate that the accuracy of projector calibration can be improved significantly with our method.

Fig. 6 Re-projection error comparison (with a chessboard pattern): (a) P2P method, (b) our proposed method, (c) partial enlargement of our proposed method.

In order to compare with the circle fitting method proposed in [10], the projector is also calibrated with a calibration target bearing a circle pattern, shown in Fig. 7(a); the circles labeled with a cross at their centers are utilized in calibrating the projector. The circle centers, circle edges, and absolute phase maps in the two directions are shown in Figs. 7(b) and 7(c). The RPE distributions of the P2P method, the circle fitting method, and our method are shown in Figs. 7(d), 7(e), and 7(f). The projector calibration accuracy is improved by both the circle fitting method and our method, with our method slightly outperforming the circle fitting method.

Fig. 7 Re-projection error comparison (with a circle pattern): (a) the calibration target, (b-c) circle centers, circle edges and absolute phase maps in two different directions, respectively, (d-f) re-projection errors of P2P method, circle fitting method and our proposed method, respectively.

Based on the calibrated distortion coefficients, a compensation phase can be calculated and added to the initial fringe patterns to reduce the phase errors caused by the lens distortion of the projector. After compensation, the residual phase errors of the absolute phase map on the white ceramic plate are evaluated to compare the accuracy of the P2P method, the circle fitting method, and our method. The compensation result of the adaptive fringe pattern method proposed in [13], which reduces projector distortion without projector calibration, is also shown for comparison.

The image of the white ceramic plate under white illumination is shown in Fig. 8(a), and the absolute phase map is shown in Fig. 8(b). In this experiment, Zhang's method [17] has been employed to compensate the phase error caused by the camera lens distortion, so the remaining phase errors are mainly caused by the distortion of the projector lens.

Fig. 8 Comparison of phase error compensation: (a) a ceramic plate with white illumination, (b) absolute phase map of the plate, (c) without compensation, (d-g) compensated results by using the P2P method, the circle fitting method, the adaptive fringe pattern method and our proposed method, respectively.

The phase error distribution without compensation is shown in Fig. 8(c), and the phase error distributions after compensation with the P2P method, the circle fitting method, the adaptive fringe pattern method, and our proposed method are shown in Figs. 8(d)-8(g), respectively.

After compensation with our method, the maximum phase error is reduced from 0.1127 rad (without compensation) to 0.019 rad. By contrast, the maximum phase errors of the P2P method, the circle fitting method, and the adaptive fringe pattern method are reduced to 0.055 rad, 0.024 rad, and 0.037 rad, respectively. The phase errors caused by projector lens distortion are thus reduced most effectively by our method.

As for the method proposed in [11], it is difficult to align the camera and the projector in practice, so no result is given for comparison.

4. Conclusion

In this paper, a novel method with sub-pixel accuracy has been proposed to calibrate the projector of a fringe projection profilometry system. Experimental results demonstrate that the phase errors caused by projector lens distortion can be reduced effectively with the proposed method, and that the method outperforms existing methods. Moreover, there is no constraint on the pattern type of the calibration target, and the calibration result is independent of the result of camera calibration.

Funding

Introducing Talents of Discipline to Universities (B12019); National Natural Science Foundation of China (NSFC) (51375137); National Key Scientific Apparatus Development Project (2013YQ220893).

References and links

1. F. Chen, G. W. Brown, and M. Song, "Overview of the three-dimensional shape measurement using optical methods," Opt. Eng. 39(1), 10–22 (2000).

2. S. S. Gorthi and P. Rastogi, "Fringe projection techniques: Whither we are?" Opt. Lasers Eng. 48(2), 133–140 (2010).

3. Z. Y. Wang, D. A. Nguyen, and J. C. Barnes, "Some practical considerations in fringe projection profilometry," Opt. Lasers Eng. 48(2), 218–225 (2010).

4. D. Li, C. Liu, and J. Tian, "Telecentric 3D profilometry based on phase-shifting fringe projection," Opt. Express 22(26), 31826–31835 (2014).

5. Z. W. Li, Y. S. Shi, C. J. Wang, and Y. Y. Wang, "Accurate calibration method for a structured light system," Opt. Eng. 47(5), 053604 (2008).

6. S. Zhang and R. Chung, "Use of LCD panel for calibrating structured-light-based range sensing system," IEEE Trans. Instrum. Meas. 57(11), 2623–2630 (2008).

7. J. Lu, R. Mo, H. Sun, and Z. Chang, "Flexible calibration of phase-to-height conversion in fringe projection profilometry," Appl. Opt. 55(23), 6381–6388 (2016).

8. S. Zhang, "Novel method for structured light system calibration," Opt. Eng. 45(8), 083601 (2006).

9. H. Anwar, I. Din, and K. Park, "Projector calibration for 3D scanning using virtual target images," Int. J. Precis. Eng. Manuf. 13(1), 125–131 (2012).

10. Z. R. Huang, J. T. Xi, Y. G. Yu, and Q. H. Guo, "Accurate projector calibration based on a new point-to-point mapping relationship between the camera and projector images," Appl. Opt. 54(3), 347–356 (2015).

11. S. Huang, L. Xie, Z. Wang, Z. Zhang, F. Gao, and X. Jiang, "Accurate projector calibration method by using an optical coaxial camera," Appl. Opt. 54(4), 789–795 (2015).

12. D. He, X. Liu, X. Peng, Y. Ding, and B. Z. Gao, "Eccentricity error identification and compensation for high-accuracy 3D optical measurement," Meas. Sci. Technol. 24(7), 075402 (2013).

13. J. Peng, X. Liu, D. Deng, H. Guo, Z. Cai, and X. Peng, "Suppression of projector distortion in phase-measuring profilometry by projecting adaptive fringe patterns," Opt. Express 24(19), 21846–21860 (2016).

14. L. Xu, L. Chen, X. Li, and T. He, "Projective rectification of infrared images from air-cooled condenser temperature measurement by using projection profile features and cross-ratio invariability," Appl. Opt. 53(28), 6482–6493 (2014).

15. D. Li, G. Wen, B. W. Hui, S. Qiu, and W. Wang, "Cross-ratio invariant based line scan camera geometric calibration with static linear data," Opt. Lasers Eng. 62(6), 119–125 (2014).

16. R. Y. Tsai, "A versatile camera calibration technique for high accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses," IEEE J. Robot. Autom. 3(4), 323–344 (1987).

17. Z. Zhang, "A flexible new technique for camera calibration," IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000).

18. E. Casas-Alvero, Analytic Projective Geometry (European Mathematical Society, 2014).

[Crossref]

Li, X.

Li, Z. W.

Z. W. Li, Y. S. Shi, C. J. Wang, and Y. Y. Wang, “Accurate calibration method for a structured light system,” Opt. Eng. 47(5), 053604 (2008).
[Crossref]

Liu, C.

Liu, X.

J. Peng, X. Liu, D. Deng, H. Guo, Z. Cai, and X. Peng, “Suppression of projector distortion in phase-measuring profilometry by projecting adaptive fringe patterns,” Opt. Express 24(19), 21846–21860 (2016).
[Crossref] [PubMed]

D. He, X. Liu, X. Peng, Y. Ding, and B. Z. Gao, “Eccentricity error identification and compensation for high-accuracy 3D optical measurement,” Meas. Sci. Technol. 24(7), 075402 (2013).
[Crossref] [PubMed]

Lu, J.

Mo, R.

Nguyen, D. A.

Z. Y. Wang, D. A. Nguyen, and J. C. Barnes, “Some practical considerations in fringe projection profilometry,” Opt. Lasers Eng. 48(2), 218–225 (2010).
[Crossref]

Park, K.

H. Anwar, I. Din, and K. Park, “Projector calibration for 3D scanning using virtual target images,” Int. J. Precis. Eng. Manuf. 13(1), 125–131 (2012).
[Crossref]

Peng, J.

Peng, X.

J. Peng, X. Liu, D. Deng, H. Guo, Z. Cai, and X. Peng, “Suppression of projector distortion in phase-measuring profilometry by projecting adaptive fringe patterns,” Opt. Express 24(19), 21846–21860 (2016).
[Crossref] [PubMed]

D. He, X. Liu, X. Peng, Y. Ding, and B. Z. Gao, “Eccentricity error identification and compensation for high-accuracy 3D optical measurement,” Meas. Sci. Technol. 24(7), 075402 (2013).
[Crossref] [PubMed]

Qiu, S.

D. Li, G. Wen, B. W. Hui, S. Qiu, and W. Wang, “Cross-ratio invariant based line scan camera geometric calibration with static linear data,” Opt. Lasers Eng. 62(6), 119–125 (2014).
[Crossref]

Rastogi, P.

S. S. Gorthi and P. Rastogi, “Fringe projection techniques: Whither we are?” Opt. Lasers Eng. 48(2), 133–140 (2010).
[Crossref]

Shi, Y. S.

Z. W. Li, Y. S. Shi, C. J. Wang, and Y. Y. Wang, “Accurate calibration method for a structured light system,” Opt. Eng. 47(5), 053604 (2008).
[Crossref]

Song, M.

F. Chen, G. W. Brown, and M. Song, “Overview of the three-dimentional shape measurement using optical methods,” Opt. Eng. 39(1), 10–22 (2000).
[Crossref]

Sun, H.

Tian, J.

Tsai, R. Y.

R. Y. Tsai, “A versatile camera calibration technique for high accuracy 3D machine vision metrology using off-the shelf TV cameras and lenses,” IEEE J. Robot. Autom. 3(4), 323–344 (1987).
[Crossref]

Wang, C. J.

Z. W. Li, Y. S. Shi, C. J. Wang, and Y. Y. Wang, “Accurate calibration method for a structured light system,” Opt. Eng. 47(5), 053604 (2008).
[Crossref]

Wang, W.

D. Li, G. Wen, B. W. Hui, S. Qiu, and W. Wang, “Cross-ratio invariant based line scan camera geometric calibration with static linear data,” Opt. Lasers Eng. 62(6), 119–125 (2014).
[Crossref]

Wang, Y. Y.

Z. W. Li, Y. S. Shi, C. J. Wang, and Y. Y. Wang, “Accurate calibration method for a structured light system,” Opt. Eng. 47(5), 053604 (2008).
[Crossref]

Wang, Z.

Wang, Z. Y.

Z. Y. Wang, D. A. Nguyen, and J. C. Barnes, “Some practical considerations in fringe projection profilometry,” Opt. Lasers Eng. 48(2), 218–225 (2010).
[Crossref]

Wen, G.

D. Li, G. Wen, B. W. Hui, S. Qiu, and W. Wang, “Cross-ratio invariant based line scan camera geometric calibration with static linear data,” Opt. Lasers Eng. 62(6), 119–125 (2014).
[Crossref]

Xi, J. T.

Xie, L.

Xu, L.

Yu, Y. G.

Zhang, S.

S. Zhang and R. Chung, “Use of LCD panel for calibrating structured-light-based range sensing system,” IEEE Trans. Instrum. Meas. 57(11), 2623–2630 (2008).
[Crossref]

S. Zhang, “Novel method for structured light system calibration,” Opt. Eng. 45(8), 083601 (2006).
[Crossref]

Zhang, Z.

Appl. Opt. (4)

IEEE J. Robot. Autom. (1)

R. Y. Tsai, “A versatile camera calibration technique for high accuracy 3D machine vision metrology using off-the shelf TV cameras and lenses,” IEEE J. Robot. Autom. 3(4), 323–344 (1987).
[Crossref]

IEEE Trans. Instrum. Meas. (1)

S. Zhang and R. Chung, “Use of LCD panel for calibrating structured-light-based range sensing system,” IEEE Trans. Instrum. Meas. 57(11), 2623–2630 (2008).
[Crossref]

IEEE Trans. Pattern Anal. Mach. Intell. (1)

Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000).
[Crossref]

Int. J. Precis. Eng. Manuf. (1)

H. Anwar, I. Din, and K. Park, “Projector calibration for 3D scanning using virtual target images,” Int. J. Precis. Eng. Manuf. 13(1), 125–131 (2012).
[Crossref]

Meas. Sci. Technol. (1)

D. He, X. Liu, X. Peng, Y. Ding, and B. Z. Gao, “Eccentricity error identification and compensation for high-accuracy 3D optical measurement,” Meas. Sci. Technol. 24(7), 075402 (2013).
[Crossref] [PubMed]

Opt. Eng. (3)

S. Zhang, “Novel method for structured light system calibration,” Opt. Eng. 45(8), 083601 (2006).
[Crossref]

F. Chen, G. W. Brown, and M. Song, “Overview of the three-dimentional shape measurement using optical methods,” Opt. Eng. 39(1), 10–22 (2000).
[Crossref]

Z. W. Li, Y. S. Shi, C. J. Wang, and Y. Y. Wang, “Accurate calibration method for a structured light system,” Opt. Eng. 47(5), 053604 (2008).
[Crossref]

Opt. Express (2)

Opt. Lasers Eng. (3)

D. Li, G. Wen, B. W. Hui, S. Qiu, and W. Wang, “Cross-ratio invariant based line scan camera geometric calibration with static linear data,” Opt. Lasers Eng. 62(6), 119–125 (2014).
[Crossref]

S. S. Gorthi and P. Rastogi, “Fringe projection techniques: Whither we are?” Opt. Lasers Eng. 48(2), 133–140 (2010).
[Crossref]

Z. Y. Wang, D. A. Nguyen, and J. C. Barnes, “Some practical considerations in fringe projection profilometry,” Opt. Lasers Eng. 48(2), 218–225 (2010).
[Crossref]

Other (1)

E. Casas-Alvero, Analytic Projective Geometry (European Mathematical Society, 2014).



Figures (8)

Fig. 1 The pinhole model of the camera and the projector.
Fig. 2 Projective geometry in FPP: (a) projective invariance of the cross ratio, (b) intersection of two lines.
Fig. 3 Schematic diagram of the sub-pixel mapping: (a) the marker point and auxiliary points in the camera image plane, (b) the mapping points on the projector DMD.
Fig. 4 Degenerate cases: (a) case 1 in the camera image plane, (b) case 2 in the camera image plane, (c-d) mapping points on the projector DMD.
Fig. 5 Camera images and absolute phase maps: (a) the chessboard calibration target, (b-c) the images of the projected fringe patterns in the vertical and horizontal directions, respectively, (d-e) marker points and absolute phase maps in two different directions.
Fig. 6 Re-projection error comparison (with a chessboard pattern): (a) the P2P method, (b) our proposed method, (c) partial enlargement of our proposed method.
Fig. 7 Re-projection error comparison (with a circle pattern): (a) the calibration target, (b-c) circle centers, circle edges and absolute phase maps in two different directions, respectively, (d-f) re-projection errors of the P2P method, the circle fitting method and our proposed method, respectively.
Fig. 8 Comparison of phase error compensation: (a) a ceramic plate under white illumination, (b) absolute phase map of the plate, (c) without compensation, (d-g) compensated results using the P2P method, the circle fitting method, the adaptive fringe pattern method and our proposed method, respectively.

Tables (2)

Table 1 Calibrated intrinsic parameters with standard error (Unit: pixel)

Table 2 Calibrated coefficients of lens distortion with standard errors

Equations (9)

$$s\begin{bmatrix} u_p \\ v_p \\ 1 \end{bmatrix} = \begin{bmatrix} f_u & 0 & u_{p0} \\ 0 & f_v & v_{p0} \\ 0 & 0 & 1 \end{bmatrix}\left[\mathbf{R}\ \ \mathbf{T}\right]\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} \tag{1}$$
$$\mathbf{P}_d = \left(1 + k_1 r_p^2 + k_2 r_p^4\right)\mathbf{P}_p + \begin{bmatrix} 2p_1 u_p v_p + p_2\left(r_p^2 + 2u_p^2\right) \\ 2p_2 u_p v_p + p_1\left(r_p^2 + 2v_p^2\right) \end{bmatrix} \tag{2}$$
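The lens-distortion model above can be illustrated with a minimal numerical sketch (the function name and scalar interface are ours, not the paper's implementation; the conventional Brown-Conrady form, with $v_p^2$ in the second tangential term, is assumed):

```python
import numpy as np

def distort(p, k1, k2, p1, p2):
    """Apply radial (k1, k2) and tangential (p1, p2) lens distortion
    to a normalized image point p = (u, v); returns the distorted point."""
    u, v = p
    r2 = u * u + v * v
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    du = 2.0 * p1 * u * v + p2 * (r2 + 2.0 * u * u)   # tangential, x
    dv = 2.0 * p2 * u * v + p1 * (r2 + 2.0 * v * v)   # tangential, y
    return np.array([radial * u + du, radial * v + dv])
```

With all coefficients zero the mapping reduces to the identity, which gives a quick sanity check of any implementation.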
$$I_{Vi}(u_c, v_c) = I_V' + I_V''\cos\left[\Phi_V(u_c, v_c) + \delta_i\right],\quad i = 1, 2, \ldots, N \tag{3}$$
$$I_{Hi}(u_c, v_c) = I_H' + I_H''\cos\left[\Phi_H(u_c, v_c) + \delta_i\right],\quad i = 1, 2, \ldots, N \tag{4}$$
$$\Phi_V(u_c, v_c) = \arctan\!\left[\frac{\sum_{i=1}^{N} I_{Vi}(u_c, v_c)\sin\delta_i}{\sum_{i=1}^{N} I_{Vi}(u_c, v_c)\cos\delta_i}\right],\qquad \Phi_H(u_c, v_c) = \arctan\!\left[\frac{\sum_{i=1}^{N} I_{Hi}(u_c, v_c)\sin\delta_i}{\sum_{i=1}^{N} I_{Hi}(u_c, v_c)\cos\delta_i}\right] \tag{5}$$
$$u_p = \frac{\Phi_V}{2\pi}T_V,\qquad v_p = \frac{\Phi_H}{2\pi}T_H \tag{6}$$
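The phase-retrieval and phase-to-projector-coordinate relations above can be sketched as follows (a hedged illustration with our own function names; it uses the convention $I_i = I' + I''\cos(\Phi - \delta_i)$, for which the arctangent ratio recovers $\Phi$ directly, whereas with the $+\delta_i$ convention the sign of the result is flipped):

```python
import numpy as np

def wrapped_phase(images, deltas):
    """Least-squares wrapped phase from N phase-shifted fringe images.

    images: array-like of shape (N, H, W); deltas: the N phase shifts
    in radians. Returns arctan of the sin/cos-weighted sums; in practice
    this wrapped phase must still be unwrapped to an absolute phase map."""
    I = np.asarray(images, dtype=float)
    d = np.asarray(deltas, dtype=float)[:, None, None]
    return np.arctan2((I * np.sin(d)).sum(axis=0),
                      (I * np.cos(d)).sum(axis=0))

def projector_coords(phi_v, phi_h, T_v, T_h):
    """Map absolute phases to projector pixels:
    u_p = Phi_V/(2*pi)*T_V, v_p = Phi_H/(2*pi)*T_H."""
    return phi_v / (2.0 * np.pi) * T_v, phi_h / (2.0 * np.pi) * T_h
```

For uniformly spaced shifts $\delta_i = 2\pi i/N$ the background and modulation terms cancel in the sums, so the synthetic phase is recovered exactly up to wrapping.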
$$(P_1, P_2; P_3, P_4) = \frac{\overline{P_1 P_3}/\overline{P_2 P_3}}{\overline{P_1 P_4}/\overline{P_2 P_4}} \tag{7}$$
$$\begin{cases} A_p = a + \lambda_1 b,\quad E_p = a + \lambda_2 b,\quad F_p = a + \lambda_3 b,\quad C_p = a + \lambda_4 b \\ B_p = \bar{a} + \tau_1 \bar{b},\quad E_p = \bar{a} + \tau_2 \bar{b},\quad G_p = \bar{a} + \tau_3 \bar{b},\quad D_p = \bar{a} + \tau_4 \bar{b} \end{cases} \tag{8}$$
$$(A_p, E_p; F_p, C_p) = \frac{(\lambda_1 - \lambda_3)(\lambda_2 - \lambda_4)}{(\lambda_2 - \lambda_3)(\lambda_1 - \lambda_4)},\qquad (B_p, E_p; G_p, D_p) = \frac{(\tau_1 - \tau_3)(\tau_2 - \tau_4)}{(\tau_2 - \tau_3)(\tau_1 - \tau_4)} \tag{9}$$
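The projective invariance of the cross ratio that underlies the mapping can be checked numerically; this is a minimal sketch (the point coordinates and the homography entries are arbitrary illustrative values):

```python
import numpy as np

def cross_ratio(p1, p2, p3, p4):
    """Cross ratio (P1, P2; P3, P4) of four collinear 2D points, computed
    from signed positions t_i along the line through P1 and P4."""
    p1, p2, p3, p4 = map(np.asarray, (p1, p2, p3, p4))
    u = (p4 - p1) / np.linalg.norm(p4 - p1)
    t1, t2, t3, t4 = (float(np.dot(p - p1, u)) for p in (p1, p2, p3, p4))
    return ((t3 - t1) / (t3 - t2)) / ((t4 - t1) / (t4 - t2))

def apply_homography(H, p):
    """Map a 2D point through a 3x3 homography (homogeneous coordinates)."""
    q = H @ np.append(np.asarray(p, dtype=float), 1.0)
    return q[:2] / q[2]
```

Because a homography maps lines to lines and preserves the cross ratio, the value computed before and after the mapping agrees, which is exactly the property used to locate the marker points on the projector DMD.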
