Abstract

A newly developed flexible calibration algorithm for telecentric 3D measurement systems is presented in this paper. We theoretically analyzed the similarities and differences between telecentric and entocentric systems. The telecentric system can be calibrated with the aid of the traditional 2D planar calibration method, and an additional two-step refining process is proposed to improve the calibration accuracy effectively. With the calibration and refining algorithm, an affine camera can be calibrated with a reprojection error of 0.07 pixel. A projector with a small field of view (FOV) is applied to achieve a full 3D reconstruction in our profilometry system. Experiments with a prototype demonstrate the validity and accuracy of the proposed calibration algorithm and system configuration. The reconstruction accuracy reaches 5 µm with a measurement FOV of 28.43 mm×21.33 mm and a working distance of 110 mm.

© 2016 Optical Society of America

1. Introduction

With the development of modern manufacturing technology, the 3D reconstruction of micro-scale objects has become increasingly important. Numerous non-contact optical methods have been employed to determine the profile of a small object, such as confocal microscopy, white-light interferometry and microscopic fringe projection based on triangulation. Fringe projection profilometry (FPP) is one of the most widely used techniques in 3D shape measurement because of its insensitivity to changes of background, contrast and noise. As a measurement method based on geometrical optics, FPP is used for measurement on object scales ranging from millimeters to meters. When a small object is profiled, the traditional microscopic profilometry system uses a stereo light microscope (SLM) as the basic optical system. The main drawback of such a system is that the depth of field (DOF) is limited to the sub-millimeter order, which is not sufficient to measure a 3D object with height variations of several millimeters. Compared to conventional microscopic lenses, telecentric lenses possess numerous unparalleled advantages such as high resolution, nearly zero distortion, increased DOF and constant magnification [1]. For these reasons, more and more attention has been paid to using telecentric lenses instead of microscope optics to measure small objects.

For any optical measurement system, calibration is one of the most important issues and should be considered beforehand. Unlike the perspective projection of pinhole cameras, cameras equipped with telecentric lenses perform affine projection. Thus, many of the existing calibration methods for pinhole cameras cannot be applied to calibrate an affine camera. As a telecentric lens produces affine projection, we refer to a camera equipped with a telecentric lens as an affine camera in the remainder of this article. So far many calibration methods for affine cameras have been proposed [2–6]. These methods either need special 3D calibration targets or need the camera to be placed at different positions, which are not available for a telecentric fringe projection profilometry system with a fixed camera and very limited FOV. Since the mathematical model of affine projection only involves the first two rows of the rotation matrix, we cannot obtain a closed-form solution of the calibration parameters based on orthonormality as the conventional pinhole calibration algorithm with a planar target does [7]. Therefore, both the current affine calibration algorithms and the conventional pinhole calibration algorithm are unsuitable for a fixed affine camera. Yin et al. [8] proposed a system calibration method: the authors employed the general imaging model to calibrate the telecentric profilometry system and achieved a good reconstruction accuracy. Inspired by [7], Chen et al. [9] proposed a closed-form solution to calculate the camera parameters. Both of these calibration methods can accomplish the calibration task of an affine camera, but they both require a positioning stage, which increases the hardware cost of the whole system and makes the calibration process complicated and laborious. Haskamp et al. [10] directly used a nonlinear optimization method to estimate camera parameters.
However, due to the randomly chosen initial guesses of the parameters, the global minimum might not be reached in real experiments. Dong Li et al. [11] proposed a parametric method to calibrate a camera with a telecentric lens. However, their method requires knowing the pixel size of the camera sensor before calibrating the camera, which limits its applicability when such information is unknown. Lanman et al. [12] proposed an interesting method to determine 3D shape by designing a multi-flash illumination system. This method is not suitable for a fringe projection profilometry system because a telecentric lens with a large FOV is required in their measurement setup. Recently, Beiwen Li et al. [13] proposed a calibration algorithm for a telecentric profilometry system with the aid of an additional projector with a long working distance lens. This calibration framework can accomplish the calibration task and achieves a relative accuracy of 0.1%. But apart from the affine camera and projector, another projector attached with a long working distance lens is needed to finish the calibration task, which complicates the calibration procedure and the reconstruction system.

In this paper we propose a flexible calibration method for telecentric profilometry systems. We thoroughly analyze the geometric models of the telecentric and entocentric systems, take advantage of the traditional planar calibration method proposed by Zhang et al. [7], and deduce an effective calibration method for the telecentric system. In our method, only a planar target is needed to finish the whole calibration task and no additional device is required. Additionally, a two-step compensation method is proposed to counteract the out-of-focus effect brought about by the limited DOF of the telecentric lens and to improve the localization accuracy of the control points.

The remainder of this paper is organized as follows: In Sec. 2, the details of affine camera calibration and the system calibration are given. Sec. 3 describes the system setup and the experiments. The advantages and shortcomings of the proposed method are fully discussed in Sec. 4 and Sec. 5 concludes this work.

2. Principle

2.1. Calibration method for telecentric profilometry system

2.1.1. Affine camera calibration

The aperture stop of an object-side telecentric lens is placed at the image-side focal point [14] and only the light rays parallel to the optical axis from the object side are allowed to pass through. In the geometric model, the entrance pupil is located at infinity on the object side because the aperture stop is placed at the image-side focal point $f_0$. Therefore, the projection center also lies at infinity on the object side [15]. Because of this property, the object-side telecentric lens produces an affine projection [16]. As a consequence, all equally sized objects in the world space, however close to or far from the lens, have the same-sized projection in the image. This explains why a telecentric lens produces constant magnification.

To find a calibration method for the affine camera, we first explored the similarities and differences between the telecentric and entocentric geometry. Fig. 1(a) shows the projection process of a point $M$ in the world frame into the image coordinate system. $O_w(X_w, Y_w, Z_w, 1)$, $O_c(X_c, Y_c, Z_c, 1)$, $O_i(x_i, y_i, 1)$ and $O_s(u, v, 1)$ represent the world, camera, image and pixel coordinate systems in homogeneous form, respectively. Equation (1) is the well-known projection process of the pinhole model: a point $(X_w, Y_w, Z_w)$ in the world frame is projected to the point $(u, v)$ in the image coordinate system.

\[
Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= \begin{bmatrix} f m_x & s & u_0 \\ 0 & f m_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ r_{31} & r_{32} & r_{33} & t_z \end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}. \tag{1}
\]

Fig. 1 Pinhole camera geometry.

Ignoring the extrinsic transformation and the mapping from the image system to the pixel coordinate system, a point $M$ in space with coordinates $(X_c, Y_c, Z_c)^T$ is mapped to the point on the image plane where a line joining the point $M$ to the center of projection meets the image plane. This process is shown in Fig. 1(b). By similar triangles, one quickly computes that the point $(X_c, Y_c, Z_c)^T$ is mapped to the point $(f X_c/Z_c,\; f Y_c/Z_c,\; f)^T$ on the image plane. We can get:

\[
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= \begin{bmatrix} (f/Z_c)X_c \\ (f/Z_c)Y_c \\ 1 \end{bmatrix}, \tag{2}
\]
which can be further transformed to the following form:
\[
Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= \begin{bmatrix} f X_c \\ f Y_c \\ Z_c \end{bmatrix}
= \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix}, \tag{3}
\]
where $\mathrm{diag}(f, f, 1)[I\,|\,0]$ is the simplest camera matrix of a pinhole camera. The camera matrix of an affine camera, however, is different from that of a pinhole camera: an affine camera is an infinite camera, which is not sensitive to the $Z$ information along the optical axis of the camera. The imaging process of an affine camera is demonstrated in Fig. 2. It can be noticed that the telecentric lens performs a simple magnification in both the $X$ and $Y$ directions, as shown in Eq. (4).
\[
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= \begin{bmatrix} m X_c \\ m Y_c \\ 1 \end{bmatrix}
= \begin{bmatrix} m & 0 & 0 & 0 \\ 0 & m & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix}, \tag{4}
\]
where $m$ represents the magnification factor of the telecentric lens, which can also be written as $f/z_0$, where $z_0$ is the working distance of the lens and $f$ is the focal length. Note that the magnification factor is fixed once the telecentric lens is specified. In Eq. (2), however, $(f/Z_c)$ varies with the $Z$ coordinate of the reconstructed point in the camera coordinate system.
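The depth dependence that distinguishes Eq. (3) from Eq. (4) can be sketched numerically. The following minimal NumPy example, with hypothetical values for $f$, $m$ and the two test points, shows that the telecentric projection of a point is unchanged when only its depth varies, while the pinhole projection shifts:

```python
import numpy as np

def pinhole_project(P, f):
    """Pinhole model, Eq. (3): the image point depends on the depth Z_c."""
    X, Y, Z = P
    return np.array([f * X / Z, f * Y / Z])

def affine_project(P, m):
    """Affine (telecentric) model, Eq. (4): pure magnification, independent of Z_c."""
    X, Y, _ = P
    return np.array([m * X, m * Y])

# Two points with identical (X, Y) but different depths (hypothetical values, mm).
near = np.array([2.0, 1.0, 100.0])
far  = np.array([2.0, 1.0, 110.0])

# The pinhole image shifts with depth; the telecentric image does not.
print(pinhole_project(near, f=8.0), pinhole_project(far, f=8.0))
print(affine_project(near, m=0.3), affine_project(far, m=0.3))
```

This is exactly the constant-magnification property of Sec. 2.1.1: the affine image coordinates depend only on $(X_c, Y_c)$.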

Fig. 2 The object-side telecentric imaging process.

Equations (3) and (4) demonstrate the difference between the pinhole camera model and the affine model. Once the extrinsic transformation and the mapping from the image system to the pixel coordinate system are taken into consideration, the whole projection of an affine camera becomes:

\[
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= \begin{bmatrix} m m_x & s & u_0 \\ 0 & m m_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}, \tag{5}
\]
where $m_x$ and $m_y$ are the numbers of pixels per unit distance in the $x$ and $y$ directions in image coordinates. Equation (5) is also called the affine calibration model, which can also be presented as:
\[
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= H \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix},
\qquad H = A\,[R\;|\;t], \tag{6}
\]
where $H$ represents the homography between a world point and its image; $A$ and $[R\;|\;t]$ denote the intrinsic and extrinsic matrices of the affine camera, respectively.

Rather than two totally different geometrical models, the pinhole model and the affine calibration model have strong connections: the affine camera is a special case of the pinhole camera obtained by moving its camera center C backwards along the principal ray to infinity. More details about the connections between the affine camera and the pinhole camera can be found in [16].

As we introduced previously, the affine calibration model only involves the first two rows of the rotation matrix. Therefore, we cannot obtain a closed-form solution of the calibration parameters, and a direct attempt to apply the traditional planar calibration method [7] to an affine camera will fail. However, it is worthwhile analyzing what happens if we attempt to do so. To build the camera coordinate system, the algorithm in [7] tries to find a camera center that lies on the principal axis at a distance $f$ from the image plane (sensor plane). Theoretically, the center of an affine camera lies at infinity, which means the camera coordinate system cannot be built. In reality, however, due to manufacturing inaccuracies, the telecentric lenses used in affine cameras always have a parameter named “telecentric slope” or “telecentricity” [1]. This parameter means that the optical rays passing through the telecentric lens are not strictly parallel to each other; they in fact intersect at a point far away from the image plane. When we apply the traditional planar calibration method to the telecentric system to obtain the intrinsic and extrinsic parameters directly, the telecentric system is recognized as an entocentric system with a very large focal length $f$, and thus we can obtain a set of intrinsic and extrinsic parameters. The accuracy, however, is limited because the parameters are calculated by an “unsuitable” algorithm. Notice that no matter whether the system is telecentric or entocentric, the extrinsic parameters have the same meaning: the rotation and translation between the world system and the camera system. Thus, once we obtain the extrinsic parameters by applying the traditional planar calibration method to the imperfect telecentric system, the calculated extrinsic parameters in Eq. (1), which we call the raw extrinsic parameters, can be used to roughly describe the rotation and translation between the two coordinate systems. One issue should be stressed here: among the calculated extrinsic parameters, $t_z$ can no longer be used to describe the $Z$ translation between the world and camera coordinate systems. From Eq. (5) we can notice that only eight parameters, i.e., all but $r_{31}$, $r_{32}$, $r_{33}$ and $t_z$, need to be taken from Eq. (1) to represent the rotation and translation of the affine camera. Thus we now have all the extrinsic parameters of the affine calibration model.

Once the raw extrinsic parameters are obtained, we are able to calculate the intrinsic parameters of the affine camera easily using the following equation:

\[
A_i = H_i\,[R\;|\;t]_i^{-1}, \qquad i \in [1, N_p], \tag{7}
\]
where the subscript $i$ denotes the $i$th position of the calibration board and $N_p$ is the total number of positions to which the calibration board is moved during the camera calibration process. Finally, we obtain the raw intrinsic matrix $A$ by averaging the $A_i$.
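Equation (7) can be sketched in a few lines of NumPy. For a planar target ($Z_w = 0$), both the homography $H_i$ and the reduced extrinsic matrix (first two rotation columns plus translation, with last row $[0\;0\;1]$) become invertible 3×3 matrices. The matrices below are hypothetical values chosen only to exercise the recovery:

```python
import numpy as np

def raw_intrinsics(H_list, E_list):
    """Eq. (7): recover an intrinsic matrix A_i = H_i @ inv(E_i) per view,
    then average over all views to obtain the raw intrinsic matrix A.

    H_list: per-view 3x3 plane-to-image homographies.
    E_list: per-view reduced 3x3 extrinsic matrices (planar target, Z_w = 0).
    """
    A_list = [H @ np.linalg.inv(E) for H, E in zip(H_list, E_list)]
    return sum(A_list) / len(A_list)

# Synthetic check: build H = A @ E for one view and recover A.
A = np.array([[63.46, 0.006, 0.0], [0.0, 65.75, 0.0], [0.0, 0.0, 1.0]])
E = np.array([[0.98, 0.008, 0.98], [0.001, 0.99, 1.12], [0.0, 0.0, 1.0]])
A_raw = raw_intrinsics([A @ E], [E])
```

With noise-free synthetic data the recovery is exact; in a real calibration the averaging over $N_p$ views suppresses per-view estimation noise.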

One issue should be pointed out about Eq. (5): according to [16], the value of the principal point $(u_0, v_0)$ depends on the particular choice of world coordinates, and hence is not an intrinsic property of the camera itself. This means that the camera matrix $A$ does not have a principal point, which is in accord with the fact that the center of an affine camera lies at infinity. Therefore, the final imaging model of the affine camera is accordingly changed to the following form:

\[
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= \begin{bmatrix} m m_x & s & 0 \\ 0 & m m_y & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} r_{11} & r_{12} & r_{13} & t'_x \\ r_{21} & r_{22} & r_{23} & t'_y \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}, \tag{8}
\]
where
\[
\begin{bmatrix} t'_x \\ t'_y \end{bmatrix}
= \begin{bmatrix} t_x \\ t_y \end{bmatrix}
+ \begin{bmatrix} m m_x & s \\ 0 & m m_y \end{bmatrix}^{-1}
\begin{bmatrix} u_0 \\ v_0 \end{bmatrix}. \tag{9}
\]

Since telecentric lenses commonly exhibit very low distortion compared to entocentric lenses [1], we only consider the radial distortion effect, which is expressed as follows:

\[
\begin{aligned}
x'_{cn} &= (1 + k_1 r^2 + k_2 r^4)\,x_{cn}, \\
y'_{cn} &= (1 + k_1 r^2 + k_2 r^4)\,y_{cn},
\end{aligned} \tag{10}
\]
where the radial distance $r = (x_{cn}^2 + y_{cn}^2)^{1/2}$, $k_1$ and $k_2$ are the coefficients of radial distortion, and $(x_{cn}, y_{cn})$ and $(x'_{cn}, y'_{cn})$ denote the distortion-free and distorted points, respectively.
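The two-coefficient radial model of Eq. (10) can be sketched as a small helper; the coefficient values used below are hypothetical:

```python
def distort(x, y, k1, k2):
    """Apply the radial distortion of Eq. (10) to a distortion-free
    normalized point (x_cn, y_cn), returning the distorted point."""
    r2 = x * x + y * y                      # r^2 = x_cn^2 + y_cn^2
    factor = 1.0 + k1 * r2 + k2 * r2 * r2   # 1 + k1 r^2 + k2 r^4
    return factor * x, factor * y
```

For example, with a small negative $k_1$ (barrel distortion) a point is pulled slightly toward the origin, while $k_1 = k_2 = 0$ leaves it unchanged.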

All the extrinsic and intrinsic parameters we obtained so far are the initial guesses which should be refined by minimizing Eq. (11).

\[
\sum_{i=1}^{N_p} \sum_{j=1}^{N_c} \left\| p_{ij} - \hat{p}(A, R_i, t_i, k_1, k_2, M_j) \right\|^2, \tag{11}
\]
where $N_c$ is the number of control points used to calibrate the camera; $p_{ij}$ represents the located pixel coordinates of the $j$th control point in the $i$th position, and $\hat{p}(A, R_i, t_i, k_1, k_2, M_j)$ is the projection of point $M_j$ in image $i$ according to Eq. (8). Minimizing Eq. (11) is a nonlinear minimization problem, which can be solved with the Levenberg-Marquardt (L-M) algorithm [17].

One may ask: if the traditional calibration method can be used to calculate the extrinsic parameters, why not directly use the pinhole model to describe the imaging process of an affine camera? The reason is that the calibration accuracy of applying the planar calibration method [7] to an affine camera remains very limited no matter how well we optimize the parameters. By using the affine model in Eq. (5), however, the calibration accuracy improves substantially after a mathematical optimization process, which will be further proven in the experiments in Sec. 3.2. It should be stressed that the foundation of using the traditional planar calibration method to obtain the raw extrinsic parameters of an affine camera is that the real system is not perfectly telecentric because of manufacturing inaccuracies and various noises in the whole imaging process. The observation that the extrinsic parameters, except for $t_z$, are invariant under both perspective and affine projection is the other theoretical foundation we rely on to calibrate an affine camera.

Now we have the whole calibration procedure of an affine camera:

  • Get the homography matrices $H_i$, $i = 1, 2, \ldots, N$ of $N$ views.
  • Calculate the raw extrinsic parameters by using the planar calibration algorithm in [7].
  • Obtain $r_{11}$, $r_{12}$, $r_{13}$, $t_x$, $r_{21}$, $r_{22}$, $r_{23}$, $t_y$ from the calculated extrinsic parameters.
  • Apply Eq. (7) to get the intrinsic parameters of the affine camera.
  • Optimize the intrinsic and extrinsic parameters as well as the distortion parameters by minimizing Eq. (11) using the Levenberg-Marquardt algorithm.
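The final refinement step can be sketched with SciPy's Levenberg-Marquardt solver. The example below is a simplified synthetic illustration, not the full implementation: it refines only the two effective magnifications and the in-plane translation of a single fronto-parallel view ($R = I$, zero skew and distortion), with all numeric values chosen hypothetically:

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic planar control points (Z_w = 0), world units in mm (7x5 grid, 3.2 mm pitch).
M = np.stack(np.meshgrid(np.arange(7) * 3.2, np.arange(5) * 3.2), -1).reshape(-1, 2)

def project(params, M):
    """Affine projection of Eq. (8) for a fronto-parallel view (R = I, s = 0)."""
    mmx, mmy, tx, ty = params
    u = mmx * (M[:, 0] + tx)
    v = mmy * (M[:, 1] + ty)
    return np.column_stack([u, v])

# Hypothetical ground-truth parameters and the resulting "observed" image points.
true = np.array([63.46, 65.75, 0.98, 1.12])
observed = project(true, M)

def residuals(params):
    # Eq. (11): stacked reprojection residuals p_ij - p_hat.
    return (project(params, M) - observed).ravel()

# L-M refinement from a rough initial guess, as in the last step of the procedure.
fit = least_squares(residuals, x0=[60.0, 60.0, 0.0, 0.0], method="lm")
```

In the full method, the parameter vector also carries the skew, the rotations of every view and the distortion coefficients $k_1$, $k_2$, and the residuals are accumulated over all $N_p$ views.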

2.1.2. System calibration

To calibrate the projector, several sinusoidal and binary fringe patterns are projected onto the planar calibration target to get the horizontal and vertical phase maps φh and φv, respectively. Using such phase information we can map a circle center from the camera sensor plane to the projector DMD plane, thus the projector is able to “capture” images indirectly [18].

We extract the sub-pixel position of each circle center $O_i^c$ from the calibration plane image. The absolute phases $\varphi_h$, $\varphi_v$ of the point $O_i^c$ in the two directions can be obtained by the phase-shifting algorithm and linear interpolation. The coordinates $(u_i^p, v_i^p)$ of the homographic point $O_i^p$, which has the same absolute phases in the two directions, can be calculated as follows:

\[
u_i^p = \frac{\varphi_v(O_i^c)}{2\pi N_v} W, \qquad
v_i^p = \frac{\varphi_h(O_i^c)}{2\pi N_h} H, \tag{12}
\]
where $N_v$, $N_h$ are the numbers of periods in the vertical and horizontal fringe patterns, respectively, and $W$ and $H$ are the width and height of the fringe patterns in the projector frame.
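Equation (12) amounts to one scaling per coordinate; a minimal sketch, using the projector resolution from Sec. 3.1 and hypothetical period counts:

```python
import math

def camera_to_projector(phi_v, phi_h, Nv, Nh, W, H):
    """Eq. (12): map a circle center detected in the camera image to the
    projector DMD plane, given its absolute phases (radians, unwrapped
    over Nv / Nh periods) in the two fringe directions."""
    u_p = phi_v / (2 * math.pi * Nv) * W
    v_p = phi_h / (2 * math.pi * Nh) * H
    return u_p, v_p

# A center whose vertical phase spans all Nv periods maps to u_p = W;
# half of the horizontal phase range maps to v_p = H / 2.
u_p, v_p = camera_to_projector(2 * math.pi * 18, 2 * math.pi * 9,
                               Nv=18, Nh=18, W=912, H=1140)
```

This is how the projector “captures” images indirectly [18]: every camera-side control point gets a corresponding DMD-plane coordinate through the phase maps.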

In our telecentric profilometry system, the projector with a small FOV can be calibrated in the same way as the camera once the control points are localized in the projector's sensor plane.

2.2. Refinement of the calibration target control points

In general, the detection of control points, i.e., the centers of the circular targets or the corners of the chessboard, can be carried out with the ellipse fitting technique [19] or a corner detection algorithm. In experiments, however, the true locations of these centers can only be estimated with uncertainty. Two main factors that affect the localization precision in the imaging process are the out-of-focus effect and the non-frontal orientation of the calibration board. To reduce the calibration errors associated with these two factors, we propose a two-step refining process. Firstly, we deblur the captured images by carrying out a self convolution algorithm [20]. Secondly, the raw images are undistorted and reversely projected onto the frontal image plane in the world coordinate system using Eq. (8) with the aid of an interpolation algorithm. After localizing the control points in the frontal image, we project these control points back to the camera image plane and calculate the calibration parameters again using the introduced calibration procedure. Since this process is an iterative refinement [21], we call it the iterative control points refinement.

It is known that the DOF of a telecentric lens is often between a fraction of a millimeter and several millimeters, which means that if we move the calibration board to several positions during the calibration procedure, it is likely that some parts of the planar board fall out of the focus area, as shown in Fig. 3(a). Reference [20] proposed a self convolution method to obtain sharpened images. The algorithm has been proven valid for deblurring images with both spatially invariant and spatially varying blur. In our telecentric system, we apply this algorithm to counteract the out-of-focus effect caused by the limited DOF. Figure 3 demonstrates a captured image before and after the deblurring process. Figure 3(b) clearly shows that the edges of the circles can be obtained more reliably compared to Fig. 3(a), which is blurred due to the out-of-focus phenomenon. As a result, the circle centers can be pinpointed more accurately in the deblurred image and a better calibration result can be obtained. It should be stressed that this algorithm should only be used on the images of the calibration board such as Fig. 3(a). Once fringe patterns are projected onto the board to calibrate the projector, this algorithm should not be carried out because it will no doubt degrade the phase quality calculated from the fringe patterns.

Fig. 3 Using the self convolution method: the image (a) before and (b) after being deblurred.

Another good idea to improve the control points' localization precision was first proposed by Datta et al. [21]. The basic idea is that the control points can be pinpointed more accurately in a canonical fronto-parallel image than in a non-frontal image. In light of this idea, we first calculate the calibration parameters using the non-frontal images. Then we undistort and unproject the captured images to frontal images, so that we can precisely localize the control points and re-estimate the calibration parameters based on these frontal images. Figure 4 shows the captured calibration images and the related fronto-parallel images. The FOV of our affine camera is too small to capture an image containing the whole calibration board; thus only a 7 × 5 circle array is used to calibrate the camera and projector in our experiments.

Fig. 4 Top row: The images of calibration board in five different positions. Bottom row: Images have been undistorted and unprojected to canonical fronto-parallel images.

Considering the two-step refining process, now we have the whole camera calibration procedure:

  1. Deblur the captured images and detect the control points using the ellipse-fitting method.
  2. Use the detected control points to estimate calibration parameters by using the calibration method introduced in Sec. 2.1.1.
  3. Undistort and unproject the captured images to the frontal image and localize the control points in the world frame.
  4. Reproject the re-localized control points to image coordinate system with Eq. (8) and calibrate the camera again by using the same method in the second step.
  5. Repeat steps 3 and 4 iteratively until convergence.
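The point-mapping core of steps 3 and 4 can be sketched with a planar homography; the deblurring, the interpolation-based image warping and the ellipse re-localization are omitted here, and the homography values below are hypothetical:

```python
import numpy as np

def to_frontal(points_px, H):
    """Step 3 (point part): unproject pixel points to the frontal (world)
    plane by inverting the planar homography H of Eq. (6)."""
    p = np.column_stack([points_px, np.ones(len(points_px))])
    w = p @ np.linalg.inv(H).T
    return w[:, :2] / w[:, 2:3]

def to_image(points_world, H):
    """Step 4: reproject the re-localized frontal points back to the
    camera image plane for the next calibration pass."""
    p = np.column_stack([points_world, np.ones(len(points_world))])
    q = p @ H.T
    return q[:, :2] / q[:, 2:3]

# Hypothetical plane-to-image homography and two detected circle centers (px).
H = np.array([[63.46, 0.4, 810.0], [0.2, 65.75, 621.0], [0.0, 0.0, 1.0]])
centers = np.array([[100.0, 200.0], [300.0, 400.0]])
round_trip = to_image(to_frontal(centers, H), H)
```

In the actual iteration, the frontal points are re-detected on the warped image between the two mappings, so the round trip is not an identity; here, without re-localization, it recovers the input points exactly.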

Once the camera calibration is finished, the calibration process of the projector is much simpler: one can get the homographic points of the control points with the aid of the phase information calculated with the horizontal and vertical phase-shifting method and get the projector calibration parameters through the method introduced in Sec. 2.1.2. The refinement is no longer necessary in projector calibration since we only need the control points’ homographic points to calibrate the projector, and the control points themselves have been precisely located during the camera calibration process.

3. System setup and experiments

3.1. System setup

The optical layout of the telecentric fringe projection profilometry system is shown in Fig. 5. The affine camera consists of a DH-SV2001FC camera with a resolution of 1628×1236 and a telecentric lens MML03-HR110-5M with a magnification of 0.3. The FOV of the camera is about 28.43 mm×21.33 mm and the DOF reaches 7 mm, which is usually sufficient to measure the depth of a small complex 3D object. The working distance of the telecentric lens is 110 mm. The small-FOV projection is realized by a LightCrafter Pro4500 projector with a resolution of 912×1140. The pattern of the calibration target is a white 7×7 circle array distributed uniformly on a ceramic planar board. The distance between adjacent circles is 3.2 mm. In order to obtain strictly sinusoidal fringe patterns, the γ correction method presented in [22] is performed in advance.

Fig. 5 Prototype of telecentric fringe projection profilometry system (all the cables were removed for better demonstration).

3.2. Camera and system calibration

The first experiment was designed to validate the proposed calibration method for affine cameras. The calibration board was placed at the working distance of the telecentric lens. Images of the calibration board in several positions were taken by the camera. As described in Sec. 2.1.1, the traditional planar calibration method was first applied to calculate the raw extrinsic parameters. Equations (13) and (14) show the intrinsic matrix and one of the extrinsic matrices obtained by the traditional planar calibration method, respectively.

\[
\mathrm{intrin} = \begin{bmatrix}
3.64\times10^5 & 0.01 & 810.35 \\
0 & 3.72\times10^5 & 621.51 \\
0 & 0 & 1
\end{bmatrix}, \tag{13}
\]
\[
\mathrm{extrin} = \begin{bmatrix}
0.91 & 0.03 & 0.12 & 9.95 \\
0.01 & 0.95 & 0.005 & 11.52 \\
0.21 & 0.03 & 0.99 & 5.77\times10^3
\end{bmatrix}. \tag{14}
\]
From the intrinsic matrix $\mathrm{intrin}$ we can notice that $f \cdot m_x = 3.64 \times 10^5$ and $f \cdot m_y = 3.72 \times 10^5$. In our system the resolution of the CCD camera is 1628 × 1236 and the physical size of the CCD chip is 7.112 mm×5.334 mm, which means $m_x = 1628/7.112 = 228.91$ and $m_y = 231.72$ pixels per mm. Thus the focal length $f$ is approximately 1590 mm, a very large number given that the actual focal length of the lenses used in FPP is about 5 mm to 16 mm. This large focal length, expected from the analysis above, indicates that the camera center is considered to lie far away from the sensor plane. Not surprisingly, the same phenomenon can be noticed in $t_z = 5.77 \times 10^3$ in the extrinsic matrix $\mathrm{extrin}$, because $t_z$ represents the translation distance between the origin of the world system and the origin of the camera system. As introduced in the previous section, when applying the traditional planar calibration method, the algorithm mistakenly treats the affine camera as an entocentric camera with a large focal length and establishes the camera coordinate system based on a camera center far away from the image plane. This experimental result is in accord with the analysis offered in Sec. 2.1.1.
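The arithmetic in this paragraph can be checked directly, using the sensor dimensions quoted in the text:

```python
# Reproduce the focal-length estimate implied by the raw intrinsic matrix, Eq. (13).
f_mx, f_my = 3.64e5, 3.72e5      # f*m_x and f*m_y read off the intrinsic matrix
mx = 1628 / 7.112                # pixels per mm in x (1628 px over 7.112 mm)
my = 1236 / 5.334                # pixels per mm in y (1236 px over 5.334 mm)
f_x = f_mx / mx                  # implied focal length from the x direction, in mm
f_y = f_my / my                  # implied focal length from the y direction, in mm
print(round(mx, 2), round(my, 2), round(f_x), round(f_y))  # f_x ≈ 1590 mm
```

The result of roughly 1.6 m, two orders of magnitude above a typical FPP lens, is the numerical fingerprint of a camera center pushed toward infinity.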

Once the raw extrinsic matrices were calculated, we obtained the raw intrinsic parameters with Eq. (7), and then all these parameters were optimized through the L-M algorithm. The refined intrinsic matrix $\mathrm{intrin}_{\mathrm{affine}}$ and extrinsic matrix $\mathrm{extrin}_{\mathrm{affine}}$ of the camera are shown in Eq. (15) and Eq. (16), respectively.

\[
\mathrm{intrin}_{\mathrm{affine}} = \begin{bmatrix}
63.46 & 0.006 & 0 \\
0 & 65.75 & 0 \\
0 & 0 & 1
\end{bmatrix}, \tag{15}
\]
\[
\mathrm{extrin}_{\mathrm{affine}} = \begin{bmatrix}
0.98 & 0.008 & 0.16 & 0.98 \\
0.001 & 0.99 & 0.04 & 1.12 \\
0 & 0 & 0 & 1
\end{bmatrix}. \tag{16}
\]

After the affine camera is calibrated, the projector can be calibrated by projecting a series of fringe patterns onto the calibration board at each of the positions used to calibrate the camera. To fully validate the proposed calibration method, we calculated and compared the calibration results of the camera and the projector under four different circumstances:

  1. Directly apply the traditional calibration method to the affine camera.
  2. Use the proposed calibration method without deblurring and iterative control points refinement.
  3. Use the proposed calibration method with iterative control points refinement.
  4. Use the proposed calibration method with deblurring and iterative control points refinement.

Figure 6 shows the reprojection errors of the camera under the four circumstances. It clearly shows that once we refined the raw intrinsic and extrinsic parameters using the affine projection model, the reprojection errors dropped drastically compared to directly applying the traditional planar calibration method. Figures 6(c) and 6(d) show that by deblurring the captured images, the reprojection error is slightly reduced. After several experiments we found that the deblurring process cannot always improve the calibration results as predicted. This is because in the real calibration process the calibration target is out of focus only on some occasions. If a captured image is not out of focus, the localization accuracy of the control points will not be improved by the deblurring algorithm. Therefore, whether the captured images should be deblurred depends on whether they are out of focus. Once we applied the deblurring algorithm and the iterative control points refinement, the reprojection error was reduced to 0.07 pixel, as shown in Fig. 6(d). Considering that the resolution of the camera is 1628 × 1236 pixels, the calibration accuracy is quite high.

Fig. 6 Reprojection errors of the camera under the four different circumstances.

The mean reprojection errors of both the camera and projector under these four circumstances are shown in Table 1. As described in the previous section, the two-step refining process need not be used in the projector calibration. Once the camera is calibrated more accurately through the refining process, the calibration accuracy of the projector is also improved. This is because the projector calibration is based on the camera calibration: the projector needs the homographic points of the control points to be calibrated. Thus, once the control points have been localized more precisely in camera coordinates, the accuracy of the projector calibration improves accordingly.

Table 1. Calibration results of the affine camera and projector

In Table 1, Error means the reprojection error in pixels and a, b, c and d represent the circumstances introduced previously. One issue about projector calibration should be pointed out. We projected several fringe patterns onto the calibration board to calibrate the projector. Based on our observation, however, the phase calculated from the captured fringe patterns was very sensitive to the relative positions of the projector, the CCD camera and the calibration board. This is because the DOFs of both the projector and the CCD camera are very limited (a fraction of a millimeter to several millimeters). For a common triangulation-based imaging system, the angle between the projector and the CCD camera dramatically reduces the common focus area, as shown in Fig. 7. The limited common focus area will no doubt affect the image quality of the projected fringe patterns. To address this problem, two options could be considered. The first is to use a special camera designed on the Scheimpflug principle, as the authors in [8] did. Inside this type of camera the sensor plane and the principal plane are not parallel [1] to each other, which ensures a large common focus area. However, this is a hardware solution that is not available to everyone because of the extra hardware cost. The second option, which we chose in our experiments, is that at each calibration position of the planar target, we carefully check the phase quality of the fringe patterns captured by the camera. If the phase is not as good as required, we keep moving the calibration board to other positions until the phase quality is satisfying. Good phase quality ensures that the control points on the CCD sensor can be mapped accurately onto the projector's DMD chip, which results in a good projector calibration. In fact, the second option is always recommended for projector calibration in telecentric systems, whether or not the special camera is used.

Fig. 7 Common focus area demonstration of the fringe projection profilometry system.

3.3. 3D reconstruction

A series of experiments were conducted to further verify the performance of the proposed calibration algorithm in a real 3D reconstruction task. In the first experiment we used a positioning stage with an accuracy of 1 µm in the Z axis and changed the height of the stage seven times by known distances ΔZ. The actual displacements of the stage were then measured by the calibrated telecentric profilometry system. The ground truth of the ΔZ displacement is 50 µm. The measured displacements were calculated from the average height changes of the 35 control points on the calibration board. The testing results are shown in Table 2.


Table 2. Measurement results of the displacements ΔZ

Table 2 shows that the root-mean-square (RMS) error of the measured displacements is about 5 µm. Defining the relative accuracy as the ratio of the reconstruction error to the width of the FOV, the relative measurement accuracy is about 0.022%.
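As a minimal illustration of how these two figures are obtained, the sketch below computes the RMS error of a set of measured displacements against the 50 µm ground truth and the corresponding relative accuracy. The displacement values are hypothetical placeholders, not the data of Table 2.

```python
import numpy as np

# Hypothetical measured displacements in µm; the ground truth is 50 µm per step.
measured = np.array([46.0, 53.2, 48.1, 55.4, 44.9, 52.7, 49.3])
ground_truth = 50.0

errors = measured - ground_truth
rms_error = np.sqrt(np.mean(errors**2))   # RMS reconstruction error in µm

# Relative accuracy: reconstruction error over the width of the FOV.
fov_width_um = 23.7e3                     # 23.7 mm FOV width, in µm
relative_accuracy = rms_error / fov_width_um

print(f"RMS error: {rms_error:.2f} um")
print(f"Relative accuracy: {relative_accuracy:.3%}")
```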

Figure 8 shows the 3D positions of the 35 control points. For clarity, only 3 of the 6 positions are displayed. The figure clearly demonstrates that the 3D positions of the 35 centers and the seven displacements have been accurately reconstructed.

 

Fig. 8 3D images of the control points in 3 different positions.


Using the same profilometry system as in the previous experiment, the 3D shapes of two objects with complex geometry were reconstructed, as shown in Fig. 9.

 

Fig. 9 Reconstruction results of two small objects with complex surface geometry.


Figure 9(a) shows a small toolbox with tiny regular spherical balls on its top. Figure 9(b) shows the reconstructed point cloud of a small area on the top of the box, and Fig. 9(c) is the rendered result. Figures 9(d)–9(f) show the same reconstruction process for an ordinary key, whose surface contains complex geometric variations. Note that, for better demonstration, Figs. 9(a) and 9(d) were not captured by the CCD camera of our telecentric system, because its FOV is too small to cover the whole object. These two experiments clearly confirm the validity of the proposed calibration algorithm and telecentric 3D measurement system.

4. Discussion

The previous experiments clearly show that the proposed affine calibration algorithm can accomplish the 3D reconstruction of small-scale objects. As introduced before, the main advantage of the proposed method is that it is a 2D planar calibration method: one can calibrate the whole telecentric system with nothing more than a planar target. Compared with [13] and [9], no positioning stage or additional projector is required during the entire calibration process, so the algorithm is flexible and easy to carry out, and the telecentric fringe projection profilometry system completes the 3D reconstruction task well. The proposed algorithm also has a potential drawback: the whole calibration contains two mathematical optimization processes, one in the traditional calibration and the other in the refinement of the affine calibration model, so it is mathematically more complicated than the traditional calibration method. However, since the computer performs all of the computation and no additional manual work is needed, the algorithm remains flexible and easy to carry out. It is worth mentioning that the measurement FOV of our profilometry system is 23.7 mm × 17.78 mm. With the proposed calibration method, one can measure objects smaller than those in our experiments by reducing the FOV, which only requires a telecentric lens with a larger magnification factor.
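To illustrate the last point: because a telecentric lens has a constant magnification, the object-side FOV is simply the sensor size divided by the magnification, so choosing a lens with a larger magnification directly shrinks the measurable area. A minimal sketch, in which the sensor dimensions and magnification are illustrative assumptions rather than the parameters of our system:

```python
def telecentric_fov(sensor_w_mm, sensor_h_mm, magnification):
    """Object-side field of view of a telecentric lens.

    With constant magnification m, a sensor of size (w, h) images an
    object-side area of (w/m, h/m); a larger m means a smaller FOV.
    """
    return sensor_w_mm / magnification, sensor_h_mm / magnification

# e.g. a hypothetical 2/3" sensor (8.8 mm x 6.6 mm) behind a 0.5x lens:
w, h = telecentric_fov(8.8, 6.6, 0.5)
print(f"FOV: {w:.1f} mm x {h:.1f} mm")  # doubling m would halve both sides
```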

As described previously, when calibrating the affine camera we first apply the traditional planar calibration method and then optimize the camera parameters as well as the extrinsic parameters using the affine model. This two-step algorithm yields very good calibration accuracy in most cases. In a few cases, however, the traditional method alone already calibrates the affine camera with a reprojection error below 0.2 pixel; the second refining step then cannot improve the accuracy significantly, and the reprojection error is only slightly reduced after minimizing Eq. (11). Nevertheless, since Eq. (8) is the correct projection model of an affine camera and the refining process does reduce the reprojection error, we suggest always using the full two-step algorithm to calibrate an affine camera.
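The refining step can be sketched as a nonlinear least-squares problem over the affine model parameters. The sketch below is our own illustration, not the authors' implementation: it assumes a simplified single-view version of the affine model of Eq. (8) without lens distortion, and the parameter layout and function names are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def affine_project(params, world_pts):
    """Project 3D points with a simplified affine (telecentric) model.

    params = [m_x, m_y, s, rx, ry, rz, tx, ty]: effective magnifications,
    skew, a rotation vector and the in-plane translation (assumed layout).
    """
    m_x, m_y, s, rx, ry, rz, tx, ty = params
    R = Rotation.from_rotvec([rx, ry, rz]).as_matrix()
    # Orthographic projection: only the first two rows of [R|t] matter.
    cam = world_pts @ R[:2].T + np.array([tx, ty])
    u = m_x * cam[:, 0] + s * cam[:, 1]
    v = m_y * cam[:, 1]
    return np.column_stack([u, v])

def refine(params0, world_pts, image_pts):
    """Refining step: minimize the reprojection error, as in Eq. (11),
    starting from the raw parameters of the traditional calibration."""
    def residuals(p):
        return (affine_project(p, world_pts) - image_pts).ravel()
    # Levenberg-Marquardt, as referenced in [17].
    return least_squares(residuals, params0, method="lm")
```

The full model of Eq. (11) additionally sums over all board poses and includes the distortion coefficients k1 and k2; extending the residual function accordingly is straightforward.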

In [23], Li et al. proposed a similar calibration method for a stereo light microscope based on invariable extrinsic parameters, in which the complicated non-coaxial optical system was approximated as an affine camera. In our paper, by contrast, the affine projection is realized with a telecentric lens. The reason the traditional planar calibration method can provide the raw extrinsic parameters is that the telecentric lens used in our system is not perfectly telecentric in reality. We presented detailed geometric and mathematical proofs of the affine camera calibration method, validated it by thoroughly analyzing the experimental data, and finally applied it to a telecentric fringe projection profilometry system, whose 3D reconstruction results prove the flexibility and validity of the proposed algorithm.

5. Conclusion

In this paper, we proposed a novel and flexible calibration method for telecentric 3D measurement systems. By analyzing the differences and similarities between telecentric and entocentric geometry, we showed that the extrinsic parameters of the telecentric system can be roughly calculated with the traditional planar calibration method; accurate intrinsic and extrinsic parameters are then obtained through a mathematical optimization process. Experiments clearly validated the proposed method, and the telecentric fringe projection profilometry system yields a relative accuracy of 0.022%. Although this is not as good as the 0.01% relative accuracy reported in [24], we believe the telecentric 3D imaging system can achieve better accuracy with the proposed calibration algorithm, for example by using a specially designed camera to enlarge the common focus area and thereby improve the calibration and reconstruction accuracy.

Acknowledgments

The authors would like to thank the National Natural Science Foundation of China for funding this work through projects No. 61462072, No. 61107001, No. 61405034 and No. 51475092. The support from the Research Fund for the Doctoral Program of Higher Education of China (No. 20130092110027) is also gratefully acknowledged.

References and links

1. Opto Engineering, “Telecentric lenses tutorial: basic information and working principles,” http://www.opto-engineering.com/resources/telecentric-lenses-tutorial.

2. N. Hollinghurst and R. Cipolla, “Uncalibrated stereo hand-eye coordination,” Image Vis. Comput. 12(3), 187–192 (1994). [CrossRef]  

3. A. Habed, A. Amintabar, and B. Boufama, “Affine camera calibration from homographies of parallel planes,” in Proceedings of IEEE International Conference on Image Processing (IEEE, 2010), pp. 4249–4252.

4. R. Manning and C. R. Dyer, “Affine camera calibration from moving objects,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2001), pp. 494–500.

5. L. Quan, “Self-calibration of an affine camera from multiple views,” Int. J. Comput. Vis. 19(1), 93–105 (1996). [CrossRef]  

6. D. S. Gorpas, K. Politopouslos, and D. Yova, “Development of a computer vision binocular system for non-contact small animal model skin cancer tumor imaging,” in Proceedings of European Conference on Biomedical Optics (International Society for Optics and Photonics, 2007), pp. 66291J.

7. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000). [CrossRef]  

8. Y. Yin, M. Wang, B. Z. Gao, X. Liu, and X. Peng, “Fringe projection 3D microscopy with the general imaging model,” Opt. Express 23(5), 6846–6857 (2015). [CrossRef]   [PubMed]  

9. Z. Chen, H. Liao, and X. Zhang, “Telecentric stereo micro-vision system: Calibration method and experiments,” Opt. Lasers Eng. 57, 82–92 (2014). [CrossRef]  

10. K. Haskamp, M. Kästner, and E. Reithmeier, “Accurate calibration of a fringe projection system by considering telecentricity,” Proc. SPIE 8082, 80821B (2011). [CrossRef]  

11. D. Li and J. Tian, “An accurate calibration method for a camera with telecentric lenses,” Opt. Lasers Eng. 51(5), 538–541 (2013). [CrossRef]  

12. D. Lanman, D. C. Hauagge, and G. Taubin, “Shape from depth discontinuities under orthographic projection,” in Proceedings of IEEE International Conference on Computer Vision Workshop (IEEE, 2009), pp. 1550–1557.

13. B. Li and S. Zhang, “Flexible calibration method for microscopic structured light system using telecentric lens,” Opt. Express 23(20), 25795–25803 (2015). [CrossRef]   [PubMed]  

14. B. Pan, L. Yu, and D. Wu, “High-accuracy 2d digital image correlation measurements with bilateral telecentric lenses: Error analysis and experimental verification,” Exp. Mech. 53(9), 1719–1733 (2013). [CrossRef]  

15. C. Steger, M. Ulrich, and C. Wiedemann, Machine Vision Algorithms and Applications (Qinghua University, 2008).

16. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision (Cambridge University, 2003).

17. J. J. Moré, “The Levenberg-Marquardt algorithm: implementation and theory,” Lecture Notes in Mathematics 630, 105–116 (1978). [CrossRef]  

18. S. Zhang and P. S. Huang, “Novel method for structured light system calibration,” Opt. Eng. 45(8), 083601 (2006). [CrossRef]  

19. J. Heikkila, “Moment and curvature preserving technique for accurate ellipse boundary detection,” in Proceedings of IEEE International Conference on Pattern Recognition (IEEE, 1998), pp. 734–737.

20. D. Krishnan, T. Tay, and R. Fergus, “Blind deconvolution using a normalized sparsity measure,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2011), pp. 233–240.

21. A. Datta, J. S. Kim, and T. Kanade, “Accurate camera calibration using iterative refinement of control points,” in Proceedings of IEEE International Conference on Computer Vision Workshop (IEEE, 2009), pp. 1201–1208.

22. T. Hoang, B. Pan, D. Nguyen, and Z. Wang, “Generic gamma correction for accuracy enhancement in fringe-projection profilometry,” Opt. Lett. 35(12), 1992–1994 (2010). [CrossRef]   [PubMed]  

23. W. Li, Z. Wei, and G. Zhang, “Affine calibration based on invariable extrinsic parameters for stereo light microscope,” Opt. Eng. 53(10), 102105 (2014). [CrossRef]  

24. M. Vo, Z. Wang, B. Pan, and T. Pan, “Hyper-accurate flexible calibration technique for fringe-projection-based three-dimensional imaging,” Opt. Express 20(15), 16926–16941 (2012). [CrossRef]  

