## Abstract

A fisheye lens can provide a wide view of over 180°, which gives it prominent advantages in three-dimensional reconstruction and machine vision applications. However, the severe deformation in its images limits the fisheye lens's usage. To overcome this obstacle, a new rectification method named DDM (Digital Deformation Model) is developed based on the two-dimensional perspective transformation. The DDM is a digital grid representation of the deformation of each pixel on the CCD chip, built by interpolating the difference between the actual image coordinate and the pseudo-ideal coordinate of each mark on a control panel. The method obtains the pseudo-ideal coordinates through a two-dimensional perspective transformation by fixing the deformations of four marks on the image. Its main advantages are that it does not rely on the optical principle of the fisheye lens and that it requires relatively little computation. In applications, equivalent pinhole images can be obtained by correcting fisheye images with the DDM.

© 2012 OSA

## 1. Introduction

Since the view angle of a fisheye lens can reach 180° or more, the image of any object in front of the lens can theoretically be captured. This characteristic attracts growing interest from the fields of panoramic imaging and photogrammetric measurement [1–3]. For example, to acquire the panoramic image of a large scene with a conventional lens, multiple images must be obtained from different positions or angles, and mosaicking those images is rather complex; the same task can easily be fulfilled with a fisheye lens. However, there are still relatively few reports on its usage for measurement purposes in machine vision and three-dimensional reconstruction. The reason is that a fisheye lens causes severe deformation in the imaging process, and an accurate, generic, and easy-to-use deformation rectification approach is still absent.

Various efforts have been made on the calibration of panoramic camera systems using fisheye lenses. Parian corrected fisheye camera images by forcing straight lines in the scene to remain straight in the image [4]. Besides straight lines, other features such as circles, coplanar features, and the epipolar constraint can all serve as constraints for rectifying fisheye images [5]. Because extracting and recognizing these features on images is difficult, fully automatic rectification remains challenging.

Most other methods try to describe the actual imaging process with mathematical functions and then optimize the function parameters against a high-accuracy control panel [6, 7]. Some earlier researchers modeled the imaging procedure from the physical principle of the fisheye lens and inverted the imaging process to obtain images meeting the pinhole condition [8–10]. Because different lenses require different models for rectification, several studies attempted to establish a generalized model [11–15]. Kannala and Brandt [13] proposed a model for all kinds of fisheye lenses based on a fifth-order polynomial, which represents a typical general model. Besides the deformation caused by the projection geometry of the fisheye lens, most methods also consider distortion terms, i.e., deviations from the theoretical imaging model. As a special case, the deformation of a fisheye lens is a generalization that includes the non-perspective projection and all kinds of distortions that violate the ideal imaging model. Gennery used the same set of conventional lens distortion parameters and, with a special mathematical function in calibration, obtained a relative precision of about 1:10000 after bundle adjustment [14]. Kannala and Brandt [13] further extended the general imaging model by considering the fisheye lens's deviations from precise radial symmetry and the inaccuracy introduced in projection. They introduced a basic model containing six internal camera parameters and determined those parameters in four steps by viewing a calibration plane with control points at known positions; in their experiments, they obtained an accuracy of about 1/2500 after least-squares adjustment [13]. Moreover, in these methods, outliers that disagree excessively with the other data are removed by automatic editing based on analysis of residuals.

These methods are essentially similar in that they approximate the imaging process with a mathematical function. However, a mathematical function can only express the symmetric deformations; other imperfections in the imaging system are not covered, which leaves the usual methods theoretically and practically incomplete. Another drawback is their usually high computational burden. To provide a general and easy-to-use method, this paper investigates the strategy of the DDM (Digital Deformation Model). Based on the two-dimensional perspective transformation, an equivalent pinhole image can be obtained by correcting all deformations of the fisheye camera imaging process.

## 2. Fisheye lens model

This section briefly discusses the theoretical models of conventional and fisheye lenses. The perspective projection of a pinhole camera can be described simply by

$$r = f\tan\theta, \tag{1}$$

where *θ* is the angle between the optical axis and the incoming ray, *r* is the distance between the image point and the principal point, and *f* is the focal length. The most common model for a fisheye lens is the equidistance projection, $r = f\theta$. Schematic descriptions of the different projections for fisheye lenses are illustrated in Fig. 1(a), and the difference between a pinhole lens and a fisheye lens is shown in Fig. 1(b) [13]. Compared with the perspective projection, the image points of the non-perspective projections lie closer to the principal point; this property gives the fisheye lens a larger view angle than a common lens. At the same time, the actual imaging surface of a fisheye lens is a hemisphere rather than the plane of a pinhole lens. Therefore, the deformation of a fisheye lens is mainly caused by projecting the image on the hemispherical surface onto the actual imaging plane.
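As a numerical illustration of the two projection models (function names and the sample focal length are ours, not the paper's), the following sketch compares the radial image position of the same incoming ray under the pinhole and equidistance models:

```python
import math

def pinhole_radius(theta, f):
    """Perspective (pinhole) projection of Eq. (1): r = f * tan(theta)."""
    return f * math.tan(theta)

def equidistance_radius(theta, f):
    """Equidistance fisheye projection: r = f * theta."""
    return f * theta

# For any off-axis ray the equidistance image point lies closer to the
# principal point than the pinhole one (tan(theta) > theta for theta > 0),
# and it stays finite at theta = 90 deg, where the pinhole radius diverges.
f = 10.5  # focal length in mm, matching the lens used in Section 5
r_pinhole = pinhole_radius(math.radians(60), f)
r_fisheye = equidistance_radius(math.radians(60), f)
```

This is why rays near 90° off-axis, which a pinhole model cannot image at all, still map to finite radii under the fisheye projections of Fig. 1(a).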

According to the principle of perspective projection in Eq. (1), an ideal (distortion-free) point is the projection of the object in homogeneous coordinates:

$$\lambda \begin{pmatrix}\hat{x}\\ \hat{y}\\ 1\end{pmatrix} = \begin{pmatrix} f_x & s & x_0\\ 0 & f_y & y_0\\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} R & T \end{pmatrix} \begin{pmatrix} X\\ Y\\ Z\\ 1 \end{pmatrix}, \tag{2}$$

where (*x*̂, *ŷ*) is the point coordinate on the image, (*X*, *Y*, *Z*) denotes the 3D object coordinate, *f*_{x}, *f*_{y} are the focal lengths along the *x* and *y* axes respectively, (*x*_{0}, *y*_{0}) is the image coordinate of the principal point, *s* is the skewness factor between the two axes of the image plane, *λ* is the scale coefficient, and *R*, *T* represent the rotation matrix and translation vector of the image. Writing $(U, V, W)^{T} = R\,(X, Y, Z)^{T} + T$, the pinhole imaging process can then be written as:

$$\hat{x} = \frac{f_x U + sV}{W} + x_0,\qquad \hat{y} = \frac{f_y V}{W} + y_0. \tag{3}$$

There are in total 11 unknown parameters (5 intrinsic and 6 extrinsic) in Eq. (3). Eliminating *λ* gives the fractional linear form

$$\hat{x} = \frac{A_1 X + A_2 Y + A_3 Z + A_4}{A_9 X + A_{10} Y + A_{11} Z + 1},\qquad \hat{y} = \frac{A_5 X + A_6 Y + A_7 Z + A_8}{A_9 X + A_{10} Y + A_{11} Z + 1}, \tag{4}$$

in which the same number of unknown parameters appear, so all intrinsic and extrinsic elements are implicitly contained in the transformation coefficients *A*_{1}, *A*_{2}, ⋯, *A*_{11}. Thus, Eq. (4) can be considered another expression of the perspective transformation. Unfortunately, in practice a pinhole lens does not exactly follow the designed model; in other words, image distortion is unavoidable. In most situations, the distortion is dominated by the radial components of the lens, especially by the first two terms. A real lens may also deviate from precise radial symmetry, and therefore some researchers supplement the distortion model with tangential parts. It has also been found that any more elaborate modeling not only fails to help but can cause numerical instability. Therefore, the radial and tangential models shown in Eq. (5) are usually used to estimate lens distortions:

$$\begin{aligned}\Delta x &= (x - x_0)\left(k_1 r^2 + k_2 r^4\right) + p_1\left[r^2 + 2(x - x_0)^2\right] + 2p_2 (x - x_0)(y - y_0),\\ \Delta y &= (y - y_0)\left(k_1 r^2 + k_2 r^4\right) + p_2\left[r^2 + 2(y - y_0)^2\right] + 2p_1 (x - x_0)(y - y_0),\end{aligned} \tag{5}$$

where $r^2 = (x - x_0)^2 + (y - y_0)^2$.

Then the relationship between the observed (distorted) image coordinate (*x*, *y*) and the ideal point (*x*̂, *ŷ*) is expressed as:

$$x = \hat{x} + \Delta x,\qquad y = \hat{y} + \Delta y. \tag{6}$$

In Eq. (5), (*x*_{0}, *y*_{0}) is the coordinate of the principal point and *k*_{1}, *k*_{2} are the radial distortion coefficients. Clearly, iterations are needed to optimize the parameters during calibration, and iterative computation is also necessary during image correction. Distortion correction obtains the grey value of each pixel from the original image; that is, for each pixel (*x*̂, *ŷ*) of the corrected image to be generated, we must compute the corresponding point (*x*, *y*) on the original image. According to Eqs. (5)–(6), this process is iterative. This is the case for a common pinhole camera. For a fisheye lens, the deformation (Δ*x*, Δ*y*) is a generalization that includes the non-perspective projection and all kinds of distortions that violate the ideal imaging model. The imaging model is much more complex than the above, which makes the correction process rather involved.
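The iteration mentioned above can be sketched as follows. This is a minimal illustration assuming a purely radial model with coefficients *k*_{1}, *k*_{2}; the helper names are ours:

```python
def radial_delta(xh, yh, x0, y0, k1, k2):
    """Radial part of Eq. (5), evaluated at the point (xh, yh)."""
    dx, dy = xh - x0, yh - y0
    r2 = dx * dx + dy * dy
    s = k1 * r2 + k2 * r2 * r2
    return dx * s, dy * s

def distort(xh, yh, x0, y0, k1, k2):
    """Forward model (Eq. (6)): observed point = ideal point + deformation."""
    ddx, ddy = radial_delta(xh, yh, x0, y0, k1, k2)
    return xh + ddx, yh + ddy

def undistort(x, y, x0, y0, k1, k2, iters=20):
    """Invert the forward model by fixed-point iteration: start from the
    observed point and repeatedly subtract the deformation evaluated at
    the current estimate of the ideal point."""
    xh, yh = x, y
    for _ in range(iters):
        ddx, ddy = radial_delta(xh, yh, x0, y0, k1, k2)
        xh, yh = x - ddx, y - ddy
    return xh, yh
```

For moderate distortion the iteration is a contraction and converges quickly, but it must be run for every pixel of every image; this per-pixel iterative cost is exactly what the DDM approach of the next section avoids.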

## 3. Digital deformation model

The DDM is intended to establish the deformation model of any imaging system based on a standard control panel. It is a three-dimensional model whose plane coordinates are the row and column of the CCD (Charge Coupled Device) chip and whose height is the deformation of the corresponding pixel, which may be the deformation along the *x* or *y* axis or in the radial direction. The height includes not only the deformation caused by the projection, the radial and tangential lens distortions, and errors in the electrical performance of the CCD, but also any other errors caused by the medium between object and lens. In short, the deformation value comprises all systematic and accidental errors that make image and object violate the collinearity constraint. The purpose of establishing the DDM of a camera is to express this total deformation and to correct images taken under similar conditions; corrected images satisfy the pinhole relationship to the object.

For ideal pinhole imaging, the image and object points are collinear. According to Eq. (4), the terms *A*_{3}*Z*, *A*_{7}*Z*, and *A*_{11}*Z* are constant when all objects lie on the same plane. In this case, after simplification, Eq. (4) can be rewritten as:

$$\hat{x} = \frac{B_1 X + B_2 Y + B_3}{B_7 X + B_8 Y + 1},\qquad \hat{y} = \frac{B_4 X + B_5 Y + B_6}{B_7 X + B_8 Y + 1}. \tag{7}$$

The relationship between *A*_{1}, *A*_{2}, ⋯, *A*_{11} and *B*_{1}, *B*_{2}, ⋯, *B*_{8} is as follows:

$$\begin{aligned}B_1 &= \frac{A_1}{A_{11}Z + 1}, & B_2 &= \frac{A_2}{A_{11}Z + 1}, & B_3 &= \frac{A_3 Z + A_4}{A_{11}Z + 1}, & B_4 &= \frac{A_5}{A_{11}Z + 1},\\ B_5 &= \frac{A_6}{A_{11}Z + 1}, & B_6 &= \frac{A_7 Z + A_8}{A_{11}Z + 1}, & B_7 &= \frac{A_9}{A_{11}Z + 1}, & B_8 &= \frac{A_{10}}{A_{11}Z + 1}.\end{aligned} \tag{8}$$

In Eq. (7), (*x*̂, *ŷ*) is a mark's ideal image coordinate and (*X*, *Y*) is the corresponding object coordinate; *B*_{1}, *B*_{2}, ⋯, *B*_{8} are the transformation coefficients, which can be computed exactly when the image and object coordinates of four marks are known. Because of the influence of all kinds of errors, the actual imaging process, including that of a fisheye lens, can be described by combining Eqs. (6) and (7):

$$x = \frac{B_1 X + B_2 Y + B_3}{B_7 X + B_8 Y + 1} + \Delta x,\qquad y = \frac{B_4 X + B_5 Y + B_6}{B_7 X + B_8 Y + 1} + \Delta y. \tag{9}$$

In Eq. (9), the deformation terms also involve *x*_{0} and *y*_{0} through Eq. (5), and these are constant. Absorbing the constants and renaming the coefficients, we easily obtain:

$$x - \Delta x = \frac{C_1 X + C_2 Y + C_3}{C_7 X + C_8 Y + 1},\qquad y - \Delta y = \frac{C_4 X + C_5 Y + C_6}{C_7 X + C_8 Y + 1}. \tag{10}$$

The relationship between the two groups of coefficients will not be detailed here.

Since the deformation (Δ*x*, Δ*y*) is unknown, the coefficients *C*_{1}, *C*_{2}, ⋯, *C*_{8} cannot be computed; the problem is ill-posed. To obtain a solution, our method artificially sets the deformations of four marks to zero and computes the coefficients *C*′_{1}, *C*′_{2}, ⋯, *C*′_{8} from Eq. (10). The pseudo-ideal coordinate (*x*′, *y*′) of each mark can then be obtained. Clearly, the pseudo-ideal image and the control panel satisfy the collinearity relationship. The difference between the pseudo-ideal image and the actual image is:

$$\Delta x = x - x',\qquad \Delta y = y - y'. \tag{11}$$
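With the four chosen marks' deformations set to zero, Eq. (10) reduces to the plain perspective transform of Eq. (7), and the eight coefficients follow from a small linear system. A self-contained sketch (the solver and function names are ours):

```python
def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def perspective_coeffs(obj_pts, img_pts):
    """Coefficients C'1..C'8 of the 2D perspective transform from four
    (X, Y) -> (x, y) correspondences, with their deformations set to zero."""
    A, b = [], []
    for (X, Y), (x, y) in zip(obj_pts, img_pts):
        # x * (C7*X + C8*Y + 1) = C1*X + C2*Y + C3, likewise for y
        A.append([X, Y, 1.0, 0.0, 0.0, 0.0, -x * X, -x * Y]); b.append(x)
        A.append([0.0, 0.0, 0.0, X, Y, 1.0, -y * X, -y * Y]); b.append(y)
    return solve_linear(A, b)

def apply_perspective(C, X, Y):
    """Pseudo-ideal image coordinate of an object point via Eq. (7)."""
    w = C[6] * X + C[7] * Y + 1.0
    return (C[0] * X + C[1] * Y + C[2]) / w, (C[3] * X + C[4] * Y + C[5]) / w
```

Applying `apply_perspective` with the recovered coefficients to every mark's object coordinates yields the pseudo-ideal coordinates (*x*′, *y*′).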

After obtaining the differences between the pseudo-ideal and actual coordinates of all marks, the deformations at the CCD grid vertexes are interpolated bilinearly from the deformations of the four nearest marks. Once the deformation of every pixel has been interpolated, the DDM of the camera is established.

For any image captured by this camera, the corrected coordinate of each pixel can be computed from Eq. (11), in which the deformation (Δ*x*, Δ*y*) is retrieved from the DDM. By the definition of the DDM, the correction applied to each pixel is not the true deformation, yet the corrected images still satisfy the pinhole condition with respect to the object; that is, an equivalent pinhole image is produced after correction with the DDM. In this sense, the DDM provides a flexible way to resolve the ill-posed problem of Eq. (10).
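Correction is then a lookup and a subtraction per pixel. A minimal sketch, with the DDM stored as two per-pixel arrays of Δ*x* and Δ*y* (this data layout is our assumption):

```python
def correct_point(ddm_dx, ddm_dy, x, y):
    """Equivalent-pinhole coordinate of the observed pixel (x, y):
    subtract the deformation stored in the DDM (Eq. (11) rearranged)."""
    return x - ddm_dx[y][x], y - ddm_dy[y][x]

def correct_all(ddm_dx, ddm_dy):
    """Corrected coordinates of every pixel of the original image."""
    h, w = len(ddm_dx), len(ddm_dx[0])
    return [[correct_point(ddm_dx, ddm_dy, x, y) for x in range(w)]
            for y in range(h)]
```

In practice the grey values are resampled at these corrected positions; the point is that no iteration is involved, unlike the inversion of Eqs. (5)–(6).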

## 4. Algorithm details

The main steps of establishing DDM are:

- Establish a two-dimensional control panel composed of a number of marks and obtain the marks' spatial coordinates (*X*_{i}, *Y*_{i}), *i* = 1, 2, ⋯, *N*. The marks' shape and physical properties are the main considerations. As in many applications, circular marks are usually adopted for their isotropy; empirically, highly contrasted images are obtained when reflective materials are used to make the marks. High-accuracy measurement equipment such as a theodolite helps increase the accuracy of the marks' spatial coordinates.
- Capture an image of the control panel with the fisheye camera. The control panel should fill the whole field of view and the image contrast must be sufficient.
- Segment each mark and obtain its image coordinates (*x*_{i}, *y*_{i}), *i* = 1, 2, ⋯, *N*. For a symmetric mark, its grey gravity center serves as its location on the image.
- Choose four marks near the corners of the image, at approximately equal distances from the image center, and set their deformations to zero. The purpose of fixing the deformations of four marks is to compute the pseudo-ideal transformation coefficients and thus obtain a pseudo-ideal image. In theory, the four points can be chosen arbitrarily; however, when the four marks lie at the same distance from the image center, their actual deformations are close, so the pseudo-ideal image will be approximately parallel to the actual image. In other words, we want the exterior parameters of the pseudo-ideal image to be nearly equal to those of the actual image, so that when the DDM is used to correct other images taken by this camera, the corrected image appears to be taken from the same position as the actual image. The corner points are chosen simply because their deformations are the largest in the whole image.
- Compute the perspective transformation coefficients *C*′_{1}, *C*′_{2}, ⋯, *C*′_{8} from Eq. (10) by least-squares adjustment.
- Compute the pseudo-ideal image coordinates (*x*′_{i}, *y*′_{i}), *i* = 1, 2, ⋯, *N*, of all marks using Eq. (7) with the coefficients *C*′_{1}, *C*′_{2}, ⋯, *C*′_{8}. The pseudo-ideal image strictly satisfies the 2D perspective transformation with respect to the control panel.
- Compute the difference between the actual and pseudo-ideal coordinates of each mark.
- Interpolate the deformation of each integer pixel by the bilinear rule. To do so, we build a rectangular mesh over all marks on the image; the nearest marks of each pixel are then found by determining which rectangle the pixel lies in. Once the deformations of all pixels are interpolated, the DDM is established.
- Correct images obtained under the same conditions using the DDM.
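The bilinear interpolation step above can be sketched as follows (the rectangle representation is our assumption):

```python
def bilinear_deformation(corners, u, v):
    """Interpolate a deformation from the four marks at the corners of a
    mesh rectangle; (u, v) in [0, 1] are the pixel's normalized offsets
    inside that rectangle."""
    d00, d10, d01, d11 = corners  # deformations at (0,0), (1,0), (0,1), (1,1)
    return (d00 * (1 - u) * (1 - v) + d10 * u * (1 - v)
            + d01 * (1 - u) * v + d11 * u * v)

def interpolate_pixel(x_min, y_min, x_max, y_max, corners, px, py):
    """Deformation of the pixel (px, py) lying inside the mesh rectangle
    whose corner marks carry the deformations `corners`."""
    u = (px - x_min) / (x_max - x_min)
    v = (py - y_min) / (y_max - y_min)
    return bilinear_deformation(corners, u, v)
```

Running this once for the Δ*x* values and once for the Δ*y* values of every integer pixel fills in the two layers of the DDM.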

Compared with methods based on mathematical functions, the DDM requires no complex iterative parameter optimization during either establishment or application, which greatly reduces the computational cost.

## 5. Experiments and results

In this section, the method is demonstrated experimentally. Both a fisheye lens and a conventional lens are used to validate the DDM. Based on a three-dimensional control panel, a wide-view lens is used to estimate the accuracy of the DDM.

#### 5.1. Fisheye images rectification using DDM

A control network with 1944 marks was established, lying approximately on a vertical plane; the height differences of the marks relative to the plane are negligible. The accuracy of the marks' object coordinates is ±0.1 *mm*. In this experiment, we establish the DDM of a camera with a resolution of 3024 × 2016, equipped with a Nikon 10.5 *mm* F/2.8G ED fisheye lens with a 180° angle of view. Fig. 2(a) is the image of the control panel taken with this fisheye camera. Fig. 2(b) shows the DDM in the radial direction, whose height is $H=\sqrt{d_{x}^{2}+d_{y}^{2}}$, where *d*_{x}, *d*_{y} are the deformation values along the *x* and *y* axes respectively.

Figure 3(a) shows the image of a building captured by this camera, in which many deformations are present. Fig. 3(b) is the image corrected with the DDM, which conforms to the pinhole perspective transformation.

Figure 4 shows another example of fisheye image rectification using the DDM. Fig. 4(a) is the original image of a high building. Fig. 4(b) is the corrected image, in which the straight edges of the building remain straight; this means Fig. 4(b) is a perspective image of the building. Fig. 4(c) is the building's facade image obtained according to the theory of vanishing points [16].

#### 5.2. Quantitative evaluation

In Fig. 5(a), the image of a line is taken by a Finepix S1 Pro camera with a conventional 28 mm pinhole lens. The line is slightly bent in the image (white curve), mainly because of the radial distortion of the lens. The blue straight line connects the two end points of the curve. The image corrected with the DDM is shown in Fig. 5(b), in which the white curve and the blue line almost overlap. For both figures we further compute the distance between the blue line and the white curve. As shown in Table 1, (*x*, *y*) is the coordinate of a point on the white curve and *d* is its distance to the blue straight line.

In order to estimate the DDM's accuracy, we captured three images of a three-dimensional control panel with this camera. First, the three images were corrected using the established DDM. Then the interior and exterior parameters (without distortion terms) of the three images were computed according to Eq. (3). Lastly, we calculated the object coordinates of 20 marks from their coordinates on the corrected images. The mean errors between the calculated coordinates and the original data are *M*_{X} = 0.44 *mm*, *M*_{Y} = 0.42 *mm*, *M*_{Z} = 0.94 *mm*. In establishing the DDM, the camera's optical axis and the 2D control panel were approximately perpendicular to each other, with their intersection close to the center of the panel; the distance between camera and panel was about 0.5 *m*, since the camera has a wide view and the panel should fill the whole image. In measuring the 3D control panel, the distance from camera to object was about 2 *m*, and the size of the three-dimensional control panel is about 2.5 *m* × 2 *m* × 2 *m*. The relative accuracies are therefore *X*: 1/5000, *Y*: 1/5000, *Z*: 1/2000.

## 6. Conclusion

This paper introduces a novel approach to correcting the deformation of fisheye lenses based on the two-dimensional perspective transformation. The DDM obtains the deformation of each pixel on the CCD by calculating the difference between the real coordinate and the pseudo-ideal coordinate of marks on a standard panel. In applications, an equivalent pinhole image can be obtained by correcting the original image with the DDM. The main advantages are that the approach does not rely on the optical construction of the lens and that the algorithm is relatively simple, which together make it a practical approach for fisheye image rectification.

The limitation of our method is that it cannot recover parameters such as the focal length, the principal point, and the skewness factor; it can therefore be taken as a step preceding camera calibration. Even so, the method remains useful, because these parameters may not be needed in some applications, and estimating the interior parameters of the corrected images is relatively simple. The DDM can extend the usage of fisheye lenses in computer vision, photogrammetric measurement, and related fields. Improving the accuracy will be our future work.

## Acknowledgments

This study was supported by the Natural Science Fund of P. R. China under contract 41171289 and the Special Fund of China Central Collegiate Basic Scientific Research Bursary under contract 2011QN239.

## References and links

**1. **H. Bakstein and T. Pajdla, “Panoramic mosaicing with a field of view lens,” in *Proceedings of IEEE Conference on Omnidirectional Vision* (IEEE, 2002), pp. 60–67.

**2. **Y. Jia, H. Lu, and A. Xu, “Fish-eye lens camera calibration for stereo vision system,” Chinese J Comput. **23**(11), 1215–1219 (2002).

**3. **J. Willneff and O. Wenisch, “The calibration of wide-angle lens cameras using perspective and non-perspective projections in the context of realtime tracking applications,” Proc. SPIE **8085**, 80850S–80850S-9 (2011). [CrossRef]

**4. **A. Parian and A. Gruen, “Panoramic camera calibration using 3D straight lines,” presented at ISPRS Panoramic Photogrammetry Workshop, Berlin, Germany, 24–25 Feb. 2005.

**5. **S. Abraham and W. Forstner, “Fish-eye-stereo calibration and epipolar rectification,” ISPRS J Photogramm. **59**(5), 278–288 (2005). [CrossRef]

**6. **P. Sturm and S. Maybank, “On plane-based camera calibration: a general algorithm, singularities, applications,” in *Proceedings of IEEE Conference on Computer Vision and Pattern Recognition* (IEEE, 1999), pp. 432–437.

**7. **A. Heikkil, “Geometric camera calibration using circular control points,” IEEE T Pattern Anal. **22**(10), 1066–1077 (2000). [CrossRef]

**8. **M. Grossberg and S. Nayar, “The raxel imaging model and ray-based calibration,” Int J Comput Vision **61**(2), 119–137 (2005). [CrossRef]

**9. **I. Akio, Y. Kazukiyo, M. Nobuya, and K. Yuichiro, “Calibrating view angle and lens distortion of the nikon fisheye converter FC-E8,” J Forest Res. **9**(3), 177–181 (2004). [CrossRef]

**10. **Z. Zhang, “A flexible new technique for camera calibration,” IEEE T Pattern Anal. **22**(11), 1330–1334 (2000). [CrossRef]

**11. **D. Schneider, E. Schwalbe, and H. Maas, “Validation of geometric models for fisheye lenses,” ISPRS J Photogramm. **64**(3), 259–266 (2009). [CrossRef]

**12. **J. Kannala and S. Brandt, “A generic camera calibration method for fish-eye lenses,” in *Proceedings of International Conference on Pattern Recognition* (IEEE, 2004), pp. 10–13.

**13. **J. Kannala and S. Brandt, “A generic camera model and calibration method for conventional, wide-angle, and fish-eye lenses,” IEEE T Pattern Anal. **28**(8), 1335–1340 (2006). [CrossRef]

**14. **D. Gennery, “Generalized camera calibration including fish-eye lenses,” Int J Comput Vision **68**(3), 239–266 (2006). [CrossRef]

**15. **V. Orekhov, B. Abidi, C. Broaddus, and M. Abidi, “Universal camera calibration with automatic distortion model selection,” in *Proceedings of International Conference on Image Processing* (IEEE, 2007), pp. 397–400.

**16. **Z. Kang, L. Zhang, and S. Zlatanova, “An automatic mosaicking method for building facade texture mapping using a monocular close-range image sequence,” ISPRS J Photogramm. **65**(3), 282–293 (2010). [CrossRef]