Abstract

In photogrammetric applications, camera calibration and orientation procedures are a prerequisite for the extraction of precise and reliable 3D metric information from images. This study presents a method for the fully automatic calibration of color digital cameras using color targets. Software developed in the Borland C++ Builder programming language is used to apply the method. With this software, the calibration process is carried out in three stages: firstly, at least four of the six color targets (whose 3D object coordinates are known) on each image of the test field are detected and the approximate exterior orientation parameters are computed. Then, the remaining target points are measured at the approximate image locations determined from these parameters and the 3D object point coordinates. Finally, the calibration parameters are determined using a self-calibration bundle adjustment technique. The colored targets within the test field are assigned labels corresponding to their color. For the detection of the color targets and the computation of the approximate exterior orientation elements, the HSV color space was used together with a space resection computation performed for all the possible color labels of the targets. To test the proposed method, fully automatic calibration was carried out using six different digital cameras. The calibration accuracies achieved in object space were within the range 0.006 to 0.030 mm; the accuracies achieved in image space were within the range 0.14 to 0.51 µm.

©2011 Optical Society of America

1. Introduction

Close-range photogrammetry is today mostly based on the use of non-metric digital cameras. For use in photogrammetric applications, such cameras must be thoroughly modeled and calibrated to ensure high levels of accuracy. A camera is considered calibrated if the principal distance, principal point offset and lens distortion parameters are known. In many applications, especially in computer vision (CV), only the focal length is recovered, while for precise photogrammetric measurements all the calibration parameters are generally employed [1].

The accuracy of photogrammetric triangulation is fundamentally a function of, firstly, the measurement resolution of the CCD camera and, secondly, the geometry and number of the intersecting bundles of rays (one per image) forming the optical triangulation network. The measurement resolution is, in turn, a function of three variables: the accuracy of 2D image coordinate measurement (1%-4% of a pixel for well-defined targets), the focal length of the camera lens, and the fidelity of the mathematical model of calibration that describes the deviations of the physical imaging process from a geometrically ideal perspective projection [2].

Many different algorithms, based on perspective or projective camera models, have been used for camera calibration over the years. The most popular of these methods is self-calibration bundle adjustment, which was first introduced to close-range photogrammetry in the early 1970s. The sensor interior and exterior orientation, the XYZ object point coordinates and the additional parameters can all be determined by means of self-calibration bundle adjustment. The interior orientation is defined by the location of the projection centre in the image coordinate system, while the exterior orientation is defined by the location of the projection centre in object space and the orientation of the image coordinate system with respect to the object coordinate system. The additional parameters model the geometric effects of the deviation of the physical reality from perspective geometry. In controlled tests, self-calibration has yielded up to 10-fold improvements over conventional calibration approaches for close-range CCD-based camera systems [2].

This study examined the potential for fully automatic self-calibration of consumer-grade color digital cameras using color targets. For photogrammetric applications, digital cameras are currently calibrated automatically either via an exterior orientation (EO) device combined with coded targets, or via coded targets alone. The use of color targets provides an alternative approach to the determination of automatic sensor orientation. The color target method does not require an EO device and leads to a robust exterior orientation process. In addition, the method requires a simpler target design than the more complicated geometric arrangements of traditional coded targets designed for use with panchromatic imagery, and it exploits the powerful attribute of RGB color [4].

This study uses fully automatic calibration software that was developed in the Borland C++ Builder programming language. Camera calibration was carried out using a test field of marked points, each with known 3D object coordinates. Within the test field, two types of targets were used: white dots and colored dots, both on a black background. Red, green, blue, yellow, magenta and cyan were used for the colored targets. The software detects at least four of the six color targets within each image to determine the approximate exterior orientation elements and measure the image coordinates of the targets. HSV (hue, saturation and value [brightness]) color space was used together with a space resection computation method to detect the color targets and calculate the approximate exterior orientation elements. Target colors are detected from the S and V values of pixels and other target parameters. Each colored target is assigned either one or two color labels, according to the hue value of the centroid pixel located at the target centre. The 3D object coordinates are known for each color target. The space resection computation is performed for all possible color labels of the color targets, the optimal solution being the one with the lowest root mean square (RMS) of the image coordinate residuals. The software then measures the image coordinates of the remaining target points at the approximate image locations determined from the initial values of the exterior orientation parameters and the 3D object point coordinates; the calibration parameters are finally determined using the self-calibration bundle adjustment technique. To test the proposed method, six different digital cameras were fully automatically calibrated using the developed software. Each calibration was completed within 1.5 to 3 minutes. The accuracy in object space was within the range 0.006 to 0.030 mm and the accuracy in image space was within the range 0.14 to 0.51 µm.

The process steps of the self-calibration bundle adjustment carried out with the developed software are detailed in the following sections. Detection of the targets in the photographic images and measurement of the image coordinates are described. Finally, the calibration of different digital cameras is explained and the results are summarized.

2. Color image scanning

The method proposed for the fully automatic self-calibration bundle adjustment is based on the measurement of image coordinates and the detection of at least four colored targets on each image of the test field. The 3D object coordinates of the targets are known. The detection of the color target images involves unambiguously identifying color targets within a scene and providing a coarse location for a local window. The location of the target image is the second process, which precisely and accurately determines the target image centre within that local window [5].

The detection of color target regions used the HSV color space together with a cross-correlation template matching technique. In the HSV color space, hue, saturation, and brightness value are used as coordinate axes. By projecting the RGB unit cube along its diagonal from white to black, a hexacone results that forms the topside of the HSV pyramid. The hue H is indicated as an angle around the vertical axis. In the HSV color space, the primary colors are separated by 120°. The secondary colors are offset 60° from the primaries, so that the angle between secondaries is also 120° (Fig. 1). Primary and secondary colors were also used for the color targets of the test field. The saturation S is a number between 0 on the central axis (the V-axis) and 1 on the sides of the pyramid. The brightness value V (or B) lies between 0 at the apex of the pyramid and 1 on the base [6].

Fig. 1 Hexacone representation of HSV color space.
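As a concrete illustration of this color space, the following C++ sketch converts an 8-bit RGB pixel into HSV, with the hue in degrees and S and V normalized to [0, 1]. It is a standard hexcone-model conversion given here for clarity; the implementation inside the developed software is not published, so the function name and types are assumptions.

```cpp
#include <algorithm>
#include <cmath>

struct HSV { double h, s, v; };   // h in [0, 360) degrees; s, v in [0, 1]

// Standard RGB-to-HSV (hexcone) conversion for an 8-bit pixel.
HSV rgbToHsv(unsigned char r8, unsigned char g8, unsigned char b8) {
    const double r = r8 / 255.0, g = g8 / 255.0, b = b8 / 255.0;
    const double v = std::max({r, g, b});       // brightness value V
    const double c = v - std::min({r, g, b});   // chroma
    double h = 0.0;                             // hue is undefined when c == 0
    if (c > 0.0) {
        if (v == r)      h = 60.0 * std::fmod((g - b) / c + 6.0, 6.0);
        else if (v == g) h = 60.0 * ((b - r) / c + 2.0);
        else             h = 60.0 * ((r - g) / c + 4.0);
    }
    const double s = (v > 0.0) ? c / v : 0.0;   // saturation S
    return {h, s, v};
}
```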

The target search algorithm of the software is implemented as follows. The process starts at the first pixel line of the image and goes through every pixel in that line. If the current pixel position does not fulfill the SV value criteria defined in the template parameters of the software (Fig. 2), the algorithm proceeds to the next pixel position in the line. If the current pixel position fulfills the SV value criteria, the hue (H) value of the pixel is consulted. The current pixel position is assigned one or two color labels, according to the similarity between the observed hue value of the pixel and the known hue values of the six reference colors; the similarity threshold is defined within the template parameters of the software. The next step is to apply the template matching technique by cross-correlation.

Fig. 2 Image of target parameters created by the software.
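The hue-similarity labeling described above might be sketched as follows, reusing the rgbToHsv helper from the previous listing. The reference hues, the wrap-around hue distance, and the rule producing a second label are illustrative assumptions made for this sketch; the software's actual criteria are whatever is set in its template parameters (Fig. 2).

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Reference hues (degrees) of the six primary and secondary target colors:
// 0 red, 1 yellow, 2 green, 3 cyan, 4 blue, 5 magenta.
const double kRefHue[6] = {0.0, 60.0, 120.0, 180.0, 240.0, 300.0};

// Angular hue difference, wrapped so that 350 deg and 10 deg differ by 20 deg.
double hueDiff(double a, double b) {
    const double d = std::fabs(a - b);
    return std::min(d, 360.0 - d);
}

// Candidate color labels of a pixel that already fulfills the S and V value
// criteria. maxHueDiff is the similarity threshold from the template
// parameters; a hue lying near the boundary between two reference colors
// would receive both labels (a hypothetical rule for this sketch).
std::vector<int> candidateLabels(double hue, double maxHueDiff) {
    std::vector<int> labels;
    for (int i = 0; i < 6 && labels.size() < 2; ++i)
        if (hueDiff(hue, kRefHue[i]) <= maxHueDiff)
            labels.push_back(i);
    return labels;
}
```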

In this method, the basic idea is to measure the similarity between the template and the matching window in terms of their correlation factor. This correlation coefficient (r) is computed from the standard deviations σ1 and σ2 of the grey-level densities g1 and g2 in both areas, and from the covariance σ12 between the densities in the two areas, as shown in Eq. (1) [7]:

$$ r = \frac{\sigma_{12}}{\sigma_1 \sigma_2} = \frac{\sum (g_1 - \bar{g}_1)(g_2 - \bar{g}_2)}{\sqrt{\sum (g_1 - \bar{g}_1)^2 \sum (g_2 - \bar{g}_2)^2}}, \tag{1} $$
where ḡ1 and ḡ2 represent the arithmetic means of the densities in the target area and in the corresponding section of the search area, respectively.

In each pixel position that meets the HSV value criteria, the software calculates the correlation coefficient between the template window and the corresponding part of the related image, according to Eq. (1). If the computed value is greater than the minimum correlation coefficient defined by the template parameters of the software (Fig. 2), it indicates a possible target image in that position of the related image.
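A direct implementation of Eq. (1) is straightforward; in this sketch the template and the image patch are passed as flat, equally sized grey-value arrays, a representation chosen here for brevity.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Normalized cross-correlation coefficient of Eq. (1) between a template
// window g1 and an equally sized patch g2 of the search image.
double correlation(const std::vector<double>& g1, const std::vector<double>& g2) {
    const std::size_t n = g1.size();            // assumes g2.size() == n
    double m1 = 0.0, m2 = 0.0;
    for (std::size_t i = 0; i < n; ++i) { m1 += g1[i]; m2 += g2[i]; }
    m1 /= n; m2 /= n;                           // mean grey values
    double s12 = 0.0, s11 = 0.0, s22 = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        const double d1 = g1[i] - m1, d2 = g2[i] - m2;
        s12 += d1 * d2;                         // covariance term
        s11 += d1 * d1;                         // variance terms
        s22 += d2 * d2;
    }
    return s12 / std::sqrt(s11 * s22);          // r in [-1, 1]
}
```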

Once the target images are identified, a second computation procedure is required to locate the centre of the target image within the digital image frame with sub-pixel accuracy. This second computation stage consists of a preprocessing phase and the actual centre calculation phase [5,8].

The first step in the preprocessing phase is the subtraction of a local threshold intensity value. The threshold subtraction is based on the supposition that there will always be some background noise in any digital image. Threshold values for target location are generally determined dynamically, based on local conditions within the window, and can be set by a number of techniques. The software uses a technique proposed by Shortis [9], based on a statistical analysis of the distribution of the intensities of the window edge pixels. In this method, the target is assumed to be centered in the window (its pixel location having been determined by the cross-correlation template matching) and the edge pixels are therefore representative of the background noise. The pixels at the edge of the local window at each target are used to compute a mean and a standard deviation of the grey values of the noise. The addition of three standard deviations to the mean gives the minimal threshold level required to remove the background [5,9].
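The edge-pixel statistic of Shortis [9] reduces to a few lines of code; the row-major window layout used here is an assumption of this sketch.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Local threshold after Shortis [9]: mean plus three standard deviations of
// the grey values of the border pixels of a w x h local window (row-major),
// which are assumed to contain only background noise.
double localThreshold(const std::vector<double>& win, int w, int h) {
    double sum = 0.0, sumSq = 0.0;
    int n = 0;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            if (x == 0 || y == 0 || x == w - 1 || y == h - 1) {  // border only
                const double g = win[y * w + x];
                sum += g; sumSq += g * g; ++n;
            }
    const double mean = sum / n;
    const double var = std::max(0.0, sumSq / n - mean * mean);
    return mean + 3.0 * std::sqrt(var);  // minimal level removing the background
}
```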

The second step in the preprocessing phase is the segmentation process, which aims to isolate the target image as a contiguous area of above-threshold pixels. Scanning the pixels outward from the target image centre, the target image edge is detected; the edge detection criterion is the first pixel intensity value below the threshold. Once the image edge is detected, all subsequent pixels encountered outward to the window edge are assumed to be non-blob pixels and are set to zero intensity. Once isolated by the blob test, the detected image can be subjected to a number of geometric tests to ascertain whether it is a true or a false target. The geometric tests use knowledge of the expected size and shape of the target images, defined in the template parameters of the software. The size range criterion rejects targets that are larger or smaller than the specified limits, and a ratio describing the extents of the blob in two perpendicular directions is used to test the shape of the target images [5,9].
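The size and shape criteria might be realized along the following lines, with the bounding-box aspect ratio standing in for the ratio of extents in two perpendicular directions; the limit values correspond to the template parameters of the software (Fig. 2), and the exact tests it applies are not published, so this is an illustrative sketch.

```cpp
#include <algorithm>

// Bounding box and pixel count of a segmented blob.
struct Blob { int minX, maxX, minY, maxY, pixelCount; };

// Illustrative size and shape tests on a candidate target blob.
bool isPlausibleTarget(const Blob& b, int minPix, int maxPix, double maxRatio) {
    if (b.pixelCount < minPix || b.pixelCount > maxPix)
        return false;                                        // size range criterion
    const double ex = b.maxX - b.minX + 1.0;                 // extents in two
    const double ey = b.maxY - b.minY + 1.0;                 // perpendicular directions
    return std::max(ex, ey) / std::min(ex, ey) <= maxRatio;  // shape criterion
}
```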

The final step in the target location procedure is the calculation of the sub-pixel location of the target image centre. For this purpose, a density-weighted centroiding approach is used:

$$ \begin{bmatrix} x_0 \\ y_0 \end{bmatrix} = \frac{\sum_{i=1}^{n} \sum_{j=1}^{m} g_{ij} \begin{bmatrix} x_{ij} \\ y_{ij} \end{bmatrix}}{\sum_{i=1}^{n} \sum_{j=1}^{m} g_{ij}}. \tag{2} $$

Here, xij and yij are the row and column coordinates of pixels within the target blob, gij is the corresponding grey value, and x0, y0 are the final centroid coordinates [5,8].
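Equation (2) is simply a grey-value-weighted mean of the blob pixel coordinates. A minimal sketch, with a hypothetical Pixel record for the above-threshold blob pixels:

```cpp
#include <vector>

struct Pixel { int x, y; double g; };  // blob pixel with above-threshold grey value

// Density-weighted centroid of Eq. (2): the grey values act as weights,
// giving the sub-pixel centre (x0, y0) of the target blob.
void centroid(const std::vector<Pixel>& blob, double& x0, double& y0) {
    double sx = 0.0, sy = 0.0, sg = 0.0;
    for (const Pixel& p : blob) {
        sx += p.g * p.x;
        sy += p.g * p.y;
        sg += p.g;   // total density
    }
    x0 = sx / sg;
    y0 = sy / sg;
}
```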

The mathematical model used in the bundle adjustment is an equation system that is non-linear with respect to the unknown parameters. For the least squares solution, the equation system must be linearized, and approximate values of the unknowns are therefore required.

Space resection is the determination of the position and orientation parameters of an image with respect to the object coordinate system. In the standard case, parameters are the object coordinates of the projection centre and the orientation angles describing the rotation from the object coordinate system to the image coordinate system.

A closed-form resection solution proposed by Munjy [10] was used to determine the approximate exterior orientation parameters. This approach requires four points with known image and object coordinates to generate a solution. The closed-form resection solution is based on the principle that the scale in a perspective photograph varies across the image plane. One explanation for this variation is that, during the imaging process in a frame camera, the third dimension (the z image coordinate) in image space is forced to remain equal to the focal length of the camera at each image point. Conversely, a constant scale in image space can be enforced if the camera focal length is allowed to vary across the image. If the scale between a set of observed image points, such as for some object space control points, is forced to remain unchanged, it will cause a shift in the image space coordinates, resulting in a new set of image coordinates and a different focal length (z image coordinate) at each control point. The new set of coordinates may be viewed as a representation of a scaled and rotated three-dimensional model of the object control points. A closed-form three-dimensional transformation between the ground coordinates and the newly formed three-dimensional coordinates then determines the camera spatial position and orientation parameters [10].

It is difficult to determine the colors of the targets accurately under varying environmental illumination conditions. To resolve this issue, the closed-form resection solution is performed for all combinations of the single or two color labels assigned to the targets in the image scanning stage. The optimal solution is the one that fulfills the limit value criterion of the software (RMS < 0.1 µm) on the RMS values calculated from the image coordinate residuals, and it yields the initial exterior orientation elements of the image. After the initial exterior orientation elements are determined, the remaining target points of the test field are measured at the approximate image locations determined from the exterior orientation parameters and the 3D object point coordinates. Template matching by cross-correlation and the density-weighted centroiding approach are used to measure the image coordinates. Figure 3 shows the software image obtained after the resection drive-back process.

Fig. 3 Software image obtained after the resection drive-back process.
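The exhaustive search over the label combinations can be sketched as follows. The closed-form resection of Munjy [10] is abstracted here as a callable that evaluates one label assignment and returns the RMS of the image coordinate residuals; its implementation and the container types are assumptions of this illustration, not the software's interface.

```cpp
#include <cstddef>
#include <functional>
#include <limits>
#include <vector>

// Enumerate every assignment of one candidate label per detected target,
// run the space resection for each, and keep the assignment with the
// lowest RMS of the image coordinate residuals.
double bestLabelAssignment(
        const std::vector<std::vector<int>>& candidates,   // 1-2 labels per target
        const std::function<double(const std::vector<int>&)>& resectRms,
        std::vector<int>& best) {
    std::vector<int> current(candidates.size());
    double bestRms = std::numeric_limits<double>::max();
    std::function<void(std::size_t)> recurse = [&](std::size_t i) {
        if (i == candidates.size()) {              // one complete assignment
            const double rms = resectRms(current);
            if (rms < bestRms) { bestRms = rms; best = current; }
            return;
        }
        for (int label : candidates[i]) {          // try each candidate label
            current[i] = label;
            recurse(i + 1);
        }
    };
    recurse(0);
    return bestRms;   // accepted if below the software's limit (RMS < 0.1 µm)
}
```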

3. Self-calibration

The mathematical model of the self-calibration bundle adjustment method is based on the collinearity condition implicit in the perspective transformation between image and object space:

$$ x - x_0 + \Delta x = -c\,\frac{R_1}{R_3}, \qquad y - y_0 + \Delta y = -c\,\frac{R_2}{R_3}, \tag{3} $$
with
$$ \begin{bmatrix} R_1 \\ R_2 \\ R_3 \end{bmatrix} = R \begin{bmatrix} X - X_0 \\ Y - Y_0 \\ Z - Z_0 \end{bmatrix}, \tag{4} $$
where

  • x, y: image coordinates of a point,
  • x0, y0, c: interior orientation (IO) parameters,
  • X, Y, Z: object coordinates of a point,
  • X0, Y0, Z0: object coordinates of the perspective centre,
  • R: orthogonal rotation matrix built up from the three rotation angles of the camera,
  • Δx, Δy: correction terms of the additional parameter set.

The image coordinate correction terms Δx and Δy, which are functions of a set of additional parameters (AP), account for the departures from collinearity due to lens and focal plane distortions. The present study used a standard 10-term 'physical' calibration model comprising the interior orientation elements (x0, y0, c), lens distortion coefficients (k1, k2, k3, p1 and p2) and terms for differential scaling and non-orthogonality of the image coordinate axes (b1, b2), as described by Fraser [3]; a small code sketch of these correction terms follows the parameter list below.

$$ \begin{aligned} \Delta x &= \frac{x_0 - x}{c}\,\Delta c + \bar{x} r^2 k_1 + \bar{x} r^4 k_2 + \bar{x} r^6 k_3 + (r^2 + 2\bar{x}^2) p_1 + 2 p_2 \bar{x} \bar{y} + b_1 \bar{x} + b_2 \bar{y}, \\ \Delta y &= \frac{y_0 - y}{c}\,\Delta c + \bar{y} r^2 k_1 + \bar{y} r^4 k_2 + \bar{y} r^6 k_3 + 2 p_1 \bar{x} \bar{y} + (r^2 + 2\bar{y}^2) p_2, \end{aligned} \tag{5} $$
with
$$ r = \sqrt{\bar{x}^2 + \bar{y}^2}, \qquad \bar{x} = x - x_0, \qquad \bar{y} = y - y_0, \tag{6} $$
where,

  • k1, k2, k3: first three parameters of radial symmetric distortion,
  • p1, p2: first two parameters of decentering distortion,
  • b1: affinity,
  • b2: non-orthogonality.
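Taken together, Eqs. (5) and (6) evaluate as below. The struct layout and function name are choices made for this sketch rather than the software's interface.

```cpp
#include <cmath>

// Ten-parameter 'physical' calibration model after Fraser [3].
struct Calibration {
    double x0, y0, c;   // interior orientation
    double dc;          // correction to the principal distance
    double k1, k2, k3;  // radial symmetric distortion
    double p1, p2;      // decentering distortion
    double b1, b2;      // affinity and non-orthogonality
};

// Correction terms (dx, dy) of Eq. (5) for a measured image point (x, y).
void corrections(const Calibration& cal, double x, double y,
                 double& dx, double& dy) {
    const double xb = x - cal.x0, yb = y - cal.y0;   // reduced coordinates, Eq. (6)
    const double r2 = xb * xb + yb * yb;             // r^2
    const double rad = cal.k1 * r2 + cal.k2 * r2 * r2 + cal.k3 * r2 * r2 * r2;
    dx = -xb / cal.c * cal.dc + xb * rad
         + (r2 + 2.0 * xb * xb) * cal.p1 + 2.0 * cal.p2 * xb * yb
         + cal.b1 * xb + cal.b2 * yb;
    dy = -yb / cal.c * cal.dc + yb * rad
         + 2.0 * cal.p1 * xb * yb + (r2 + 2.0 * yb * yb) * cal.p2;
}
```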

The results of the calibration process were the exterior orientation parameters of the cameras, the interior orientation parameters of the cameras, parameters for the radial and de-centering distortion of the lenses and optical systems, and two additional parameters modeling differential scaling and non-orthogonality effects.

Least squares adjustment is not a robust estimation technique: erroneous observations can lead to completely wrong results and may even prevent convergence of the adjustment. For this reason, the image observations should be checked for possible blunders using an error test procedure on the estimated residuals. The software includes methods for blunder detection, employing an initial bundle adjustment for the detection and correction of gross errors that may arise both in the determination of the target color labels and in the target centroiding process. Figure 4 shows the result dialog box generated by the software after the self-calibration. A summary of the project preferences and the self-calibration bundle adjustment appears in the dialog box. The software also generates a standard bundle adjustment output file, which includes the adjusted camera and exterior orientation parameters, the 3D coordinates of the marked points, the image coordinate residuals, and correlation matrix data.

Fig. 4 Result dialog box obtained after the self-calibration process.
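One common screening step, shown here purely as an illustration (the paper does not specify the software's actual test), flags observations whose residuals exceed a multiple of the RMS before the adjustment is repeated.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Generic residual-based blunder screening: flag observations whose
// residuals exceed k times the RMS of all residuals. An illustrative
// stand-in for the (unspecified) error test used by the software.
std::vector<std::size_t> flagBlunders(const std::vector<double>& residuals,
                                      double k = 3.0) {
    double sumSq = 0.0;
    for (double r : residuals) sumSq += r * r;
    const double rms = std::sqrt(sumSq / residuals.size());
    std::vector<std::size_t> flagged;
    for (std::size_t i = 0; i < residuals.size(); ++i)
        if (std::fabs(residuals[i]) > k * rms)
            flagged.push_back(i);
    return flagged;
}
```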

4. Experimental results

In order to test the proposed method, at least three fully automatic self-calibration applications were performed for each of two DSLR and four compact digital cameras. The technical details of the cameras are summarized in Table 1.


Table 1. Technical Specifications of Cameras

The fully automatic self-calibrating bundle adjustment was performed using images of a 60 × 40 cm test field with 62 circular targets in an irregular grid. Fifty-six of the circular targets (1.5 mm diameter) were white on a black background (Fig. 5). The remaining six targets (of the same size), used to determine the approximate exterior orientation parameters, were colored dots on a black background, placed around the middle section of the test field. The marked points were made of paper. The three-dimensional object coordinates of all the targets had previously been determined photogrammetrically with a mean standard deviation of 20 µm.

In each calibration application, at least 16 images were recorded from a distance of approximately 1.5 m. Eight of these images were taken with ±90° roll angle to minimize parameter correlation. The image acquisition geometry for the first calibration application of the Canon EOS 500D camera is shown in Fig. 6. When the images were taken, the focal lengths of all cameras except the Sony F828 were fixed at maximum zoom; for the Sony F828, an intermediate focal length was chosen. The images were taken in a closed environment using the cameras' own flashes for lighting.

Fig. 6 Image acquisition geometry.

In the calibration applications carried out with the developed software, the maximum hue difference, minimum saturation, and minimum brightness values for the software template parameters were chosen as 5, 150 and 100, respectively; the minimum correlation coefficient was chosen as 0.80. Using these parameters, the software must detect at least four color targets with known 3D object coordinates in each image to perform the fully automatic calibration. The signalized points covered an area of approximately 6 × 6 to 15 × 15 pixels on the images. Summaries of the first calibration parameters and the self-calibration bundle adjustment for each of the cameras are given in Tables 2 and 3. Similar tables were obtained in the other applications performed during the study.


Table 2. Automatically Calculated Calibration Parameters for Six Camera Types


Table 3. Self-calibration Bundle Adjustment Results of Six Cameras

In Tables 2 and 3, the following notation is used:

  • Obs: number of observations,
  • Reject: number of rejected points,
  • σ0: standard deviation of unit weight a posteriori,
  • σXYZ: theoretical precision in object space.

In these applications, using the chosen parameters, the software detected at most three of the color targets per image (yellow, cyan and red) with a single color label. The remaining color targets (three to six per image) were assigned two different color labels according to the hue value of the target centroid pixel. The detection of at least four of the colored targets, with either single or two color labels, is sufficient to implement the method.

5. Conclusions

This article has presented an alternative method for the fully automatic calibration of color digital cameras by self-calibration bundle adjustment. The method is based on the determination of approximate exterior orientation parameters using color targets. The HSV color space was used together with a space resection computation to correctly determine the color labels of the targets. The remaining target points are measured at the approximate image locations determined from the approximate exterior orientation parameters and the 3D object point coordinates. The calibration parameters are then determined using the self-calibration bundle adjustment technique. All of these operations are fully automated within the software developed in this study.

In order to test the proposed method, at least three self-calibration applications were performed for each of six digital cameras. In all the applications, the proposed method successfully achieved full automation. Following calibration, the accuracies in object and image space were estimated as 0.006 to 0.030 mm and 0.14 to 0.51 µm, respectively. Each calibration application was completed within 1.3 to 2.57 minutes.

References and Links

1. F. Remondino and C. Fraser, “Digital camera calibration methods: considerations and comparisons,” Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 36(5), 266–272 (2006).

2. C. Fraser, M. R. Shortis, and G. Ganci, “Multi-sensor system self-calibration,” in Videometrics IV (SPIE, 1995), pp. 2–18.

3. C. Fraser, “Digital camera self-calibration,” ISPRS J. Photogramm. Remote Sens. 52(4), 149–159 (1997). [CrossRef]  

4. S. Cronk, C. Fraser, and H. Hanley, “Automatic metric calibration of colour digital cameras,” Photogramm. Rec. 21(116), 355–372 (2006). [CrossRef]  

5. M. R. Shortis, T. A. Clarke, and T. Short, “Comparison of some techniques for the subpixel location of discrete target images,” Proc. SPIE 2350, 25 (1994).

6. A. Koschan and M. Abidi, Digital Color Image Processing, 1st ed. (John Wiley & Sons, Inc., 2008).

7. K. Kraus, Photogrammetry, Vol. 1 (Dümmler, Bonn, 1997).

8. J. O. Otepka, H. B. Hanley, and C. Fraser, “Algorithm developments for automated offline vision metrology,” in Proceedings of the ISPRS Commission V Symposium (Corfu, Greece, 1–2 September 2002), pp. 60–67.

9. M. R. Shortis, T. A. Clarke, and S. Robson, “Practical testing of the precision and accuracy of target image centering algorithms,” in Videometrics IV, Proc. SPIE 2598, 65–76 (1995). [CrossRef]

10. R. A. H. Munjy and M. Hussain, “Closed-form space resection using photo scale variation,” in Proceedings of the XVIII ISPRS Congress (Vienna, Austria, 9–19 June 1996).
