## Abstract

Lens distortion parameters vary with the distance between the object point and the image plane. We propose an analytical model of depth-dependent distortion for large depth-of-field digital cameras used in high-accuracy photogrammetry. Compared with the magnification-dependent model, the proposed one needs no focusing operation during calibration, thus eliminating focusing errors and guaranteeing the stability of the camera interior parameters. Compared with the widely used constant-parameter distortion model, the proposed model reduces the maximum distortion variation at a 20 mm radial distance from 8.0 μm to 0.9 μm when the depth changes from 2.46 m to 4.51 m for the 35 mm lens, and from 23.0 μm to 3.6 μm when the depth changes from 2.07 m to 4.17 m for the 50 mm lens. Additionally, when applied to photogrammetric bundle adjustment, the proposed model reduces the length-measurement standard deviation from 0.055 mm to 0.028 mm in a measurement volume of 7.0 m × 3.5 m × 2.5 m compared with the constant-parameter model.

© 2017 Optical Society of America

## 1. Introduction

Photogrammetry and vision techniques allow the measurement of spatial geometry through the reconstruction of 3D target points from convergent images. The principle is to determine the intersection of recovered spatial ray bundles [1]. The generally adopted method is bundle adjustment, which is essentially a nonlinear least-squares optimization [1]. For this method, a precisely described linear projective model and nonlinear distortion model are the most critical prerequisites for high-accuracy 3D measurement. The most generally accepted and referenced projective model uses the collinearity equations [2, 3]:

In Eq. (1), (*x*_{p}, *y*_{p}) are the coordinate offsets of the principal point, *c* is the principal distance, and [*X*_{0}, *Y*_{0}, *Z*_{0}] and *R* denote the translation vector and rotation matrix of the camera with respect to a specific reference frame. (*X*, *Y*, *Z*) are the coordinates of a 3D point and (*x*, *y*) are its projected coordinates in the image. (△*x*, △*y*) are the distortions, i.e., the deviations of the image point from the ideal pinhole projection, and are expressed by the following polynomials:

*K*_{1}, *K*_{2}, and *K*_{3} are the radial distortion parameters, *P*_{1} and *P*_{2} are the decentering distortion parameters, and *b*_{1} and *b*_{2} are the affine distortion parameters in the image plane.
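Since the bodies of Eqs. (1) and (2) appear as images in the source, the following Python sketch restates the standard collinearity projection and Brown-style distortion polynomial that the symbol definitions above describe. Sign and term conventions vary between texts, so treat this as an illustrative reconstruction rather than the paper's exact equations; the function names are this sketch's own.

```python
import numpy as np

def project(X, R, X0, c, xp=0.0, yp=0.0):
    """Ideal pinhole projection by the collinearity equations
    (one common sign convention; distortion terms omitted here)."""
    d = R @ (np.asarray(X, float) - np.asarray(X0, float))  # camera-frame point
    x = xp - c * d[0] / d[2]
    y = yp - c * d[1] / d[2]
    return x, y

def distortion(x, y, K1, K2, K3, P1, P2, b1, b2):
    """Brown-style distortion polynomial for (x, y) measured relative to the
    principal point: radial, decentering, and in-plane affine terms."""
    r2 = x * x + y * y
    radial = K1 * r2 + K2 * r2**2 + K3 * r2**3
    dx = x * radial + P1 * (r2 + 2 * x * x) + 2 * P2 * x * y + b1 * x + b2 * y
    dy = y * radial + 2 * P1 * x * y + P2 * (r2 + 2 * y * y)
    return dx, dy
```

A point on the optical axis projects to the principal point, and with all parameters zero the distortion vanishes, as expected from the model.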

Distortions are systematic errors of a practical imaging system. Therefore, complete and precise modelling and calibration of the distortion is indispensable for imaging-based high-accuracy measurement techniques. There are various types of calibration equipment and methods. For infinite-distance applications such as star trackers, for which object distances are on the order of light years and incident rays are parallel, theodolites and collimators are employed [4, 5]. For surface scanners that operate at a distance of about 0.5 m and measure surface undulations of several centimeters at most, a planar checkerboard pattern method is widely used [6]. For these two applications, the variation in object depth is negligible compared with the measurement depth, and the constant distortion model in Eq. (2) gives satisfactory results. However, for large-depth 3D photogrammetry or vision measurement, the variation in distortion parameters is non-negligible [7], especially when using large-format imaging sensors, and the constant model does not provide the expected accuracy, such as 1/100000 [8]. The Materials and methods section gives such an example.

Lens distortion is dependent on magnification [9]. Ignoring the influence of depth on distortion causes systematic measurement errors. Previous studies propose magnification-dependent radial and decentering distortion models and calibration methods [10–13]. Magill derived the explicit relationship between radial distortion and magnification [9]:

where *r* is the radial distance in the image; *δr*_{s} is the radial distortion when the camera is focused on an object plane at depth *s*; *δr*_{+∞} and *δr*_{−∞} are the distortion functions for infinite and inverted infinite focus, respectively; and *m*_{s} is the magnification. The model is inapplicable to cameras because it is impossible to reverse the lens to calibrate the distortion function *δr*_{−∞}.

Brown makes modifications to Eq. (3) and obtains a formulation that is more applicable to model camera radial distortion [10]:

where *δr*_{s1} and *δr*_{s2} are the calibrated distortion functions at focused depths *s*_{1} and *s*_{2}.
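For reference, Eq. (4) is commonly quoted in the literature in the following form, with *f* the focal length; this is a reconstruction from secondary sources [10, 14], and the notation may differ from the original typesetting:

$$\delta r_{s} = \alpha_{s}\,\delta r_{s_1} + (1-\alpha_{s})\,\delta r_{s_2},\qquad \alpha_{s}=\frac{s_2-s}{s_2-s_1}\cdot\frac{s_1-f}{s-f}$$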

Additionally, Brown derives the distortion function on a defocused image plane through geometry analysis and obtains:

where *δr*_{s,s′} denotes the radial distortion of an image focused at depth *s* when imaging an object plane at depth *s*′; and *c*_{s} and *c*_{s′} are the principal distances of the camera when the image plane is focused at *s* and *s*′, respectively, according to the thin-lens law.

Equations (4) and (5) show that, once the radial distortion functions focused at the two depths *s*_{1} and *s*_{2} are calibrated, the distortion of an object point at an arbitrary depth *s*′ can be computed when the actually focused depth *s* of the camera is known.

Fryer derives the decentering distortion formulations [11]:

where *δd*_{x_s} and *δd*_{y_s} are the decentering distortions along the horizontal and vertical directions in the image; *c* and *c*_{s} are the principal distances when the image is focused at infinity and at depth *s*, respectively; and *P*_{1} and *P*_{2} are the decentering distortion parameters at infinite focus. Fryer validated the model with plumb-line calibration data, but the models for the defocused *δd*_{x_s,s′} and *δd*_{y_s,s′} are not given.

The plumb-line method, which uses coplanar plumb lines to calibrate cameras, is an effective approach for calibrating radial and decentering distortion parameters [10, 11]. The method optimizes the camera intrinsic parameters by minimizing the straightness errors of the plumb lines in the image. However, the traditional method uses nylon thread and requires fine illumination and long exposure times [13]. Brown's and Fryer's models require the camera to be focused on the plumb-line plane and consequently introduce a systematic focusing error of at least half a pixel in the image for digital cameras. Moreover, the models require the camera to be focused at different depths during the calibration procedure, but the focusing rings of modern measurement camera lenses are locked to guarantee the stability of the intrinsic parameters, and thus re-focusing is impossible.

Alvarez combines Brown's [10] and Fraser's [15] models to derive a new radial distortion computation model for planar scenes such as a soccer field [14]:

where *s*_{min} and *s*_{max} are the nearest and farthest object depths in the scene. The model is capable of calibrating a camera from one image taken in any pose because the depths can easily be determined with the help of a homography matrix. However, the decentering and in-plane affine distortion parameters are not calibrated.

This paper proposes a depth-dependent distortion model that analytically describes the relationship between distortion parameters and spatial point depths for fixed-focus, large depth-of-field cameras. Compared with Magill's, Brown's, and Fryer's models, the proposed model requires no focusing or other lens adjustment during calibration. Additionally, the actual focusing distance of the camera does not need to be measured. Consequently, calibration inaccuracy caused by camera focusing errors is eliminated and the stability of the camera intrinsic parameters is maintained. An automated system incorporating coplanar stretched retro-reflective filaments (RRFs), an orientation adjustment mechanism, and data processing software is built to calibrate the distortion parameters at any depth. Experiments are conducted to validate the effectiveness of the calibration system and the correctness of the model. The distortion variation with depth is accurately computed by the model. Additionally, 3D length measurement precision is improved by more than 40% over the constant distortion model.

## 2. Materials and methods

This section describes in detail the accuracy problem encountered in large-depth-scene photogrammetric measurement using the constant distortion model. Figure 1 shows a scenario built for the orientation method test and accuracy evaluation of a two-camera 3D measurement system. The system is intended to dynamically measure the surface deformation of a 6.4 m diameter parabolic antenna during rotary scanning by tracking and measuring retro-reflective targets (RRTs) on the surface.

In the test and evaluation scenario, the RRTs are divided into two groups in 3D space, a foreground group and a background group, to form a highly constrained test field. They are measured to provide control and ground-truth points. The foreground group contains twelve RRTs, of which five lie on a cross-shaped exterior orientation bar. The background group contains twenty-five RRTs and ten coded RRTs that lie on a 7 m × 3.5 m wall. The distance between the foreground and the background is 2.5 m. A commercial photogrammetry system along with a software package is employed to measure the RRTs. Convergent pictures are taken at different locations in an area, illustrated in Fig. 1(b), that is 6.0 m away from the wall. Typically, photogrammetric cameras are configured with a large F-number to extend the depth of field for large-volume measurement. With a synchronized flash used when taking pictures, the RRTs are imaged and detected as light spots owing to their much higher reflectivity compared with the background. The locations of the RRTs in each image are determined by measuring the intensity centroids of the spots. All of the RRTs and coded RRTs were measured through bundle adjustment in the software package.

While evaluating the measurement accuracy of the stereo camera system, large systematic errors that vary with depth were detected. Consequently, we designed and conducted an experiment to test the consistency of the photogrammetric measurement along the depth direction. A carbon fiber bar is placed at different depths and positions in the measurement volume and measured by the photogrammetry system. The standard deviation of the measured bar lengths is used to quantify the consistency. The length of the bar is defined by the distance between two coded RRTs rigidly fixed on the bar and is measured on a granite linear rail by a laser interferometer (XD1LS, API, Rockville, Maryland, USA) for distance measurement and a micro-imaging camera (Manta G-504B, AVT, Stadtroda, Germany) for target locating. The measured length is 688.006 mm and the standard uncertainty is less than 1.0 μm. In each bar-position measurement, pictures are taken at the same locations and orientations. Two invar alloy scale bars, *S*_{1} and *S*_{2}, are placed close together in front of the foreground object to provide a common scale among the measurement trials. Nine measurement trials were performed in which the carbon fiber bar was placed at positions BP1 to BP9. Figure 1(b) depicts the two yellow scale bars and the nine carbon fiber bar positions. Figure 1(a) also depicts the carbon fiber bar at position BP7.

The photogrammetry system is claimed to have a theoretical relative coordinate accuracy of 1/100000. Considering that length measurement accuracy is two to three times lower than coordinate accuracy [8], the theoretical maximum length variation (6*σ*) is about 0.124 mm and the standard deviation is about 0.021 mm for the 688 mm bar. However, the result is inconsistent with this theoretical analysis. Results of the proposed model and the constant distortion model are compared and analyzed in the Experiments and results section.
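As a quick check of the quoted figures (a trivial arithmetic sketch, assuming the stated 6*σ* bound):

```python
# Theoretical length-measurement spread quoted in the text:
# the maximum variation is taken as 6*sigma = 0.124 mm for the 688 mm bar.
six_sigma = 0.124          # mm, theoretical maximum length variation
sigma = six_sigma / 6.0    # implied standard deviation, mm
print(round(sigma, 3))     # ~0.021 mm, matching the quoted value
```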

## 3. Depth-dependent distortion model

All the following derivations are conducted under the assumptions that the camera is focused at an unknown depth *s* and the focus is fixed, that the radial distortion parameters **k**_{s,s1} and **k**_{s,s2} at two different depths *s*_{1} and *s*_{2} are calibrated by the method in Section 4, and that the distortion parameters **k**_{s,s′} at an arbitrary object depth *s*′ are to be computed.

#### 3.1 Radial distortion

For the calibration depth *s*_{1}, a similar equation to Eq. (5) can be obtained:

Expanding *δr*_{s1}, we obtain:

Replacing *r*_{s1} with *c*_{s1}/*c*_{s} × *r*_{s}, we obtain:

_{s}According to the calibration results **k**_{s,s}_{1}, *δr _{s,s}*

_{1}can also be formulated as:

Comparing Eq. (8) with Eq. (9), the explicit relationship between focused and defocused radial distortion parameters is:

Given the actually focused depth *s*, the calibration depth *s*_{1}, and the calibration results **k**_{s,s1}, the parameters of the camera when focused at depth *s*_{1} can be computed.

Similar to Magill's and Brown's conclusions, this paper states that radial distortion parameters calibrated at two different depths are sufficient to compute the radial distortion parameters at any arbitrary depth. Supposing that the radial distortion parameters **k**_{s,s2} of the same camera are also calibrated at depth *s*_{2}, equations similar to (10) can be derived as:

On knowing the distortion parameters **k**_{s1} and **k**_{s2} at depths *s*_{1} and *s*_{2}, the radial distortion parameters of the same camera when focused at depth *s*′ can be derived using Eq. (4):

Defocusing the image plane to the actually focused depth *s* by Eq. (5) and considering *r*_{s′} = *c*_{s′}/*c*_{s} × *r*_{s,s′}, we have:

Substituting Eqs. (10) and (11) into Eq. (13) and then substituting Eq. (13) into Eq. (14), we have:

Equation (15) shows that, for a fixed-focus camera, if the radial distortion parameters **k**_{s,s1} and **k**_{s,s2} at two known depths are calibrated, the radial parameters **k**_{s,s′} at an arbitrary depth can be computed. Moreover, the depth *s* at which the camera is actually focused does not need to be measured, because *c*_{s1}, *c*_{s2}, *c*_{s′}, and *α*_{s′} are independent of *s*.
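The two-step structure that Eqs. (12)–(15) formalize, interpolating between the two calibrated depths and then rescaling for the fixed image plane, can be sketched as follows. This is an illustrative Python sketch only: the interpolation weight is assumed to take Brown's form with *f* the focal length, and the per-coefficient scaling exponents are an assumption of the sketch, not the paper's exact Eq. (15).

```python
def thin_lens_c(f, s):
    """Principal distance for focus at object depth s, by the thin-lens law."""
    return f * s / (s - f)

def radial_params_at_depth(k1, k2, s1, s2, s_prime, f):
    """Illustrative depth-dependent radial parameters at depth s_prime.

    k1, k2 : (K1, K2, K3) tuples calibrated for object planes at depths s1, s2.
    The weight alpha follows Brown's two-depth form; the per-coefficient
    principal-distance scaling (exponent 2i, from substituting r -> (c'/c) r
    in the odd-order radial polynomial) is this sketch's assumption.
    """
    alpha = (s2 - s_prime) / (s2 - s1) * (s1 - f) / (s_prime - f)
    c1, c2, cp = thin_lens_c(f, s1), thin_lens_c(f, s2), thin_lens_c(f, s_prime)
    out = []
    for i, (a, b) in enumerate(zip(k1, k2), start=1):
        # Rescale each calibrated coefficient to the s_prime object plane,
        # then blend with Brown's weight.
        a_p = a * (c1 / cp) ** (2 * i)
        b_p = b * (c2 / cp) ** (2 * i)
        out.append(alpha * a_p + (1 - alpha) * b_p)
    return tuple(out)
```

Consistent with the text, the sketch returns the calibrated parameters unchanged when *s*′ coincides with *s*_{1} or *s*_{2}.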

#### 3.2 Decentering distortion

For a focused depth *s*′, the decentering distortion is similar to Eq. (6):

Defocusing Eq. (16) onto the actual image plane focused at depth *s*, we have:

Substituting *r*_{s′} in Eq. (17) with *c*_{s′}/*c*_{s} × *r*_{s}, we have:

Comparing Eq. (18) with Eq. (6), we have:

It can be seen that, for a fixed-focus camera, variation in the object depth *s*′ does not influence the decentering parameters. Consequently, the calibration results *P*_{1s,s1} and *P*_{2s,s1} at an arbitrary depth *s*_{1} are equal to the decentering parameters at any other depth. Moreover, the depth *s* at which the camera is actually focused does not need to be measured.

Equations (15) and (19) provide a universal depth-dependent distortion calculation model. No matter where a point is located, once the distance between the point and the image plane is determined, the distortion parameters for that point on that image can be calculated.

## 4. Camera calibration methods

The above models require that the radial distortion parameters **k**_{s,s1} and **k**_{s,s2} and the decentering distortion parameters **P**_{s,s1} be calibrated for object planes at two different depths *s*_{1} and *s*_{2}. The calibration system and process are designed to achieve high accuracy and automation.

Figure 2(a) exhibits the coplanar calibration pattern, which is composed of a rigid frame, thirty-two line patterns, and several 6 mm diameter coded RRTs (Geodetic System, Melbourne, Florida, USA). Each line pattern is formed by a 2 mm wide RRF (Chinastars, Hangzhou, China) stretched between the top and bottom frame borders. The stretched RRF has the following advantages over the plumbed nylon-thread method [10–12]:

- 1 Stretched lines are steadier than plumbed nylon threads;
- 2 Shorter exposure times owing to much higher reflectivity, as depicted in Fig. 2(b);
- 3 The filament width can be varied according to the calibration depth to match the RRT diameter used in application, so that image measurement errors in calibration and measurement are equalized.

The coded RRTs placed on the vertical bars of the frame are measured by the camera to be calibrated through self-calibration bundle adjustment, which simultaneously produces initial intrinsic parameters. Based on the target 3D coordinates, 2D image coordinates, and initial camera parameters, the image extrinsic parameters with respect to the line-pattern plane, including three angles and three translations, can be determined for any picture by space resection [1]. An object plane is fitted to the measured targets and the reference system is moved onto this plane as in Fig. 3. Consequently, a camera orientation perpendicular to the plane has the extrinsic angles *φ* = 90°, *ω* = −90°, and *κ* = 0°.
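The plane fitting and perpendicularity check described above can be sketched as follows (an illustrative fragment; the function names are not from the original system, which determines all six extrinsic parameters by space resection):

```python
import numpy as np

def fit_plane_normal(points):
    """Least-squares plane normal for measured target coordinates (N x 3)."""
    P = np.asarray(points, float)
    centered = P - P.mean(axis=0)
    # The right singular vector with the smallest singular value is the normal.
    _, _, vt = np.linalg.svd(centered)
    return vt[-1]

def axis_deviation_deg(optical_axis, normal):
    """Angle (degrees) between the camera optical axis and the plane normal."""
    a = np.asarray(optical_axis, float)
    n = np.asarray(normal, float)
    cos = abs(a @ n) / (np.linalg.norm(a) * np.linalg.norm(n))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```

In the real system, the measured deviation is fed back to the platform controller until it falls below 0.005°.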

The camera is mounted on a customized automated 6-DOF micro movement platform (Suruga Seiki, Shizuoka, Japan) that is employed to adjust the optical axis as shown in Fig. 4. Angle deviations away from the predefined values mentioned above are fed back to the controller (DA200, Suruga Seiki, Shizuoka, Japan) for automated adjustment until all the angle deviations are less than 0.005°.

At each specified depth, the camera is rotated around the optical axis by 90° to generate orthogonal line patterns in order to decouple correlations between parameters. The orthogonal image pair is used to calibrate the camera at that depth. Horizontally stretched filaments are not used together with vertical ones because their straightness is easily affected by gravity, which would introduce systematic calibration errors. The intensity centroid of each filament in every pixel row or column is measured as an observation for distortion parameter adjustment, as in [10]. Generally, over 100000 centroid observations are extracted in each image to guarantee parameter reliability. Figure 5 shows the images and extracted observations from one calibration. The flow chart in Fig. 6 demonstrates the calibration process.
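The per-row centroid extraction can be illustrated with a minimal sketch (the function name and the simplistic zero-background assumption are this sketch's, not the paper's):

```python
import numpy as np

def row_centroid(row):
    """Intensity-weighted centroid (column position) of one pixel row.

    Each image row crossing a vertical filament yields one sub-pixel
    centroid observation for the distortion adjustment.
    """
    row = np.asarray(row, float)
    total = row.sum()
    if total == 0:
        return None  # no filament signal in this row
    cols = np.arange(row.size)
    return float((cols * row).sum() / total)
```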

## 5. Experiments and results

Experiments were conducted to calibrate lenses and verify the proposed models. An industrial camera body (GE4900, AVT, Stadtroda, Germany) equipped with a full-frame CCD sensor was used. The resolution is 4872 × 3248 pixels and the sensor dimension is 36 mm × 24 mm. Two consumer-level lenses, the Nikkor 50 mm F/1.4G and the Nikkor 35 mm F/2D (Nikon, Japan), were investigated. An LED ring flash (YN-14EX, Yongnuo, Shenzhen, China) provides illumination.

Figure 7 shows six depth calibration images of the 35 mm lens, exhibiting a large depth of field. Figures 8 and 9 show the radial and decentering distortion profiles of the two lenses calibrated at each depth. Tables 1 and 2 give the corresponding distortion values at specified radial distances. The two lenses show similar profiles, and the radial distortion decreases with depth. For the 50 mm lens, the radial distortion decreases by 0.4 μm, 3 μm, 10 μm, and 23 μm at radial distances of 5 mm, 10 mm, 15 mm, and 20 mm, respectively, when the depth increases from 2.07 m to 4.17 m. For the 35 mm lens, the corresponding values are 0.5 μm, 1.3 μm, 3.5 μm, and 8.0 μm when the depth increases from 2.46 m to 4.51 m. These decreases also quantify the systematic error of the constant distortion model. Considering the 1/30 pixel image measurement accuracy, the variation is large enough to weaken the reliability of a high-accuracy photogrammetry system. By comparison, there is no obvious variation in the decentering distortion. This result is similar to the one obtained in [10] and supports the conclusion about decentering distortion in Section 3.

For the calibrations, the standard deviation of the observation centroids is 0.27 μm, which is below 1/25 pixel. The accuracy of the parameters can be estimated by the adjustment RMSE. Table 3 shows one of the calibration results. The RMSEs are two orders of magnitude smaller than the measured results, indicating high reliability.

In order to validate the depth-dependent radial distortion model, calibrated parameters at two depths are employed to compute parameters at other depths using Eq. (15), and the computed distortions are compared with the calibrated distortions in Tables 4 and 5. The distortion variations between different depths in the "Cal" column essentially indicate the systematic errors of the constant distortion model. After applying the depth-dependent model, the error is reduced from 23.0 μm to 3.6 μm for the 50 mm lens, and from 8.0 μm to 1.0 μm for the 35 mm lens, at a radial distance of 20 mm. It follows from Eq. (15) that **k**_{s,s′} equals **k**_{s,s1} when *s*′ equals *s*_{1}, and **k**_{s,s′} equals **k**_{s,s2} when *s*′ equals *s*_{2}. Consequently, in Tables 4 and 5, the deviation between the calibrated and computed distortion is zero at the depths employed for distortion computation. The depths used for model validation are restricted by the dimensions of the calibration frame and the minimum number of image lines needed for accurate camera calibration. If the calibration frame dimensions and line-pattern density increase, the model can be validated at nearer and farther distances. Although the camera is calibrated with a planar object, the model and calibration results apply to any point position in the scene.

A one-side self-calibration bundle adjustment algorithm is developed to adopt the proposed models in photogrammetric measurement. The principle is that, given the calibration results at two depths, the distortion parameters of an image point are determined solely by the distance between the corresponding spatial point and the specific image. That is, if the depth changes, whether through a direct object distance variation or through camera tilting, the image point's distortion parameters change accordingly and can be calculated by Eqs. (15) and (19). As a result, each point in any image has its own distortion parameters, determined by the point's position in space and by the position and orientation of the camera when the image is taken.
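The per-point depth that selects the distortion parameters can be obtained from the exterior orientation; a minimal sketch, assuming *R* is the world-to-camera rotation and the function name is illustrative:

```python
import numpy as np

def point_depth(X, R, X0):
    """Depth of a 3D point along the optical axis of a posed camera.

    X  : world coordinates of the point
    R  : 3x3 world-to-camera rotation matrix
    X0 : camera position in world coordinates
    The third camera-frame component is the depth at which the
    depth-dependent distortion parameters are evaluated for this
    point on this image.
    """
    return float((R @ (np.asarray(X, float) - np.asarray(X0, float)))[2])
```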

The camera used in Section 2 was re-calibrated using the method introduced in Section 4. The constant-model bundle software and the depth-dependent-model bundle software were employed to measure the same scene in Fig. 1 using the same photos. Length measurement results for the carbon fiber bar in nine trials are listed in Table 6 and exhibited in Fig. 10. The maximum length variation is reduced from 0.181 mm to 0.083 mm and the standard deviation from 0.055 mm to 0.028 mm. It is worth noting that there is no obvious difference in residuals or estimated 3D coordinate RMSEs between the two models, yet the consistency of spatial length measurement is improved by 49%. Moreover, the measurement results of the depth-dependent model agree better with the theoretical accuracy predictions introduced in Section 2.

Another verification experiment was carried out in a volume of 4 m × 3 m × 2 m. The constant distortion model yields a maximum length measurement variation of 0.046 mm and a standard deviation of 0.020 mm, while the proposed model yields 0.033 mm and 0.012 mm. The consistency of spatial length measurement is improved by 40%.

From the above experiments and results, the following conclusions are drawn. For large-format imaging sensors, object depth causes non-negligible variations in the distortion parameters. Without accounting for this dependence, the uncertainty of spatial measurement results increases, yet this is not reflected in image residual errors or estimated RMSEs. Length measurement consistency is improved significantly when the proposed distortion model is applied in bundle adjustment.

## 6. Summary

For consumer-level lenses used in photogrammetric measurement, the object distance from the camera image plane causes significant variation in the lens distortion parameters, especially for cameras with large-format CCD or CMOS sensors. Thus, modelling and calibrating the depth-dependent distortion parameters is important for improving measurement accuracy. This paper proposes a depth-dependent radial and decentering distortion model suitable for the category of large depth-of-field cameras whose focus is fixed during both calibration and measurement. Compared with previous magnification-dependent models, this model requires neither accurate lens focusing nor focused-depth measurement, and it enables automation of the calibration process. Calibration and measurement results validate the correctness of the model and the effectiveness of the calibration method. Systematic distortion errors are significantly reduced by the proposed model compared with the constant-parameter model. Additionally, spatial accuracy is improved by more than 40% for 3D, large-depth-scene measurements. As more large-format imaging sensors are applied to 3D noncontact measurement, this model offers the potential to improve measurement accuracy significantly in large-depth scenes.

Although the accuracy improvement is confirmed when applying this model to a multi-image photogrammetric network, the effectiveness of the model for multi-camera 3D videogrammetry systems is under investigation. Lacking the capacity for self-calibration, such systems require the cameras to be accurately calibrated before measurement. Future work also includes extending the model to other vision-related research fields that involve non-negligible viewing depth. Bionic vision systems adjust their focus in real time according to object depth; the model allows timely calibration of such systems using the line features that exist extensively in structured environments. Compared with the conventional Structure from Motion (SfM) method, the model provides higher accuracy owing to its analytical description and least-squares adjustment. Additionally, for structured-light or laser scanners, the model can potentially improve calibration accuracy, not only because the calibration results accommodate depth variation but also because planar line patterns are easier to manufacture accurately than circular dot patterns. However, to realize these functions, the model needs improvement to enable calibration using just one image of spatial lines.

## Funding

China National Natural Science Fund (51175047, 51475046); Scientific Research Project of Beijing Educational Committee (KM201511232020).

## References and links

**1. **T. Luhmann, S. Robson, and S. Kyle, *Close-range photogrammetry and 3D imaging* (Walter de Gruyter, 2014).

**2. **T. Luhmann, C. S. Fraser, and H. G. Maas, “Sensor modelling and camera calibration for close-range photogrammetry,” ISPRS J. Photogramm. **115**, 37–46 (2016). [CrossRef]

**3. **C. S. Fraser, “Automatic camera calibration in close range photogrammetry,” Photogram. Eng. Remote Sens. **79**(4), 381–388 (2013). [CrossRef]

**4. **T. Sun, F. Xing, and Z. You, “Optical system error analysis and calibration method of high-accuracy star trackers,” Sensors (Basel) **13**(4), 4598–4623 (2013). [CrossRef] [PubMed]

**5. **F. Xing, Y. Dong, and Z. You, “Laboratory calibration of star tracker with brightness independent star identification strategy,” Opt. Eng. **45**(6), 063604 (2006). [CrossRef]

**6. **Z. Y. Zhang, “A flexible new technique for camera calibration,” IEEE. T. Pattern. Anal. **22**(11), 1330–1334 (2000).

**7. **F. Remondino and C. S. Fraser, “Digital camera calibration methods: considerations and comparisons,” Int. Arch. Photogram. **36**(5), 266–272 (2006).

**8. **T. Luhmann, “Close range photogrammetry for industrial applications,” ISPRS J. Photogramm. **65**(6), 558–569 (2010). [CrossRef]

**9. **A. A. Magill, “Variation in distortion with magnification,” J. Opt. Soc. Am. **45**(3), 148–149 (1955). [CrossRef]

**10. **D. C. Brown, “Close-range camera calibration,” Photogram. Eng. **37**, 855–866 (1971).

**11. **J. G. Fryer and D. C. Brown, “Lens distortion for close-range photogrammetry,” Photogram. Eng. Rem. S. **52**(1), 51–58 (1986).

**12. **J. G. Fryer and C. S. Fraser, “On the calibration of underwater cameras,” Photogram. Rec. **12**(67), 73–85 (1986). [CrossRef]

**13. **J. G. Fryer, T. A. Clarke, and J. Chen, “Lens distortion for simple C-mount lenses,” Int. Arch. Photogramm. Remote Sens. **30**, 97–101 (1994).

**14. **L. Alvarez, L. Gómez, and J. R. Sendra, “Accurate depth dependent lens distortion models: an application to planar view scenarios,” J. Math. Imaging Vis. **39**(1), 75–85 (2011). [CrossRef]

**15. **C. S. Fraser and M. R. Shortis, “Variation of distortion within the photographic field,” Photogram. Eng. Rem. S. **58**(6), 851–855 (1992).