
Calibration method for the electrically tunable lens based on shape-changing polymer

Open Access

Abstract

In this paper, a calibration method for a camera system with an electrically tunable lens (ETL) based on shape-changing polymer (SCP) is proposed to improve the accuracy, robustness and practicality of the system. The camera model of the ETL based on SCP is established from analyses of its optical properties. The calibration strategy, including the initial estimation of camera parameters and bundle adjustment, is presented. To eliminate the influence of temperature on the ETL in machine vision applications, a real-time temperature compensation method is proposed. The proposed method makes use of the existing calibration hardware without adding new components to the system. Both simulations and experiments are conducted to evaluate the effectiveness and accuracy of the proposed camera model and calibration method. The measurement error with the proposed calibration method is below 20 microns at high magnification, a fivefold improvement in measurement accuracy over the existing method at high magnification. With the proposed calibration method for the camera system with the ETL based on SCP, the calibration workload is reduced and accurate calibration at high magnification is achieved. It also benefits the development of autofocusing 3D measurement technology.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

The rapid development of the electrically tunable lens (ETL) in recent years has promoted autofocusing applications in microscopy imaging [1], augmented reality display [2], endoscopy [3] and three-dimensional (3D) shape measurement [4–6] due to its miniaturized size and stable performance without mechanical operations. Among these applications, autofocusing 3D measurement systems with ETL for machine vision [4–6] only came to fruition in the past two years, after the ETL became much more reliable [7]. With escalating labor costs and a growing requirement for automation, autofocusing 3D measurement with high accuracy and extended depth of field (DOF) is being pursued in advanced manufacturing, robotics and industrial fields.

The basic task in building an accurate autofocusing 3D measurement system is the calibration of the camera system with ETL. Existing autofocusing 3D measurement systems [4–6] conduct a monofocal calibration at each lens setting and then interpolate the results with a polynomial. This calibration strategy has two major drawbacks. One is the huge calibration workload. Normally ten images are required for each monofocal calibration [4–6]. To fully utilize the whole working range of the camera system with ETL, around ten monofocal calibrations should be conducted. Therefore, it takes several hundred images to calibrate the camera system with ETL. Another drawback is the large calibration error of current monofocal calibration methods at high magnification [8–15], which hinders autofocusing 3D measurement systems from achieving high accuracy [16]. Meanwhile, the extended DOF achieved by autofocusing is necessary at high magnification. To improve both the practicality and accuracy of autofocusing 3D measurement systems, a new calibration method for the camera system with ETL is developed.

There are two main principles for implementing an ETL [17]. The first varies the refractive index of the lens’s material, and the most popular technology of this kind is based on liquid crystals [18]. However, its high sensitivity to polarization and slow response time limit its applications in machine vision imaging systems. The other principle is to control the shape of the lens, comprising electrowetting [19] and shape-changing polymer [17]. The lens based on the electrowetting technology contains two liquids with the same density but different refractive indices. The shape of the interface is controlled by the applied voltage based on the electrowetting effect. The major limitations of this technology are its small aperture (around 3 mm) and high sensitivity to gravity and temperature [4]. The lens based on shape-changing polymer (SCP) contains an optical liquid sealed on at least one side by a polymer membrane. The internal pressure is varied by the actuator to control the shape of the membrane. Its aperture is much larger than that of the electrowetting lens (larger than 10 mm). Although it is still affected by gravity, the gravity effect is small and can be suppressed by a stiffer polymer membrane. The thermal expansion problem can also be compensated in a closed loop with temperature sensing [17]. Among all these technologies, the ETL based on SCP is a good choice for machine vision applications due to its large aperture, fast response time, little mechanical operation, compact design and low cost.

Therefore, a new calibration method for the camera system with ETL based on SCP is proposed in this paper to improve its accuracy, robustness and practicality. First, the paper analyzes the optical properties of the ETL group containing an ETL based on SCP and a prime lens, since this lens group is commonly used for machine vision applications [20]. Next, the camera model of the ETL group is proposed based on its optical properties. In addition, a real-time temperature compensation method with a simple setup is proposed to eliminate the influence of temperature on the ETL. Finally, a camera calibration method for the camera system with the ETL group is proposed, including initial estimation of camera parameters and bundle adjustment strategy. Both simulations and experiments are conducted to verify the effectiveness and accuracy of the proposed camera model and corresponding calibration method. The proposed calibration method significantly reduces the calibration workload, which only requires 16 images for calibration (this can be reduced to 10 if necessary). Meanwhile, accurate calibration can be conducted at any magnification, even high magnification.

Section 2 explains the optical properties of ETL group based on SCP. Section 3 presents the proposed camera model for the camera system with ETL group based on SCP. Section 4 introduces the proposed camera calibration method. Section 5 presents simulation and experimental results to verify the proposed calibration method. Last, Section 6 concludes the whole paper.

2. Mechanism and optical properties of the ETL group based on SCP

In this section, the basic mechanism of the ETL based on SCP is introduced. Due to the liquid-filled design of the ETL, the optical properties of a single ETL based on SCP affected by three factors (membrane’s deformation, gravity and temperature) are first investigated. For machine vision applications, the ETL group, including a single ETL and a prime lens, is commonly utilized to achieve a better imaging quality [20]. Therefore, the optical properties of the ETL group are discussed afterwards.

2.1 Mechanism of the ETL based on SCP

The schematic structure of the ETL based on SCP is described in Fig. 1(a). The ETL based on SCP works as follows: the actuator (5) varies the pressure applied to the liquid (2) mechanically, electromechanically, pneumatically or electrostatically. The varied pressure is transferred through the liquid (2) onto the polymer membrane (4) and changes the shape of the membrane. Therefore, the focal length of the ETL is controllable during the autofocus or zoom process.

Fig. 1. (a) Schematic structure of the ETL based on SCP. 1-cover glass; 2-liquid; 3-body structure; 4-membrane; 5-actuator; (b) camera system configuration with the ETL group.

The ETL is combined with a prime lens to form an ETL group for a better imaging quality in machine vision applications. There are two configurations of the ETL group for the common machine vision applications as shown in Fig. 1(b). The front-ETL group is designed for the case of a long working distance, and the back-ETL group is for the case of a short working distance.

The focal length and optical axis of a lens determine its characteristics in machine vision applications. These two optical properties of the ETL based on SCP are different from those of a prime lens, because the ETL based on SCP deforms its membrane to change the focal length during autofocus or zoom process. Meanwhile, the optical properties of the ETL are vulnerable to temperature and gravity due to its fluid-based design [17]. The membrane’s deformation, temperature and gravity may affect the optical properties of the ETL based on SCP. Therefore, the above optical properties affected by the membrane’s deformation, temperature and gravity are first analyzed before building the camera model of the camera system with ETL based on SCP.

2.2 Optical properties of a single ETL based on SCP

2.2.1 Focal length of a single ETL based on SCP

i). Effect of membrane’s deformation on focal length

The ETL changes its focal length by deforming its polymer membrane. The deformation of the membrane is controlled by the pressure applied by the internal actuator. There are various actuators in ETLs, such as servo motor [21], direct current motor [22–24], photo-polymer [25], electromagnetic [26,27], piezo [17], liquid pumping [28] and dielectric elastomer [29]. The applied pressure $p_a$ is determined by the input power $E_{in}$ as expressed in Eq. (1), since the pressure–power relationship of the above actuators is at most of second order. The input power of the ETL can be controlled through either its voltage or its current. The ETL [30] used in this paper is controlled by its input current.

$$p_a=k_{E2}\cdot E_{in}^2+k_{E1}\cdot E_{in}+k_{E0}$$
where, $p_a$ is the applied pressure produced by the actuator; $E_{in}$ is the input power of the actuator; $k_{E2},k_{E1}, k_{E0}$ are the coefficients of the quadratic polynomial that describes the relationship between the applied pressure and the input power of the actuator.

The influence of the applied pressure on the deformation of polymer membrane is investigated. Since the aperture of the ETL based on SCP is quite large compared to the thickness and peak-deflection of the membrane [31], the relationship between the peak deflection of membrane and applied pressure can be expressed as Eq. (2) with tiny pre-strain of the membrane [32,33].

$$\delta _{\max}=\frac{p_0+p_a}{E}\cdot \frac{\left( 1-v \right) \cdot a^2}{4\varepsilon _0h}$$
where, $\delta _{\max }$ is the peak deflection of the membrane, located at the center of the membrane; $p_0$ is the initial pressure in the ETL, which induces an initial deformation of the membrane; $a$ is the radius of the membrane (half of the aperture); $h$ is the thickness of the membrane; $E$ is the Young’s modulus of the membrane; $v$ is the Poisson’s ratio of the membrane; $\varepsilon _0$ is the pre-strain of the polymer membrane.

The profile of the membrane can be regarded as a part of a sphere [21], since the peak deflection $\delta _{\max }$ is quite small compared to the aperture in the ETL [32,33]. The curvature of the membrane $R_{mem}$ can be calculated using Eq. (3) based on geometrical relationship.

$$R_{mem}=\frac{\delta _{\max}}{2}+\frac{a^2}{2\delta _{\max}}$$
The focal length of the ETL regarding the membrane’s curvature can be calculated based on the Lensmaker’s equation [34]. Meanwhile, the ETL based on SCP normally contains one curved profile for stability and easy controllability [17,21,31,35]. Therefore, the focal length regarding the surface curvature of the membrane is expressed as Eq. (4).
$$f_{liq}=\frac{R_{mem}\cdot n_0}{n_{liq}-n_0}$$
where, $f_{liq}$ is the focal length of the ETL; $n_{liq}$ is the refractive index of the liquid in the lens; $n_0$ is the refractive index of the surroundings, which equals one in air.

In all, the relationship between the focal length and the input power of the ETL based on SCP can be derived as Eq. (5) from Eqs. (1)–(4).

$$\begin{array}{c} f_{liq}=\frac{n_0}{n_{liq}-n_0}\cdot \left[ \frac{p_0+k_{E2}\cdot E_{in}^{2}+k_{E1}\cdot E_{in}+k_{E0}}{E}\cdot \frac{\left( 1-v \right) \cdot a^2}{8\varepsilon _0h} \right.\\ \left. +\frac{E}{p_0+k_{E2}\cdot E_{in}^{2}+k_{E1}\cdot E_{in}+k_{E0}}\cdot \frac{2\varepsilon _0h}{\left( 1-v \right)} \right]\\ \end{array}$$
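For illustration, the sketch below evaluates Eqs. (1)–(5) numerically. It is a minimal sketch rather than part of the calibration pipeline; the material and actuator parameter values are those later used in the simulation of section 5.1.1, and the unit of the input power (mA) is an assumption.

```python
import numpy as np

# Minimal numerical sketch of Eqs. (1)-(5); parameter values follow the
# simulation setup of section 5.1.1 (assumed units: Pa, m, mA).
E_mod, a, h = 3e6, 5e-3, 100e-6          # Young's modulus, membrane radius, thickness
v, eps0 = 0.48, 0.05                     # Poisson's ratio, pre-strain
n_liq, n0 = 1.4, 1.0                     # refractive indices (liquid, air)
p0 = 1200.0                              # initial pressure in the ETL (Pa)
kE2, kE1, kE0 = -0.004, 7.8, 2.061e-13   # actuator coefficients of Eq. (1)

def focal_length_liq(E_in):
    """Focal length of a single ETL versus input power, Eq. (5)."""
    p_a = kE2 * E_in**2 + kE1 * E_in + kE0                            # Eq. (1)
    delta_max = (p0 + p_a) / E_mod * (1 - v) * a**2 / (4 * eps0 * h)  # Eq. (2)
    R_mem = delta_max / 2 + a**2 / (2 * delta_max)                    # Eq. (3)
    return R_mem * n0 / (n_liq - n0)                                  # Eq. (4)

# focal lengths (in metres) at three input powers
print(focal_length_liq(np.array([80.0, 150.0, 225.0])))
```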
ii). Effect of gravity on focal length

Gravity acting on the liquid inside the lens also tends to deform the polymer membrane and thereby change the focal length. This deformation varies with the orientation of the lens. Its influence on the focal length reaches the maximum when the optical axis is placed vertically. With the optical axis of the ETL placed vertically, the influence of the gravity-induced membrane deformation is equivalent to that of a uniform pressure applied to the membrane. The error of the focal length induced by gravity is expressed in Eq. (6).

$$\begin{array}{c} error_{gravity}=f_{liq}\left( p_a+\varDelta p_{gravity} \right) -f_{liq}\left( p_a \right) =\\ \frac{n_0}{n_{liq}-n_0}\cdot \left( \frac{\varDelta p_{gravity}}{E}\cdot \frac{\left( 1-v \right) \cdot a^2}{8\varepsilon _0h}-\frac{\varDelta p_{gravity}\cdot E\cdot 2\cdot \varepsilon _0\cdot h}{\left( p_0+p_a+\varDelta p_{gravity} \right) \cdot \left( p_0+p_a \right)} \right)\\ \end{array}$$
where, $\varDelta p_{gravity}$ is the pressure’s variation induced by gravity.

The applied pressure of the actuator $p_a$ reduces the gravity effect on focal length. Therefore, the maximum error of the focal length induced by gravity is calculated when $p_a=0$. It has been shown that $\frac {error_{gravity}}{f_{liq}}<0.17\%$ with the ETL’s parameters in many cases [21,23,29,32,33].

For measurement tasks with stationary machine vision setups, the gravity effect does not change during the whole measurement process, so it is integrated into the effect of the membrane’s deformation and calibrated together with it. For measurement tasks with moving machine vision setups, the gravity effect varies with the orientation of the ETL. The maximum variation is less than 0.17%, which is small compared to the effect of the membrane’s deformation. Therefore, the gravity effect on focal length will not be considered in this paper. If higher accuracy is pursued, the gravity effect may be calibrated with an additional gravity sensor by a method similar to the temperature compensation in section 4.1.

iii). Effect of temperature on focal length

The temperature influences the focal length of the ETL in two different ways due to the liquid inside. First, the refractive index of the liquid (oil, water or alcohol) linearly decreases with increasing temperature [36–38], which increases the focal length of the lens based on Eq. (5). Second, the volume of the liquid expands with the rise in temperature, which reduces the focal length of the lens. Meanwhile, the temperature effect depends on the focal length of the ETL [30] but remains constant during its lifetime [17]. The focal length of the ETL is a function of the input power $E_{in}$ as shown in Eq. (5). Due to the complexity of measuring the focal length directly, the compensation of the temperature effect is applied to the input power $E_{in}$. This compensation strategy can be realized without additional setups and makes the compensation of the temperature effect practical for an integrated machine vision system. The compensation input power induced by temperature $E_T$ is expressed as a function of the temperature $T$ and the uncompensated input power of the actuator $E_{in}$ in Eq. (7). The detailed calibration procedures for the temperature effect are presented in section 4.1.

$$\begin{array}{c} E_T=f\left( T,E_{in} \right) \end{array}$$
In conclusion, the focal length of the single ETL based on SCP affected by three factors is expressed as Eq. (8).
$$\begin{array}{c} f_{liq}=\frac{n_0}{n_{liq}-n_0}\cdot \left[ \frac{p_0+k_{E2}\cdot E_{com}^{2}+k_{E1}\cdot E_{com}+k_{E0}}{E}\cdot \frac{\left( 1-v \right) \cdot a^2}{8\varepsilon _0h} \right.\\ \left. +\frac{E}{p_0+k_{E2}\cdot E_{com}^{2}+k_{E1}\cdot E_{com}+k_{E0}}\cdot \frac{2\varepsilon _0h}{\left( 1-v \right)} \right]\\ \end{array}$$
where, $E_{com}=E_{in}-E_T$, which is the compensated input power after temperature compensation.

The minimum focal length of a single ETL is obtained when $p_a=\frac {4E\cdot \varepsilon _0\cdot h}{\left ( 1-v \right ) \cdot a}-p_0$. The value of the expression $\frac {4E\cdot \varepsilon _0\cdot h}{\left ( 1-v \right ) \cdot a}-p_0$ is far larger than $p_a$ for the Polydimethylsiloxane (PDMS) membrane (a common material for the ETL’s membrane) [21,23,29,31,32]. Therefore, the focal length of a single ETL $f_{liq}$ monotonically decreases with the increment of the compensated input power $E_{com}$.

2.2.2 Optical axis of a single ETL based on SCP

i). Effect of membrane’s deformation on the optical axis

In the ETL based on SCP, the pressure from the actuator $p_a$ is evenly applied to deform the membrane. With the evenly applied pressure and uniform membrane, the membrane’s axis of symmetry will not deviate from its original position [39]. Since the optical axis of ETL is determined by the membrane’s axis of symmetry, the optical axis of a single ETL maintains its position under the effect of membrane’s deformation. This proposition is verified by the experiments in section 5.2.1.

ii). Effect of gravity on the optical axis

The effect of gravity on the optical axis reaches the maximum when the optical axis of the lens is horizontal. The gravity of the liquid may break the axisymmetric shape of the membrane, so that the optical axis may be biased. Given a fixed membrane edge, the deflection of the membrane caused by gravity is characterized by the coefficient $k_{gravity}$ [40].

$$k_{gravity}=\frac{p_0+p_a}{2\left( \rho _{liq}-\rho _{sur} \right) \cdot g\cdot a}$$
where, $\rho _{liq}$ is the density of the filled liquid; $\rho _{sur}$ is the density of the surroundings; $g$ is the gravity acceleration.

For the ETL based on SCP, $k_{gravity}$ always exceeds thirty [21–26,29,31]. Under this condition, the peak deflection of the membrane stays on the original optical axis within 0.02% error [40]. The effect of gravity on the optical axis becomes even smaller as the applied pressure increases during the autofocus or zoom process. Therefore, gravity has little effect on the optical axis of the ETL, which is verified by the experiments in section 5.2.1.

iii). Effect of temperature on the optical axis

As mentioned earlier, the temperature changes the refractive index and the volume of the liquid. Both are isotropic properties. Therefore, they do not break the axisymmetric profile of the membrane, keeping the optical axis in its original position. This fact is also verified by the experiments in section 5.2.1.

In summary, it is concluded that the membrane’s deformation, gravity and temperature do not affect the position of the optical axis in the ETL based on SCP.

2.3 Optical properties of an ETL group based on SCP

The ETL group based on SCP includes a single ETL based on SCP and a prime lens, which is designed for machine vision applications.

2.3.1 Focal length of an ETL group based on SCP

The focal length of the ETL group is calculated based on the compound lens formula in Eq. (10) [41]. It holds for both the front- and back-ETL configurations.

$$f_{com}=\frac{f_{liq}\cdot f_{pri}}{f_{liq}+f_{pri}-d_{lens}}$$
The focal length of the prime lens $f_{pri}$ is a constant parameter. The focal length of the single ETL $f_{liq}$ is a variable as shown in Eq. (8). The distance $d_{lens}$ is the distance between the optical centers of the prime lens and ETL. The optical center of the prime lens is unchanged. The optical center of the ETL is related to the positions of optical principal points. The positions of front and back principal points of the ETL are calculated as Eq. (11) [42].
$$P_{front}=-\frac{f_{liq}\left( n_{liq}-n_0 \right)}{R_{back}n_{liq}},\ P_{back}=-\frac{f_{liq}\left( n_{liq}-n_0 \right)}{R_{mem}n_{liq}}$$
where, $P_{front}$ and $P_{back}$ are the front and back principal points in the ETL; $R_{back}=\infty$ due to the plano-convex design.

It is derived that $P_{front} \approx 0$ and $P_{back}=-\frac {n_0}{n_{liq}}$ by substituting Eq. (4) into Eq. (11). In the back principal point $P_{back}$, only the refractive index of the liquid in the ETL $n_{liq}$ is a variable, which varies with the temperature. It changes by around 0.01 when the temperature ranges from 20 $^{\circ }C$ to 50 $^{\circ } C$ [36–38]. With this prior knowledge, the temperature effect moves the back principal point by around $5 \times 10^{-3} mm$, which is quite small. Therefore, both the front and back principal points are fixed in the ETL, leading to a fixed optical center in the ETL.

Consequently, the distance $d_{lens}$ is also a constant parameter. The focal length of an ETL group $f_{com}$ becomes a function of the focal length of a single ETL $f_{liq}$. As discussed in section 2.2.1, $f_{liq}$ monotonically decreases when the compensated input power $E_{com}$ increases. The compound focal length of an ETL group $f_{com}$ therefore also monotonically decreases with the increment of the compensated input power $E_{com}$. The parameters in Eqs. (8) and (10) are difficult to calibrate mathematically due to their non-linearity. Therefore, a cubic polynomial is utilized to interpolate the relationship between $f_{com}$ and $E_{com}$ as shown in Eq. (12). The error induced by the interpolation model is investigated by the simulation in section 5.1.1.

$$f_{com}=k_{f3}\cdot E_{com}^3+k_{f2}\cdot E_{com}^2+k_{f1}\cdot E_{com}+k_{f0}$$
where, $k_{f3},k_{f2},k_{f1},k_{f0}$ are the coefficients of the fitting cubic polynomial.
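As an illustration of Eqs. (10) and (12), the sketch below fits the cubic polynomial to compound focal lengths computed from sampled single-ETL focal lengths. The sample values of $f_{liq}$ (spanning the 50 mm to 120 mm range of the ETL used later in section 5.2), $f_{pri}$ and $d_{lens}$ are assumptions of this sketch, not calibrated data.

```python
import numpy as np

# Sketch of Eqs. (10) and (12): compound focal length of the ETL group and its
# cubic interpolation against the compensated input power. The f_liq samples,
# f_pri and d_lens below are illustrative assumptions (metres, mA).
E_com = np.linspace(80, 225, 15)                  # compensated input power (mA)
f_liq = np.linspace(120e-3, 50e-3, 15)            # assumed single-ETL focal lengths
f_pri, d_lens = 30e-3, 3e-3                       # prime-lens focal length, lens spacing

f_com = f_liq * f_pri / (f_liq + f_pri - d_lens)  # compound lens formula, Eq. (10)

# cubic fit f_com = k_f3*E^3 + k_f2*E^2 + k_f1*E + k_f0, Eq. (12)
k_f3, k_f2, k_f1, k_f0 = np.polyfit(E_com, f_com, 3)
rmse = np.sqrt(np.mean((f_com - np.polyval([k_f3, k_f2, k_f1, k_f0], E_com))**2))
print("cubic coefficients:", k_f3, k_f2, k_f1, k_f0, "fit RMSE (m):", rmse)
```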

2.3.2 Optical axis of an ETL group based on SCP

The optical axis of an ETL group is affected by the optical axes of the single ETL and prime lens. The optical axis of the prime lens remains unchanged all the time. Based on the analyses in section 2.2.2, the optical axis of the single ETL based on SCP does not change. Therefore, the optical axis of an ETL group based on SCP is fixed.

3. Camera model of the ETL group based on SCP

The camera model mathematically describes the projective process parameterized by the optical properties of the lens. The camera model of the camera system with ETL group based on SCP is established based on the analyses of its optical properties in section 2.3.

The imaging process of the camera model is illustrated in Fig. 2. There are three sequential steps for the camera model to transfer the points of an object from the world coordinates to the frame buffer coordinates, as explained below.

Fig. 2. The imaging process of the ETL group based on SCP.

3.1 Points in the world coordinates ${\textbf{P}_\textbf{W}} = {\left [ {{X_W},{Y_W},{Z_W}} \right ]^T}$ to points in the camera coordinates ${\textbf{P}_\textbf{C}} = {\left [ {{X_C},{Y_C},{Z_C}} \right ]^T}$

The world coordinates $(O_W,X_W,Y_W,Z_W)$ are defined by the user. They are normally placed on the calibration target during camera calibration process. In the camera coordinates $(O_C,X_C,Y_C,Z_C)$, the origin is set at the optical center of the lens. The $Z_C$ axis coincides with the optical axis of the lens. The $X_C$ and $Y_C$ axes are parallel to the cross-section of the lens.

The points in the world coordinates $\mathbf {P_W}$ are transferred to the ones in the camera coordinates through a rotation transformation $\mathbf {R}$ and a translation transformation $\mathbf {T}$ as shown in Eq. (13).

$${\textbf{{P}}_\textbf{{C}}} = \textbf{{R}} \cdot {\textbf{{P}}_\textbf{{W}}} + \textbf{{T}}$$
where,
$$\mathbf{R}=\left[ \begin{matrix} r_1 & r_2 & r_3\\ r_4 & r_5 & r_6\\ r_7 & r_8 & r_9\\ \end{matrix} \right] ,\ \mathbf{T}=\left[ \begin{array}{c} t_x\\ t_y\\ t_z\\ \end{array} \right]$$
As presented in section 2.3.1, the optical center of the ETL group is fixed. Therefore, the rotation and translation matrices $[\bf {R},\bf {T}]$ in the ETL group are constant parameters due to the fixed optical center.

3.2 Points in the camera coordinates ${\textbf{P}_\textbf{C}} = {\left [ {{X_C},{Y_C},{Z_C}} \right ]^T}$ to points in the ideal frame buffer coordinates ${\textbf{P}_\textbf{u}} = {\left [ {{u},{v}} \right ]^T}$

The points in the camera coordinates $\mathbf {P_C}$ are transformed to the points in the ideal frame buffer coordinates $\mathbf {P_u}$ through a perspective projection process represented in Eq. (15).

$$\left[ \begin{array}{c} u\\ v\\ 1\\ \end{array} \right] =\lambda \cdot \mathbf{A}\left( E_{com,j} \right) \cdot \left[ \begin{array}{c} X_C\\ Y_C\\ Z_C\\ \end{array} \right] =\lambda \cdot \left[ \begin{matrix} f_x\left( E_{com,j} \right) & 0 & u_0\\ 0 & f_y\left( E_{com,j} \right) & v_0\\ 0 & 0 & 1\\ \end{matrix} \right] \cdot \left[ \begin{array}{c} X_C\\ Y_C\\ Z_C\\ \end{array} \right]$$
where, $\lambda = \frac {1}{Z_C}$; $\mathbf {A}(E_{com,j})$ is a 3 x 3 camera intrinsic matrix describing perspective projection process of the ETL group at the $j^{th}$ compensated input power $E_{com,j}$; $f_x(E_{com,j})$ and $f_y(E_{com,j})$ are respectively the focal lengths of the ETL group along $u$ and $v$ axes at the $j^{th}$ compensated input power $E_{com,j}$; $(u_0,v_0)$ is the image center of the camera-lens system.

The intrinsic matrix contains the information of focal length and image center in the camera-lens system. The image center is the intersection point of the optical axis and the imaging sensor. As discussed in section 2.3.2, the optical axis of the ETL group based on SCP is fixed, which leads to a fixed image center. Based on the above fact and Eq. (12), the intrinsic matrix regarding the compensated input power $\mathbf {A}(E_{com,j})$ is expressed as Eq. (16).

$$\begin{array}{c} \boldsymbol{A}\left( E_{com,j} \right) =\boldsymbol{A}_0+\boldsymbol{A}_1\cdot E_{com,j}+\boldsymbol{A}_2\cdot E_{com,j}^2+\boldsymbol{A}_3\cdot E_{com,j}^3=\left[ \begin{matrix} k_{f0,x} & 0 & u_0\\ 0 & k_{f0,y} & v_0\\ 0 & 0 & 1\\ \end{matrix} \right] +\\ \left[ \begin{matrix} k_{f1,x} & 0 & 0\\ 0 & k_{f1,y} & 0\\ 0 & 0 & 0\\ \end{matrix} \right] \cdot E_{com,j}+\left[ \begin{matrix} k_{f2,x} & 0 & 0\\ 0 & k_{f2,y} & 0\\ 0 & 0 & 0\\ \end{matrix} \right] \cdot E_{com,j}^2+\left[ \begin{matrix} k_{f3,x} & 0 & 0\\ 0 & k_{f3,y} & 0\\ 0 & 0 & 0\\ \end{matrix} \right] \cdot E_{com,j}^3 \end{array} $$
where, $k_{fi,x}$ and $k_{fi,y} (i=0,1,2,3)$ are respectively the coefficients describing the relationship between the focal length of the ETL group along the $u$ and $v$ axes in the ideal frame buffer coordinates and the compensated input power.
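A minimal sketch of Eqs. (15)–(16) is given below: the intrinsic matrix is assembled as a cubic polynomial in the compensated input power and used to project a camera-frame point. The coefficient values are the simulated ones from section 5.1.2 and serve only as placeholders.

```python
import numpy as np

# Sketch of the power-dependent intrinsic matrix, Eq. (16), and the projection
# of Eq. (15). Coefficients are the simulated values of section 5.1.2
# (placeholders, in pixels and mA).
k_fx = [5380.0, -7.0, 0.02, -3e-5]       # k_f0,x .. k_f3,x
k_fy = [5430.0, -7.0, 0.02, -3e-5]       # k_f0,y .. k_f3,y
u0, v0 = 900.0, 550.0                    # image center

def intrinsic_matrix(E_com):
    fx = sum(k * E_com**i for i, k in enumerate(k_fx))
    fy = sum(k * E_com**i for i, k in enumerate(k_fy))
    return np.array([[fx, 0.0, u0], [0.0, fy, v0], [0.0, 0.0, 1.0]])

def project(P_C, E_com):
    """Camera-frame point [X_C, Y_C, Z_C] to ideal frame buffer coordinates (u, v)."""
    uvw = intrinsic_matrix(E_com) @ P_C
    return uvw[:2] / uvw[2]              # lambda = 1 / Z_C

print(project(np.array([2.0, 1.0, 40.0]), E_com=150.0))
```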

3.3 Points in the ideal frame buffer coordinates ${\textbf{P}_\textbf{u}} = {\left [ {{u},{v}} \right ]^T}$ to points in the distorted frame buffer coordinates ${\textbf{P}_\textbf{d}} = {\left [ {{u_d},{v_d}} \right ]^T}$

As a result of imperfections in lenses and assembly errors in optical systems, distortion is considered in the camera model. Three types of distortion have been well adopted in the machine vision community. Radial distortion is caused by an imperfect lens shape and introduces radial positional errors, whereas decentering and thin prism distortion are caused by lens and camera assembly errors and introduce both radial and tangential errors [43]. Thin prism distortion is caused by the tilt of the lens assembly, which can be compensated by the difference in principal distances along the $u$ and $v$ axes. As a result, radial and decentering distortion are analyzed in this paper. The point in the distorted frame buffer coordinates ${\textbf{P}_\textbf{d}} = {\left [ {{u_d},{v_d}} \right ]^T}$ is expressed by the distortion model [43,44] in Eq. (17).

$$\begin{array}{c} u_d=u+u\cdot k_1\cdot \left( u^2+v^2 \right) +p_1\cdot \left( 3u^2+v^2 \right) +2p_2\cdot u\cdot v\\ v_d=v+v\cdot k_1\cdot \left( u^2+v^2 \right) +2p_1\cdot u\cdot v+p_2\cdot \left( 3u^2+v^2 \right)\\ \end{array}$$
where, $k_1$ is the coefficient of radial distortion; $p_1$ and $p_2$ are the coefficients of decentering distortion.

These coefficients $k_1, p_1, p_2$ are regarded as constants only when the focal length is fixed and the incident angle of light is small [45]. However, the focal length and the incident angle vary during the autofocus process in the camera system with the ETL group based on SCP. $k_1, p_1, p_2$ become variables in the ETL group. A quadratic polynomial is commonly used to describe the variation in the distortion parameters $k_1, p_1, p_2$ with regard to the focal length [46,47]. Therefore, the distortion variables $k_{1,j}, p_{1,j}, p_{2,j}$ at the $j^{th}$ compensated input power $E_{com,j}$ are expressed in Eq. (18).

$$\begin{array}{c} k_{1,j}=a_1 \cdot [\frac{1}{f_{mean}(E_{com,j})}]^2 + a_2 \cdot \frac{1}{f_{mean}(E_{com,j})} + a_3\\ p_{1,j}=a_4 \cdot [\frac{1}{f_{mean}(E_{com,j})}]^2 + a_5 \cdot \frac{1}{f_{mean}(E_{com,j})} + a_6\\ p_{2,j}=a_7 \cdot [\frac{1}{f_{mean}(E_{com,j})}]^2 + a_8 \cdot \frac{1}{f_{mean}(E_{com,j})} + a_9 \end{array}$$
where, $\mathbf {D}=[a_i (i=1,2,\ldots ,9)]$ are the model coefficients of the polynomial; $f_{mean}(E_{com,j})=[f_{x}(E_{com,j})+f_{y}(E_{com,j})]/2$.
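A minimal sketch of Eqs. (17)–(18) follows: the distortion coefficients are evaluated as quadratics in $1/f_{mean}$ and then applied to an ideal image point. The $a_i$ values are the simulated ones from section 5.1.2, and treating $(u,v)$ as coordinates relative to the image center is an assumption of this sketch.

```python
import numpy as np

# Sketch of Eqs. (17)-(18). The a_i values are the simulated coefficients of
# section 5.1.2 (placeholders); u, v are taken relative to the image center.
a = [-0.067, -5.19e-5, 1.09e-8,          # a1..a3 -> k1
     -0.0077, 0.103, -1.93e-5,           # a4..a6 -> p1
     0.11, -0.0257, 5.6e-6]              # a7..a9 -> p2

def distortion_coeffs(f_mean):
    """k1, p1, p2 as quadratic functions of 1/f_mean, Eq. (18)."""
    inv = 1.0 / f_mean
    k1 = a[0] * inv**2 + a[1] * inv + a[2]
    p1 = a[3] * inv**2 + a[4] * inv + a[5]
    p2 = a[6] * inv**2 + a[7] * inv + a[8]
    return k1, p1, p2

def distort(u, v, f_mean):
    """Ideal to distorted frame buffer coordinates, Eq. (17)."""
    k1, p1, p2 = distortion_coeffs(f_mean)
    r2 = u**2 + v**2
    u_d = u + u * k1 * r2 + p1 * (3 * u**2 + v**2) + 2 * p2 * u * v
    v_d = v + v * k1 * r2 + 2 * p1 * u * v + p2 * (3 * u**2 + v**2)
    return u_d, v_d

print(distort(100.0, 80.0, f_mean=4700.0))
```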

4. Proposed calibration method for the camera system with ETL group based on SCP

The mathematical model of the camera system with the ETL group is presented in section 3. As presented in Eq. (16) of the camera model, the intrinsic matrix is a function of $E_{com,j}$, the compensated input power to the ETL after temperature compensation. Therefore, the temperature compensation method is presented first. As explained in sections 2.2.1 (iii) and 2.2.2 (iii), the temperature effect is compensated directly on the input power and is unrelated to the other intrinsic and extrinsic parameters. As a result, the calibration of the temperature effect can be performed independently before calibrating the other parameters.

Another four steps are conducted afterwards to solve all the intrinsic and extrinsic parameters in the proposed camera model of the ETL group. The parameters to be solved are the compensated input power after temperature compensation $E_{com,j}$, extrinsic parameters $[\mathbf {R,T}]$, intrinsic parameters $\mathbf {A}_0,\mathbf {A}_1,\mathbf {A}_2,\mathbf {A}_3$ and distortion parameters $a_i (i=1,2,\ldots ,9)$.

4.1 Calibrating the temperature effect

The compensation input power induced by temperature $E_T$ is a function of temperature $T$ and the uncompensated input power of actuator $E_{in}$ as shown in Eq. (7). The compound temperature effect on the focal length of an ETL group is difficult to analyze directly. An experimental method is proposed to solve this temperature compensation problem without any additional complicated setups.

The calibration principle of the temperature effect is elaborated as follows. When the temperature inside the ETL varies, the focal length of the ETL group varies accordingly. The variation in the focal length blurs the captured images of the checkerboard based on the Gaussian optical formula [48]. In other words, if the input power of the actuator can be varied to maintain the image sharpness when the temperature varies, the variation in the input power of the actuator is the compensation input power induced by temperature $E_T$.

The experimental calibration setup is shown in Fig. 3. The magnification of this calibration setup is set at around 1x, because the variation in image sharpness is sensitive to the focal length’s variation at high magnification. Meanwhile, the temperature effect is independent of the magnification.

Fig. 3. Setup for calibrating the temperature effect.

A look-up table of $E_T$ can be established based on the flow chart of Fig. 4. The image sharpness of the captured checkerboard is evaluated by the image sharpness assessment function based on the cumulative probability of blur detection (CPBD) [49]. The initial $E_{in}$ is normally zero and the initial temperature is normally the room temperature.

Fig. 4. Flow chart to calibrate the temperature effect.

Therefore, the final compensated input power of the ETL after temperature compensation $E_{com}$ can be expressed as $E_{com}= E_{in}-E_{T}(T,E_{in})$. When the ETL is used in real applications, the input power to the ETL is varied with the temperature’s variation based on the calibrated look-up table. This process is achieved in real-time.
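The sketch below illustrates this real-time look-up: the calibrated table $E_T(T, E_{in})$ is interpolated at the current temperature reading, and the commanded power is corrected as $E_{com}=E_{in}-E_T$. The grid spacing matches the calibration ranges reported in section 5.2.2, but the table values here are placeholders rather than calibrated data.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Sketch of the real-time temperature compensation of section 4.1. The grid
# follows the calibration ranges of section 5.2.2; ET_table holds placeholder
# values and would be filled by the procedure of Fig. 4.
T_grid = np.arange(27.0, 38.0, 1.0)              # temperature samples (deg C)
E_grid = np.arange(40.0, 241.0, 20.0)            # uncompensated input power samples (mA)
ET_table = np.zeros((len(T_grid), len(E_grid)))  # calibrated E_T values (placeholder)

lut = RegularGridInterpolator((T_grid, E_grid), ET_table,
                              bounds_error=False, fill_value=None)

def compensated_power(T, E_in):
    """E_com = E_in - E_T(T, E_in), interpolated from the calibrated look-up table."""
    return E_in - float(lut([[T, E_in]])[0])

print(compensated_power(T=31.5, E_in=160.0))
```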

4.2 Estimating $\mathbf {A}_0$ in the off-state of the ETL group

The off-state of the ETL group is the state with no input power. In this state, the intrinsic matrix $\mathbf {A}(E_{com,j})=\mathbf {A}(0)=\mathbf {A}_0$. The focal length of the ETL group reaches its maximum in the off-state, which corresponds to a long working distance. Therefore, the matrix $\mathbf {A}_0$ can be calibrated by common monofocal calibration methods [8–15].

4.3 Estimating $\mathbf {R},t_x,t_y$ using the radial alignment constraint

As shown in the green dashed line of Fig. 2, the direction of the vector from the object point $\mathbf {P_C}(X_C,Y_C,Z_C)$ to the optical axis in the camera coordinates is radially aligned with the direction of the vector from the point $\mathbf {P_u}(u,v)$ to the origin $O_u$ in the ideal frame buffer coordinates. This is called the radial alignment constraint (RAC) in Eq. (19).

$$\frac{u}{v}=\frac{X_C}{Y_C}\Leftrightarrow \frac{u}{v}=\frac{r_1\cdot X_W+r_2\cdot Y_W+r_3\cdot Z_W+t_x}{r_4\cdot X_W+r_5\cdot Y_W+r_6\cdot Z_W+t_y}$$
The RAC is independent of the radial distortion, the effective focal length, and the z component of 3D translation vector $t_z$ [9]. Therefore, the RAC is satisfied in the ETL group based on SCP based on the analyses in section 2.3.

For each $i^{th}$ calibration point at the $j^{th}$ compensated input power $E_{com,j}$ with coordinates $(u_{ij},v_{ij})$ in the ideal frame buffer coordinates, Eq. (20) is derived from Eq. (19).

$$\begin{array}{c} \left[ \begin{matrix} v_{ij}X_{W,ij} & v_{ij}Y_{W,ij} & v_{ij}Z_{W,ij} & v_{ij} & -u_{ij}X_{W,ij} & -u_{ij}Y_{W,ij} & -u_{ij}Z_{W,ij}\\ \end{matrix} \right] \cdot\\ \left[ \begin{matrix} \frac{r_1}{t_y} & \frac{r_2}{t_y} & \frac{r_3}{t_y} & \frac{t_x}{t_y} & \frac{r_4}{t_y} & \frac{r_5}{t_y} & \frac{r_6}{t_y}\\ \end{matrix} \right]^T =u_{ij}\\ \end{array}$$
With more than seven equations of Eq. (20), an over-determined system of linear equations can be established. The unknowns $r_i (i=1,2,\ldots ,9), t_x$ and $t_y$ can be solved with additional conditions from orthogonality of $\mathbf {R}$. More details about the RAC can be found in Tsai’s method [9].
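The sketch below assembles the over-determined system of Eq. (20) and recovers $\mathbf{R}$, $t_x$ and $t_y$. The sign disambiguation and refinement details of Tsai’s method [9] are omitted, so this is only a sketch under those assumptions; the input arrays are hypothetical.

```python
import numpy as np

# Sketch of the RAC step (section 4.3, Eq. (20)). uv holds ideal frame buffer
# coordinates relative to the image center (N x 2) and PW the matching world
# coordinates (N x 3); both are assumed inputs. The sign handling and
# refinements of Tsai's method [9] are omitted.
def solve_rac(uv, PW):
    u, v = uv[:, 0], uv[:, 1]
    Xw, Yw, Zw = PW[:, 0], PW[:, 1], PW[:, 2]
    M = np.column_stack([v * Xw, v * Yw, v * Zw, v,
                         -u * Xw, -u * Yw, -u * Zw])   # rows of Eq. (20)
    x, *_ = np.linalg.lstsq(M, u, rcond=None)          # [r1..r3, tx, r4..r6] / ty
    ty = 1.0 / np.linalg.norm(x[:3])                   # unit-norm first row of R
    row1 = x[:3] * ty                                  # (r1, r2, r3)
    row2 = x[4:] * ty                                  # (r4, r5, r6)
    tx = x[3] * ty
    row3 = np.cross(row1, row2)                        # (r7, r8, r9) by orthogonality
    R = np.vstack([row1, row2, row3])
    return R, tx, ty
```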

4.4 Estimating the intrinsic parameters $\mathbf {A}_1$, $\mathbf {A}_2$, $\mathbf {A}_3$ and $t_z$

Equation (21) and Eq. (22) can be derived to estimate the $\mathbf {A}_1$, $\mathbf {A}_2$, $\mathbf {A}_3$ from Eq. (13) and Eq. (15).

$$\left[ \begin{matrix} X_{C,ij}\cdot E_{com,j} & X_{C,ij}\cdot E_{com,j}^{2} & X_{C,ij}\cdot E_{com,j}^{3} & u_{ij}-u_0\\ \end{matrix} \right] \cdot \left[ \begin{matrix} k_{f1,x} & k_{f2,x} & k_{f3,x} & t_z\\ \end{matrix} \right] ^T=\left( r_7\cdot X_{W,ij}+r_8\cdot Y_{W,ij}+r_9\cdot Z_{W,ij} \right) \cdot \left( u_{ij}-u_0 \right) -k_{f0,x}\cdot X_{C,ij}$$
$$\left[ \begin{matrix} Y_{C,ij}\cdot E_{com,j} & Y_{C,ij}\cdot E_{com,j}^{2} & Y_{C,ij}\cdot E_{com,j}^{3} & v_{ij}-v_0\\ \end{matrix} \right] \cdot \left[ \begin{matrix} k_{f1,y} & k_{f2,y} & k_{f3,y} & t_z\\ \end{matrix} \right] ^T=\left( r_7\cdot X_{W,ij}+r_8\cdot Y_{W,ij}+r_9\cdot Z_{W,ij} \right) \cdot \left( v_{ij}-v_0 \right) -k_{f0,y}\cdot Y_{C,ij}$$
With the calibration points obtained at more than three compensated input powers $E_{com,j}$, two equation sets are established by stacking instances of Eq. (21) or Eq. (22). The parameters $\mathbf {A}_1$, $\mathbf {A}_2$, $\mathbf {A}_3$ and $t_z$ can be calculated as the least squares solutions of the above two equation sets. Meanwhile, each equation set can be stably solved if and only if its coefficient matrix has full column rank. This is proved in Appendix I.
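As an illustration of this step, the sketch below stacks Eq. (21) for the $x$-channel and solves it by least squares; the $y$-channel of Eq. (22) is analogous. The input arrays (one entry per calibration point and input power) are assumed to be precomputed from the previous steps.

```python
import numpy as np

# Sketch of section 4.4 (x-channel, Eq. (21)). XC is X_C from Eq. (13),
# ZW_rot = r7*X_W + r8*Y_W + r9*Z_W, u are ideal image coordinates and E_com
# the compensated input powers; all arrays have one entry per calibration
# point and are assumed inputs. k_f0x and u0 come from the off-state step.
def solve_x_channel(XC, ZW_rot, u, u0, E_com, k_f0x):
    A = np.column_stack([XC * E_com, XC * E_com**2, XC * E_com**3, u - u0])
    b = ZW_rot * (u - u0) - k_f0x * XC
    (k_f1x, k_f2x, k_f3x, t_z), *_ = np.linalg.lstsq(A, b, rcond=None)
    return k_f1x, k_f2x, k_f3x, t_z
```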

4.5 Estimating the distortion parameters $a_i (i=1,2,\ldots ,9)$

Since the distortion is expected to be small, the parameters other than distortion can be calibrated with the above calibration procedures by setting the distortion parameters to zero, following the approach in [8]. The distortion parameters are then calculated by eliminating the difference between the points in the ideal and distorted frame buffer coordinates.

Equation (23) is derived from Eq. (17) as follows.

$$\left[ \begin{matrix}{} u_{ij}\cdot \left( u_{ij}^2+v_{ij}^2 \right) & 3u_{ij}^2+v_{ij}^2 & 2\cdot u_{ij}\cdot v_{ij}\\ v_{ij}\cdot \left( u_{ij}^2+v_{ij}^2 \right) & 2\cdot u_{ij}\cdot v_{ij} & 3u_{ij}^2+v_{ij}^2\\ \end{matrix} \right] \cdot \left[ \begin{array}{c} k_{1,j}\\ p_{1,j}\\ p_{2,j}\\ \end{array} \right] =\left[ \begin{array}{c} u_{d,ij}-u_{ij}\\ v_{d,ij}-v_{ij}\\ \end{array} \right]$$
With $M (M \geq 2)$ calibration points at the $j^{th}$ compensated input power, the distortion variables $k_{1,j}$, $p_{1,j}$ and $p_{2,j}$ at that input power can be calculated as the least squares solution of the equation set formed by stacking $M$ instances of Eq. (23). Afterwards, the distortion coefficients $a_i (i=1,2,\ldots ,9)$ are solved based on Eq. (18) with data from at least three compensated input powers.
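The sketch below mirrors these two stages: a per-power least squares solve of the stacked Eq. (23), followed by quadratic fits in $1/f_{mean}$ for the nine coefficients of Eq. (18). The matched point arrays are assumed inputs.

```python
import numpy as np

# Sketch of section 4.5. uv (ideal) and uv_d (distorted) are N x 2 arrays of
# matched frame buffer points at one compensated input power; assumed inputs.
def solve_distortion_at_power(uv, uv_d):
    u, v = uv[:, 0], uv[:, 1]
    rows_u = np.column_stack([u * (u**2 + v**2), 3 * u**2 + v**2, 2 * u * v])
    rows_v = np.column_stack([v * (u**2 + v**2), 2 * u * v, 3 * u**2 + v**2])
    M = np.vstack([rows_u, rows_v])                       # stacked Eq. (23)
    d = np.concatenate([uv_d[:, 0] - u, uv_d[:, 1] - v])
    (k1, p1, p2), *_ = np.linalg.lstsq(M, d, rcond=None)
    return k1, p1, p2

def fit_distortion_coeffs(f_means, k1s, p1s, p2s):
    """Quadratic fits against 1/f_mean, Eq. (18); returns a1..a9."""
    inv_f = 1.0 / np.asarray(f_means)
    a123 = np.polyfit(inv_f, k1s, 2)
    a456 = np.polyfit(inv_f, p1s, 2)
    a789 = np.polyfit(inv_f, p2s, 2)
    return np.concatenate([a123, a456, a789])
```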

4.6 Bundle adjustment to refine the initially estimated camera parameters

Due to the noise in the calibration data, the initially obtained camera parameters are biased. To eliminate the errors induced by noise and numerical calculation, bundle adjustment is utilized to refine the calibrated camera parameters. Bundle adjustment (BA) [50] minimizes the errors between the observed points in the distorted frame buffer coordinates $P_d$ and the projected points $\widehat {P_d}$ calculated from the world coordinates with the calibrated camera parameters. The objective function of the bundle adjustment is presented in Eq. (24).

$$F=\underset{\boldsymbol{R,T,A,D}}{arg\min}\sum_{i=1}^M{\sum_{j=1}^N{\left| P_{d,ij}-\widehat{P_{d,ij}}\left( P_{W,ij},\boldsymbol{R,T,A,D,}E_{com,j} \right) \right|^2}}$$
The optimization method used in this paper is the Levenberg-Marquardt algorithm [51].
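A minimal sketch of this refinement is given below, using SciPy’s Levenberg-Marquardt solver. The forward model of section 3 (Eqs. (13)–(18)) is injected as a user-supplied function `project_fn`, which is a hypothetical placeholder here.

```python
import numpy as np
from scipy.optimize import least_squares

# Sketch of the bundle adjustment of Eq. (24). project_fn(params, PW, E_com)
# is a placeholder for the full forward model of section 3 (world point to
# distorted frame buffer point); params packs R, T, A and D into one vector.
def bundle_adjust(params0, PW, Pd_obs, E_com, project_fn):
    def residuals(params):
        return (project_fn(params, PW, E_com) - Pd_obs).ravel()
    result = least_squares(residuals, params0, method="lm")  # Levenberg-Marquardt
    return result.x
```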

5. Numerical simulations and experiments

5.1 Simulations

Two simulations are conducted in this paper. The first evaluates the effectiveness and accuracy of the proposed cubic polynomial model for the focal length of an ETL group presented in section 2.3.1. The second investigates the calibration performance affected by three factors: the noise level in the calibration data, the error in the ETL’s input power and the number of calibration images. A separate simulation of the temperature effect is omitted, because it is equivalent to the simulation of the error in the ETL’s input power in this section: the temperature effect is compensated through the input power of the actuator as mentioned in section 4.1.

5.1.1 Evaluation of the modeling errors in the focal length of an ETL group

The focal length of an ETL group is formulated based on Eq. (8) and Eq. (10). The parameters in Eq. (8) and Eq. (10) are difficult to calibrate due to their non-linearity. Therefore, a third-degree polynomial is utilized to model the focal length of an ETL group with regard to the compensated input power as shown in Eq. (12). The modeling errors are analyzed by simulation with physical parameters taken from real cases [21,23,29,32,33].

The parameters are elaborated as follows:

$E=3 MPa, a=5 mm, h= 100 \mu m, \textit {v}=0.48, n_{liq}=1.4, n_0=1, \varepsilon _0 = 5\%, p_0=1200 Pa, p_a=-0.004 \cdot E_{com}^2+7.8 \cdot E_{com}+2.061 \cdot 10^{-13}, f_{pri}=30 mm, d_{lens}=3 mm.$

The focal length of an ETL group can be calculated as the ground truth based on Eq. (8) and Eq. (10) with the above parameters. The modeling errors of the polynomials with different degrees are presented in Fig. 5.

Fig. 5. (a) Modeling errors of the polynomials with different degrees; (b) enlarged view of the olive yellow area in (a).

In consideration of both accuracy and number of parameters, a third-degree polynomial is adopted in this paper, whose root mean square error (RMSE) is $1.09 \times 10^{-3} mm$.

5.1.2 Evaluation of the calibration performance affected by different factors

The purpose of the second simulation is to study the effects of the noise level in calibration data, accuracy of input power and number of calibration images on calibration performance. All simulations in this section are conducted under the following default setups unless otherwise stated.

Default setups. The simulated camera system with the ETL group has the following parameters: $k_{f0,x}=5380,k_{f1,x}=-7,k_{f2,x}=0.02,k_{f3,x}=-3 \cdot 10^{-5},k_{f0,y}=5430,k_{f1,y}=-7,k_{f2,y}=0.02,k_{f3,y}=-3 \cdot 10^{-5},u_0=900,v_0=550,a_1=-0.067, a_2=-5.19\cdot 10^{-5}, a_3=1.09\cdot 10^{-8},a_4=-0.0077,a_5=0.103,a_6=-1.93\cdot 10^{-5},a_7=0.11,a_8=-0.0257,a_9=5.6\cdot 10^{-6}$. The resolution of the camera is 1936 x 1458 pixels, and its pixel size is 4.54 $\mu m$. These parameters are selected based on practical situations. As described in section 4, the estimation of $k_{f0,x},k_{f0,y}, u_0, v_0$ is conducted when the ETL is in the off-state after the temperature compensation. In this paper, this step is conducted using Zhang’s method [8]. The uncertainty is around 0.3 % for $k_{f0,x},k_{f0,y}$ and around one pixel for $u_0, v_0$ in practice [8]. Since this step has been well evaluated, 0.3 % and one pixel uncertainty are respectively added to the parameters $k_{f0,x},k_{f0,y}$ and $u_0, v_0$ after this calibration step in the simulation.

The calibration procedures in sections 4.3–4.6 require calibration data at different compensated input powers of the actuator. Therefore, a checkerboard with 15 x 13 squares of 1.5 mm is gradually moved closer to the camera system. Meanwhile, the compensated input power $E_{com}$ is varied to ensure the captured checkerboard remains sharp. The displacement of the checkerboard is $Z_{W} = [75,60,48,43,35,31,28,22,19,16,13,10,8,4,2]^T$. The corresponding compensated input power $E_{com}$ is $[80,90,100,110,120,130,140,150,160,170,180,190,200,210,225]^T$. The simulated rotation and translation parameters between the points in the world coordinates and the camera coordinates are $\mathbf {om} = [0.2,0.2,0.2]^T$ and $\mathbf {T}=[-11,-4,48]^T$ ($\mathbf {om}$ is the Rodrigues form of $\mathbf {R}$).

The noise configurations in the calibration are summarized as follows. One micron of random noise is added to the corner positions of the checkerboard $X_W,Y_W$. Ten microns of random noise is added to the displacement of the checkerboard $Z_W$. 0.3 % uncertainty is added to $k_{f0,x},k_{f0,y}$. One pixel of random noise is added to $u_0, v_0$. 0.4 mA of random noise is added to the compensated input power of the actuator in consideration of the temperature effect and the sensor resolution. Gaussian noise with zero mean and 0.2 pixels standard deviation (STD) is added to the points in the distorted frame buffer coordinates $\mathbf {P_d}$ to account for capturing noise and corner extraction error.

The influence of the noise level in the calibration data. Gaussian noise with zero mean and STD varied from 0 to 1.6 pixels is added to the points in the distorted frame buffer coordinates $\mathbf {P_d}$. For each magnitude of STD, 100 trials are conducted to estimate the camera parameters. The average relative errors of $k_{f1,x},k_{f2,x},k_{f3,x},k_{f1,y},k_{f2,y},k_{f3,y},\mathbf {om}, \mathbf {T}=[t_x,t_y,t_z]^T$ are presented in Fig. 6(a). It is observed that the estimation errors of the rotation and translation parameters $\mathbf {om}, \mathbf {T}$ stay below 0.2% under the noise in the calibration data. The estimation errors of the intrinsic parameters $k_{f1,x},k_{f2,x},k_{f3,x},k_{f1,y},k_{f2,y},k_{f3,y}$ increase with the noise amplitude in the calibration data. The error growth rate increases in turn for the pair $k_{f1,x},k_{f1,y}$, the pair $k_{f2,x},k_{f2,y}$ and the pair $k_{f3,x},k_{f3,y}$, because the coefficients of higher-order terms are more susceptible to noise. However, the average errors of all estimated parameters are less than 1.6 %, even when the STD of the noise reaches 1.6 pixels.

Fig. 6. Estimation errors of camera parameters with the proposed calibration method affected by different factors. (a) noise amplitude in the calibration data; (b) noise amplitude in the input power; (c) number of images for calibration.

The influence of the error in the ETL’s input power. Since the ETL is controlled by its input power, errors in the input power caused by the temperature effect and the sensor’s accuracy will affect the estimation of the camera parameters. The noise amplitude of the input power is increased from 0 to 1.6 mA. For each amplitude, 100 trials are conducted to estimate the camera parameters. The average relative errors of $k_{f1,x},k_{f2,x},k_{f3,x},k_{f1,y},k_{f2,y},k_{f3,y},\mathbf {om}, \mathbf {T}=[t_x,t_y,t_z]^T$ are presented in Fig. 6(b). The errors of all parameters increase with the noise amplitude of the input power. The growth rate increases in turn for the pair $k_{f1,x},k_{f1,y}$, the pair $k_{f2,x},k_{f2,y}$ and the pair $k_{f3,x},k_{f3,y}$. The growth rates of $\mathbf {om}$ and $\mathbf {T}$ are similar and quite low.

The influence of the number of images for calibration. To investigate the performance of the proposed calibration method with respect to the number of images for calibration, 3 to 15 images are randomly selected for calibration from all simulated images. For each number of images, 100 trials are conducted and the average relative errors of $k_{f1,x},k_{f2,x},k_{f3,x},k_{f1,y},k_{f2,y},k_{f3,y},\mathbf {om}, \mathbf {T}=[t_x,t_y,t_z]^T$ are presented in Fig. 6(c). The average relative errors of all parameters remain consistent when the number of images is larger than five, and increase as the number of images drops below five. The errors of $\mathbf {om}$ and $\mathbf {T}$ grow slowly, with a maximum relative error lower than 0.2 %. The growth rate increases, from low to high, for the pair $k_{f1,x},k_{f1,y}$, the pair $k_{f2,x},k_{f2,y}$ and the pair $k_{f3,x},k_{f3,y}$ in turn. The maximum relative error of these parameters is lower than 1.8 % when only three images are used for calibration.

5.2 Experimental verification

Three experiments are conducted in this paper to verify the effectiveness of the proposed calibration method. The first verifies that the optical axis of the ETL remains fixed under the membrane’s deformation, gravity and temperature, as proposed in section 2.2.2. The second verifies the effectiveness and accuracy of the proposed calibration method for the camera system with an ETL group. The last compares the calibration and measurement results of the proposed method with those of the existing calibration method in [4–6] to further verify its effectiveness and superiority.

5.2.1 Verification of the fixed optical axis of ETL

As mentioned in section 2.2.2, the optical axis of the ETL remains fixed under different conditions. Therefore, the variations in the optical axis are measured under different conditions of membrane’s deformation, gravity and temperature to verify this proposition. The measurement setup is shown in Fig. 7. The laser is reflected at the front and back surfaces of the ETL, so that two reflected points are observed on the white board. The ETL can be adjusted to align the laser beam with its optical axis by making the two reflected points coincide.

Fig. 7. Experimental setup to measure the optical axis. 1-Laser beam: 3.4 mW, 670 nm; 2-Optical attenuator: 98% filtered; 3-aperture; 4-ETL based on SCP: EL-10-30-TC-VIS-12D from Optotune [30], focal length ranges from 50 mm to 120 mm; 5-CCD camera: Allied Vision G283-B [52], 1936 x 1458 pixels, pixel size is 4.54 microns.

The effect of membrane’s deformation on the optical axis. The experiments were conducted with the optical axis of the ETL placed both vertically and horizontally. The membrane’s deformation of the ETL was varied by changing $E_{com}$. In this experiment, $E_{com}$ was increased from 0 mA to 240 mA in intervals of 20 mA. The variation in the laser beam position was recorded by the camera. The variation in the optical axis of the ETL affected by the input power is presented in Fig. 8(a). It can be observed that the deviation of the optical axis induced by the varied input power is less than 0.19 % of the clear aperture (10 mm [30]) when the ETL’s optical axis is horizontal. The maximum deviation reduces to 0.05 % for the vertical optical axis.

Fig. 8. Deviation of the optical axis under three factors. (a) the input power of actuator; (b) the angle of the ETL relative to horizontal line; (c) temperature.

The effect of gravity on the optical axis. In this experiment, the ETL was set in the off-state at room temperature. The angle between the experimental setup in Fig. 7 and the horizontal line was gradually increased from 0 to 90 degrees in intervals of 10 degrees. The variation in the optical axis of the ETL affected by gravity is recorded in Fig. 8(b). The deviation of the optical axis due to the gravity effect is less than 0.10 %.

The effect of temperature on the optical axis. The experiments were conducted with the optical axis of the ETL placed both vertically and horizontally. The temperature of the ETL was gradually increased from 26 $^{\circ }C$ to 36 $^{\circ }C$ in intervals of 1 $^{\circ }C$. The temperature effect on the optical axis is quite small compared to the effects of membrane’s deformation and gravity: it is less than 0.04 % no matter whether the ETL is placed vertically or horizontally.

Overall, the optical axis deviates by less than 0.3% under the worst conditions. For static measurement applications with the optical axis of the ETL group placed vertically, the influence of the three effects on the optical axis is almost zero.

5.2.2 Calibration performance evaluation

In this section, the compensation results of the temperature effect are investigated. The calibration performance is also investigated with respect to two factors: the number of images used for calibration and the combination of six images selected.

Calibration setups. The calibration setups are depicted in Fig. 9. The ETL group consists of a front prime lens from Canon [53] and a back ETL from Optotune [30]. A camera from Allied Vision [52] is used as the acquisition device.

Fig. 9. Calibration setup. Checkerboard: produced in Nanosystem Fabrication Facility at HKUST by photolithography with $0.25\mu m$ accuracy; displacement platform: $1\mu m$ accuracy; prime lens: Canon EF-S 18 mm-55 mm, fixed at around 30 mm in the paper; ETL: EL-10-30-TC-VIS-12D from Optotune [30]; camera: Allied Vision G283-B [52].

For the calibration of the temperature effect, the checkerboard with 1.5 mm squares was placed close to the ETL group to achieve a high magnification (around 1x in this experiment), so that the image sharpness is highly sensitive to the temperature variation. The image sharpness assessment function based on the cumulative probability of blur detection (CPBD) [49] was utilized to ensure a clear imaging process. The initial temperature was set at 27 $^{\circ }C$. The uncompensated input power of the actuator at the initial temperature $E_{in}$ ranged from 40 mA to 240 mA in intervals of around 20 mA. For each $E_{in}$, the temperature was raised from 27 $^{\circ }C$ to 37 $^{\circ }C$ in intervals of around 1 $^{\circ }C$. The input power of the actuator was varied with the changes in temperature to keep the captured objects in focus based on the CPBD image sharpness function. The variation of the input power was recorded as the compensation input power induced by temperature $E_T$.

To estimate $A_0$, the ETL group was set in the off-state. The checkerboard with 8 x 5 squares of 10 mm was captured at ten different poses.

The calibration target for the remaining calibration steps is the checkerboard with 15 x 13 squares of 1.5 mm, fixed perpendicularly on a Parker displacement platform. This displacement platform gradually moved the checkerboard closer to the ETL group to create a 3D calibration space. During this process, the magnification of the system ranged from 0.176x to 0.381x. The compensated input power to the ETL after temperature compensation $E_{com}$ was varied accordingly to keep the captured images of the checkerboard in focus; it ranged from around 80 mA to 240 mA. In total nine images were captured within this calibration process, and the interval of the varied input power was around 20 mA.

Compensation of the temperature effect. The temperature effect on the focal length of an ETL group is compensated by varying the input power of the actuator. The detailed procedures to calibrate the temperature effect are elaborated in section 4.1. The compensation input power induced by temperature $E_T$ in this paper is shown in Fig. 10(a) as a function of the temperature $T$ and the uncompensated input power of the actuator $E_{in}$. A third-degree polynomial surface is utilized to fit the compensation input power induced by temperature, which is $E_T=-348.1+36.03 \cdot T - 0.414 \cdot E_{in}-1.266 \cdot T^2+0.02957 \cdot T \cdot E_{in}+1.176 \cdot 10^{-4} \cdot E_{in}^2 + 0.01514 \cdot T^3-5.276 \cdot 10^{-4}\cdot T^2 \cdot E_{in}-4.632 \cdot 10^{-6}\cdot T \cdot E_{in}^2+2.761 \cdot 10^{-8} \cdot E_{in}^3$. The root mean square error (RMSE) of the surface fitting is 0.2106 mA. With this third-degree polynomial surface, the final compensated input power after temperature compensation $E_{com}$ can be calculated by the equation $E_{com}=E_{in}-E_{T}$ in real time.
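For illustration, the sketch below evaluates the fitted surface reported above and applies the real-time rule $E_{com}=E_{in}-E_T$; the coefficients are copied from the text and the chosen test point is arbitrary.

```python
# Evaluation of the reported third-degree compensation surface (coefficients
# copied from the text above); T in deg C, powers in mA.
def E_T(T, E_in):
    return (-348.1 + 36.03 * T - 0.414 * E_in
            - 1.266 * T**2 + 0.02957 * T * E_in + 1.176e-4 * E_in**2
            + 0.01514 * T**3 - 5.276e-4 * T**2 * E_in
            - 4.632e-6 * T * E_in**2 + 2.761e-8 * E_in**3)

def E_com(T, E_in):
    """Real-time compensated input power, E_com = E_in - E_T(T, E_in)."""
    return E_in - E_T(T, E_in)

# near the initial temperature the compensation term is small
print(E_com(T=27.0, E_in=160.0))
```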

Fig. 10. (a) Calibration results of the temperature effect; (b) comparison of the image sharpness with and without the proposed temperature compensation.

To verify the effectiveness of the proposed temperature compensation method, the variation in image sharpness with temperature is compared with and without the proposed temperature compensation. The captured object is a stationary checkerboard. As shown in Fig. 10(b), the image sharpness gradually decreases as the temperature rises if there is no temperature compensation. With the proposed temperature compensation, however, the image sharpness remains almost constant as the temperature changes. High image sharpness means that the captured images are clear.

Calibration performance with different numbers of images for calibration. As indicated in the simulation, the estimation errors of the camera parameters become stable when six or more images are used. Therefore, 6 to 9 images are adopted for calibration in this paper. The calibrated intrinsic and extrinsic parameters respectively obtained from the initial estimation and the bundle adjustment are given in Table 1. It can be observed that the intrinsic and extrinsic parameters calibrated with different numbers of images remain consistent with each other. The convergence of the bundle adjustment is fast, normally within 2 to 3 iterations. The reprojection error reduces from around 0.4 pixels to 0.23 pixels after bundle adjustment, which is comparable to other works [54,55]. The reprojection error can be further reduced if a calibration board with concentric features and iterative refinement are adopted [56]. The calibrated distortion parameters obtained from different numbers of images are given by the orange dashed line in Fig. 11. The small error bar indicates that the calibrated distortion parameters are stable with different numbers of images. The magnitude of the overall distortion increases with the increment of the compensated input power. The compensated input power of the ETL group increases when the working distance becomes smaller. Therefore, it conforms to Seidel’s aberration theory [57] that the distortion is exacerbated at higher magnification. The reason that the distortion coefficient $p_1$ is an order of magnitude larger than $p_2$ is that the ETL group is horizontally placed, which makes the distortion along the vertical direction more serious due to the gravity effect.

Fig. 11. Calibration results of distortion parameters with different numbers of images and different combinations of six images.

Table 1. Calibrated intrinsic parameters, extrinsic parameters and reprojection errors obtained respectively from initial estimation (initial) and bundle adjustment after 5 iterations (5 iters) with different numbers of images.

Calibration performance with different combinations of six images. To evaluate the calibration performance with different combinations of six images, the proposed calibration method is applied to every combination of six images selected from the nine captured images, giving 84 combinations in total. The calibration results of several example combinations, together with the average and standard deviation (STD) over all combinations, are given in Table 2. The STD of every calibrated intrinsic and extrinsic parameter is quite small, indicating that the proposed calibration method is stable. The calibrated distortion parameters are shown by the blue solid line in Fig. 11. The error bars of the distortion parameters over different combinations of six images are larger than those over different numbers of images, but still small and stable.
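The combination study can be scripted directly; the sketch below enumerates the $\binom{9}{6}=84$ image subsets and summarizes the calibrated parameters, with `calibrate` standing in as a hypothetical wrapper around the proposed pipeline (initial estimation plus bundle adjustment).

```python
# Sketch: evaluate calibration stability over all C(9,6) = 84 subsets of six images.
# calibrate() is a hypothetical wrapper around the proposed pipeline; here it returns
# a dummy parameter vector so the script runs end to end.
from itertools import combinations
import numpy as np

def calibrate(image_subset):
    return np.zeros(4)      # placeholder for the calibrated parameter vector

images = range(1, 10)                               # the nine captured images
results = np.array([calibrate(c) for c in combinations(images, 6)])
print("number of combinations:", len(results))      # 84
print("mean over combinations:", results.mean(axis=0))
print("STD over combinations :", results.std(axis=0))   # stability, as in Table 2
```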

Table 2. Calibrated intrinsic parameters, extrinsic parameters and reprojection errors with different combinations of six images. For instance, the sextuple {123456} means that the first, second, third, fourth, fifth and sixth images are adopted for calibration.

5.2.3 Comparison of calibration and measurement results

Comparison of calibration results. The calibration strategy of current autofocusing 3D measurement systems with the ETL [4–6] is monofocal calibration at each lens setting followed by interpolation. The intrinsic parameters calibrated with this strategy are compared with those obtained by the proposed method, as shown in Fig. 12(a). The monofocal calibration is conducted at different compensated input powers with the calibration setup shown in Fig. 9, using Zhang's method [8]. At each compensated input power, 15 images of the checkerboard with 1.5 mm squares are captured for monofocal calibration to ensure calibration accuracy. The angles between the checkerboard poses are set as large as possible while keeping the captured images sharp.
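For reference, the monofocal baseline at a single lens setting can be reproduced with a standard planar calibration, for example via OpenCV's implementation of Zhang's method; the sketch below is generic, and the board geometry and image paths are placeholders rather than the exact settings of this experiment.

```python
# Sketch of one monofocal calibration (Zhang's method [8]) at a fixed compensated
# input power, using OpenCV. Board size, square size and file paths are placeholders.
import glob
import cv2
import numpy as np

board = (11, 8)                 # inner-corner grid of the checkerboard (placeholder)
square = 1.5                    # square size in mm, as in this comparison
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square

obj_pts, img_pts, size = [], [], None
for path in glob.glob("monofocal_190mA/*.png"):      # 15 poses at one lens setting
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    ok, corners = cv2.findChessboardCorners(gray, board)
    if ok:
        obj_pts.append(objp)
        img_pts.append(corners)
        size = gray.shape[::-1]

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
print("reprojection RMS:", rms, "  fx, fy:", K[0, 0], K[1, 1])
```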

Fig. 12. Comparison of calibrated camera parameters by the proposed method and existing calibration method in [4–6] (monofocal calibration [8] followed by interpolation).

It can be observed from Fig. 12(a) that the focal lengths calibrated by monofocal calibration at different compensated input powers fit the focal-length curve calibrated by the proposed method when the compensated input power is less than 190 mA. The uncertainty of the focal length calibrated by monofocal calibration becomes greater when the compensated input power exceeds 190 mA, at which the system magnification is around 0.34x. This large calibration uncertainty is caused by the limited depth of field at high magnification, which is also reported for these monofocal calibration methods [8–15]. In this paper, the largest deviations of the focal lengths calibrated by the monofocal calibration method are around 0.81% for $f_x$ and 0.74% for $f_y$.

Comparison of measurement results. A measurement experiment is conducted to further evaluate the accuracy and effectiveness of the proposed calibration method. The measurement results obtained using the camera parameters from the proposed method are compared with those from the existing calibration method in [4–6] (monofocal calibration [8] followed by interpolation). The details of the measurement are as follows. The measurement setup is the same as the setup shown in Fig. 9, but the displacement of the displacement platform is now treated as the unknown variable to be measured. With the calibrated camera parameters and the feature points on the checkerboard, the displacement can be estimated by solving a plane-based perspective-n-point (PnP) problem [58]. The displacement reported by the encoder of the displacement platform, with 1 $\mu m$ accuracy, is regarded as the ground truth. The measurement was conducted at six compensated input powers ranging from 100 mA to 250 mA in steps of 30 mA. At each compensated input power, the checkerboard was moved by around 2 mm with the displacement platform, and autofocusing is adopted to keep the captured checkerboard sharp. This process is repeated five times at each compensated input power to obtain statistical measurement results. The measurement errors obtained using the proposed method and the existing method in [4–6] (monofocal calibration followed by interpolation) are compared in Fig. 12(b). The measurement error of the proposed method gradually decreases as the magnification increases. The measurement errors of the existing method are comparable to those of the proposed method at low magnification, but increase significantly when $E_{com}$ is larger than 190 mA (high $E_{com}$ indicates high magnification). When the magnification of the measurement system is around 0.38x ($E_{com} \approx 250$ mA), the measurement error with the proposed calibration method is below 20 microns, which is one-fifth of the error with the existing method. In other words, the measurement accuracy with the proposed calibration method is five times higher than that with the existing calibration method. This observation conforms to the calibration results in Fig. 12(a), where the focal length calibrated by the monofocal calibration method has a greater deviation at high magnification.
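To illustrate the displacement measurement itself, the sketch below estimates the checkerboard pose at two platform positions with a plane-based PnP solver and takes the translation difference as the measured displacement; the choice of OpenCV's IPPE flag and the data handling are assumptions for illustration, not the exact implementation of the paper.

```python
# Sketch: measure the platform displacement from two checkerboard images by solving
# a plane-based PnP problem (cf. [58]) at each position and differencing the poses.
# The IPPE solver flag and the detected-corner inputs are illustrative assumptions.
import cv2
import numpy as np

def board_pose(obj_pts, img_pts, K, dist):
    """Pose of the planar checkerboard in the camera frame via plane-based PnP."""
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist,
                                  flags=cv2.SOLVEPNP_IPPE)
    assert ok
    return rvec, tvec

# obj_pts: Nx3 board corners in mm (Z = 0); img_pts_1 / img_pts_2: detected corners
# before and after the platform moves; K, dist: intrinsics and distortion evaluated
# at the current E_com with the calibrated camera model.
# _, t1 = board_pose(obj_pts, img_pts_1, K, dist)
# _, t2 = board_pose(obj_pts, img_pts_2, K, dist)
# displacement = np.linalg.norm(t2 - t1)   # compared against the 1-micron encoder
```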

6. Conclusion

A comprehensive calibration method for the camera system with an ETL based on SCP is proposed in this paper. The camera model of the camera system with the ETL group is established based on detailed analyses of the optical properties of the ETL group based on SCP. The proposed calibration method for all camera parameters in the camera model, including the initial estimation and the bundle adjustment, is presented. A real-time temperature compensation method with a simple setup is also proposed to compensate for the negative effects of temperature variations in the ETL. The proposed method significantly reduces the calibration workload and achieves more accurate calibration at high magnification in practical use. Both simulations and experiments confirm the effectiveness and accuracy of the proposed calibration method. The estimates of all camera parameters, especially the distortion parameters, remain consistent under different conditions, and the bundle adjustment converges within a few iterations. The measurement error with the proposed calibration method is below 20 microns at high magnification, a five-fold improvement in measurement accuracy over the existing method. The potential applications of the proposed calibration method in other adaptive camera systems are worth studying in the future.

Appendix I: Proof of the full column rank of coefficient matrix in Eq. (21) and Eq. (22)

It is assumed that there are $M$ calibration points at each of the $N$ compensated input powers. The coefficient matrix in Eq. (21) then becomes the matrix $\mathbf{G}$ as follows.

$$ \mathbf{G}=\left[ \begin{matrix} X_{C,11}\cdot E_{com,1} & X_{C,11}\cdot E_{com,1}^2 & X_{C,11}\cdot E_{com,1}^3 & u_{11}-u_0\\ X_{C,21}\cdot E_{com,1} & X_{C,21}\cdot E_{com,1}^2 & X_{C,21}\cdot E_{com,1}^3 & u_{21}-u_0\\ \vdots & \vdots & \vdots & \vdots\\ X_{C,M1}\cdot E_{com,1} & X_{C,M1}\cdot E_{com,1}^2 & X_{C,M1}\cdot E_{com,1}^3 & u_{M1}-u_0\\ X_{C,12}\cdot E_{com,2} & X_{C,12}\cdot E_{com,2}^2 & X_{C,12}\cdot E_{com,2}^3 & u_{12}-u_0\\ X_{C,22}\cdot E_{com,2} & X_{C,22}\cdot E_{com,2}^2 & X_{C,22}\cdot E_{com,2}^3 & u_{22}-u_0\\ \vdots & \vdots & \vdots & \vdots\\ X_{C,MN}\cdot E_{com,N} & X_{C,MN}\cdot E_{com,N}^2 & X_{C,MN}\cdot E_{com,N}^3 & u_{MN}-u_0 \end{matrix} \right] $$
Let $\mathbf{G}_k$ be the $k^{th}$ column of the matrix $\mathbf{G}$. Proving that the coefficient matrix $\mathbf{G}$ has full column rank is equivalent to verifying that the columns $\mathbf{G}_k$ are linearly independent, i.e., that $\sum _{k=1}^4{g_k\mathbf{G}_k}=0$ holds if and only if $g_k=0$ for $k=1,2,3,4$. The sufficiency is obvious. To prove the necessity, $\sum _{k=1}^4{g_k\mathbf{G}_k}=0$ is expanded as Eq. (26).
$$g_1\cdot X_{C,ij}\cdot E_{com,j}+g_2\cdot X_{C,ij}\cdot E_{com,j}^2+g_3\cdot X_{C,ij}\cdot E_{com,j}^3+g_4\cdot \left( u_{ij}-u_0 \right) =0$$
Since Eq. (26) must be satisfied for all calibration points, Eq. (26) can be treated as a polynomial in the monomials $X_{C}\cdot E_{com}$, $X_{C}\cdot {E_{com}}^2$, $X_{C}\cdot {E_{com}}^3$, $u$ and 1. It should be noted that at least three different values of $E_{com}$ should be adopted to ensure the linear independence of these monomials. For the polynomial to be identically zero, the coefficients of all these monomials must vanish, so $g_1=0$, $g_2=0$, $g_3=0$, $g_4=0$ and $g_4 \cdot u_0=0$. The necessity is thus proved. Consequently, the columns $\mathbf{G}_k$ of $\mathbf{G}$ are linearly independent of each other, and the coefficient matrix $\mathbf{G}$ has full column rank.
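The rank condition can also be checked numerically; the sketch below builds $\mathbf{G}$ from synthetic calibration points at three distinct $E_{com}$ values (the minimum required by the argument above) and verifies that its column rank is four. All numerical values are arbitrary placeholders, not data from the paper.

```python
# Numerical sanity check of the full-column-rank argument: build G from synthetic
# calibration points at three distinct E_com values and verify rank(G) = 4.
import numpy as np

rng = np.random.default_rng(0)
u0 = 960.0                                   # placeholder principal-point coordinate
E_com_values = [100.0, 160.0, 220.0]         # at least three distinct values needed
rows = []
for E in E_com_values:
    for _ in range(5):                       # M = 5 calibration points per setting
        X_C = rng.uniform(-10.0, 10.0)       # camera-frame X coordinate
        u = rng.uniform(0.0, 1920.0)         # measured pixel coordinate
        rows.append([X_C * E, X_C * E**2, X_C * E**3, u - u0])

G = np.array(rows)
print("column rank of G:", np.linalg.matrix_rank(G))   # expected: 4
```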

Funding

Research Grants Council, University Grants Committee (GRF 16202718).

Disclosures

The authors declare no conflicts of interest.

References

1. C. Zuo, Q. Chen, W. Qu, and A. Asundi, “High-speed transport-of-intensity phase microscopy with an electrically tunable lens,” Opt. Express 21(20), 24060–24075 (2013). [CrossRef]  

2. X. Shen and B. Javidi, “Large depth of focus dynamic micro integral imaging for optical see-through augmented reality display using a focus-tunable lens,” Appl. Opt. 57(7), B184–B189 (2018). [CrossRef]  

3. Y. Zuo, W. Zhang, F. S. Chau, and G. Zhou, “Miniature adjustable-focus endoscope with a solid electrically tunable lens,” Opt. Express 23(16), 20582–20592 (2015). [CrossRef]  

4. X. Hu, G. Wang, Y. Zhang, H. Yang, and S. Zhang, “Large depth-of-field 3d shape measurement using an electrically tunable lens,” Opt. Express 27(21), 29697–29709 (2019). [CrossRef]  

5. X. Hu, G. Wang, Y. Zhang, H. Yang, and S. Zhang, “Autofocusing method for high-resolution three-dimensional profilometry,” Opt. Lett. 45(2), 375–378 (2020). [CrossRef]  

6. M. Zhong, X. Hu, F. Chen, C. Xiao, D. Peng, and S. Zhang, “Autofocusing method for a digital fringe projection system with dual projectors,” Opt. Express 28(9), 12609–12620 (2020). [CrossRef]  

7. A. G. Marrugo, F. Gao, and S. Zhang, “State-of-the-art active optical techniques for three-dimensional surface metrology: a review,” J. Opt. Soc. Am. A 37(9), B60–B77 (2020). [CrossRef]  

8. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Machine Intell. 22(11), 1330–1334 (2000). [CrossRef]  

9. R. Tsai, “A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses,” IEEE J. Robot. Autom. 3(4), 323–344 (1987). [CrossRef]  

10. D. C. Brown, “Close-range camera calibration,” Photogramm. Eng. 37, 855–866 (1971).

11. C. Fraser, “Digital camera self-calibration,” ISPRS J. Photogramm. Remote Sens. 52(4), 149–159 (1997). [CrossRef]  

12. Z. Jia, J. Yang, W. Liu, F. Wang, Y. Liu, L. Wang, C. Fan, and K. Zhao, “Improved camera calibration method based on perpendicularity compensation for binocular stereo vision measurement system,” Opt. Express 23(12), 15205–15223 (2015). [CrossRef]  

13. Z. Liu, Q. Wu, S. Wu, and X. Pan, “Flexible and accurate camera calibration using grid spherical images,” Opt. Express 25(13), 15269–15285 (2017). [CrossRef]  

14. X. Ying and H. Zha, “Geometric interpretations of the relation between the image of the absolute conic and sphere images,” IEEE Trans. Pattern Anal. Machine Intell. 28(12), 2031–2036 (2006). [CrossRef]  

15. Y. Hong, G. Ren, and E. Liu, “Non-iterative method for camera calibration,” Opt. Express 23(18), 23992–24003 (2015). [CrossRef]  

16. Z. Lu and L. Cai, “Camera calibration method with focus-related intrinsic parameters based on the thin-lens model,” Opt. Express 28(14), 20858–20878 (2020). [CrossRef]  

17. M. Blum, M. Büeler, C. Grätzel, and M. Aschwanden, “Compact optical design solutions using focus tunable lenses,” Proc. SPIE 8167, 81670W (2011). [CrossRef]  

18. H.-C. Lin, M.-S. Chen, and Y.-H. Lin, “A review of electrically tunable focusing liquid crystal lenses,” Transactions on Electr. Electron. Mater. 12(6), 234–240 (2011). [CrossRef]  

19. B. Berge and J. Peseux, “Variable focal lens controlled by an external voltage: An application of electrowetting,” The Eur. Phys. J. E 3(2), 159–163 (2000). [CrossRef]  

20. Optotune, “Focus tunable liquid lenses and how to integrate them into machine vision systems,” https://www.optotune.com/Optotune%20focus-tunable%20lenses%20for%20machine%20vision.pdf.

21. H. Ren, D. Fox, P. A. Anderson, B. Wu, and S.-T. Wu, “Tunable-focus liquid lens controlled using a servo motor,” Opt. Express 14(18), 8031–8036 (2006). [CrossRef]  

22. S. Xu, Y. Liu, H. Ren, and S.-T. Wu, “A novel adaptive mechanical-wetting lens for visible and near infrared imaging,” Opt. Express 18(12), 12430–12435 (2010). [CrossRef]  

23. H. Ren and S.-T. Wu, “Variable-focus liquid lens,” Opt. Express 15(10), 5931–5936 (2007). [CrossRef]  

24. G. Zhu, J. Yao, S. Wu, and X. Zhang, “Actuation of adaptive liquid microlens droplet in microfluidic devices: A review,” Electrophoresis 40(8), 1148–1159 (2019). [CrossRef]  

25. S. Xu, H. Ren, Y.-J. Lin, M. J. Moharam, S.-T. Wu, and N. Tabiryan, “Adaptive liquid lens actuated by photo-polymer,” Opt. Express 17(20), 17590–17595 (2009). [CrossRef]  

26. H. Yu, G. Zhou, F. S. Chau, and S. K. Sinha, “Tunable electromagnetically actuated liquid-filled lens,” Sens. Actuators, A 167(2), 602–607 (2011). [CrossRef]  

27. S. H. Oh, K. Rhee, and S. K. Chung, “Electromagnetically driven liquid lens,” Sens. Actuators, A 240, 153–159 (2016). [CrossRef]  

28. P. M. Moran, S. Dharmatilleke, A. H. Khaw, K. W. Tan, M. L. Chan, and I. Rodriguez, “Fluidic lenses with variable focal lengths,” Appl. Phys. Lett. 88(4), 041120 (2006). [CrossRef]  

29. S. Shian, R. M. Diebold, and D. R. Clarke, “Tunable lenses using transparent dielectric elastomer actuators,” Opt. Express 21(7), 8669–8676 (2013). [CrossRef]  

30. Optotune, “El-10-30 datasheet,” https://www.optotune.com/images/products/Optotune%20EL-10-30.pdf.

31. K. Wei, H. Huang, Q. Wang, and Y. Zhao, “Focus-tunable liquid lens with an aspherical membrane for improved central and peripheral resolutions at high diopters,” Opt. Express 24(4), 3929–3939 (2016). [CrossRef]  

32. Q. Yang, P. Kobrin, C. Seabury, S. Narayanaswamy, and W. Christian, “Mechanical modeling of fluid-driven polymer lenses,” Appl. Opt. 47(20), 3658–3668 (2008). [CrossRef]  

33. N.-T. Nguyen, “Micro-optofluidic lenses: a review,” Biomicrofluidics 4(3), 031501 (2010). [CrossRef]  

34. J. E. Greivenkamp, Field guide to geometrical optics (SPIE, 2004).

35. S. T. Choi, B. S. Son, G. W. Seo, S.-Y. Park, and K.-S. Lee, “Opto-mechanical analysis of nonlinear elastomer membrane deformation under hydraulic pressure for variable-focus liquid-filled microlenses,” Opt. Express 22(5), 6133–6146 (2014). [CrossRef]  

36. J. Ortega, “Densities and refractive indices of pure alcohols as a function of temperature,” J. Chem. Eng. Data 27(3), 312–317 (1982). [CrossRef]  

37. S. A. Khodier, “Refractive index of standard oils as a function of wavelength and temperature,” Opt. Laser Technol. 34(2), 125–128 (2002). [CrossRef]  

38. P. Schiebener, J. Straub, J. M. H. L. Sengers, and J. S. Gallagher, “Refractive index of water and steam as function of wavelength, temperature and density,” J. Phys. Chem. Ref. Data 19(3), 677–717 (1990). [CrossRef]  

39. A. Bower, in Applied mechanics of solids, (CRC, 2009), p. 666.

40. N. Sugiura and S. Morita, “Variable-focus liquid-filled optical lens,” Appl. Opt. 32(22), 4181–4186 (1993). [CrossRef]  

41. E. Hecht, in Optics, 4th Ed, (Addison Wesley, 2002), p. 168.

42. E. Hecht, in Optics, 2nd Ed, (Addison Wesley, 1987), chap. 6.1.

43. J. Weng, P. Cohen, and M. Herniou, “Camera calibration with distortion models and accuracy evaluation,” IEEE Trans. Pattern Anal. Machine Intell. 14(10), 965–980 (1992).

44. G.-Q. Wei and S. D. Ma, “Implicit and explicit camera calibration: Theory and experiments,” IEEE Trans. Pattern Anal. Machine Intell. 16(5), 469–480 (1994). [CrossRef]  

45. D. C. Brown, “Decentering distortion of lenses,” Photogramm. Eng. Remote. Sens. (1966).

46. D. Kim, H. Shin, J. Oh, and K. Sohn, “Automatic radial distortion correction in zoom lens video camera,” J. Electron. Imaging 19(4), 043010 (2010). [CrossRef]  

47. L. Alvarez, L. Gómez, and P. Henríquez, “Zoom dependent lens distortion mathematical models,” J. Math. Imaging Vis. 44(3), 480–490 (2012). [CrossRef]  

48. R. Kingslake, Optical System Design (Academic, 1983).

49. N. D. Narvekar and L. J. Karam, “A no-reference image blur metric based on the cumulative probability of blur detection (cpbd),” IEEE Trans. Image Process. 20(9), 2678–2683 (2011). [CrossRef]  

50. B. Triggs, P. F. McLauchlan, R. I. Hartley, and A. W. Fitzgibbon, “Bundle adjustment-a modern synthesis,” in International workshop on vision algorithms, (1999), pp. 298–372.

51. M. I. Lourakis, “A brief description of the levenberg-marquardt algorithm implemented by levmar,” Foundation Res. Technol. 4, 1–6 (2005).

52. Allied Vision, “Manta G-283,” https://www.alliedvision.com/en/products/cameras/detail/Manta/G-283.html.

53. Canon, “Ef-s 18-55mm,” https://store.canon.com.hk/osp/ef-lenses/ef-s-lenses/ef-s-18-55mm-f-3-5-5-6-is-ii.html.

54. S. Zheng, Z. Wang, and R. Huang, “Zoom lens calibration with zoom- and focus-related intrinsic parameters applied to bundle adjustment,” ISPRS J. Photogramm. Remote Sens. 102, 62–72 (2015). [CrossRef]  

55. B. Wu, H. Hu, Q. Zhu, and Y. Zhang, “A flexible method for zoom lens calibration and modeling using a planar checkerboard,” Photogramm. Eng. Remote Sens. 79(6), 555–571 (2013). [CrossRef]  

56. A. Datta, J.-S. Kim, and T. Kanade, “Accurate camera calibration using iterative refinement of control points,” in IEEE 12th International Conference on Computer Vision Workshops, (2009), pp. 1201–1208.

57. J. C. Wyant and K. Creath, “Basic wavefront aberration theory for optical metrology,” Appl. Optics and Optical Eng. 11, 28–39 (1992).

58. T. Collins and A. Bartoli, “Infinitesimal plane-based pose estimation,” Int. J. Comput. Vis. 109(3), 252–286 (2014). [CrossRef]  


