Abstract

Light field cameras capture spatial and angular information simultaneously. A scene point in 3D space appears many times in the raw image, which complicates light field camera calibration. This paper proposes a novel calibration method for standard plenoptic cameras using corner features from raw images. We select the appropriate micro-lens images on the raw images and detect corner features on them. During calibration, we first build the relationship between corner features and points in object space with a few intrinsic parameters, then compute these parameters with a linear solution, and finally refine them via a non-linear optimization. Experiments on Lytro and Lytro Illum cameras demonstrate that the accuracy and efficiency of the proposed method are superior to those of state-of-the-art methods based on features of raw images.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Different from traditional cameras, plenoptic cameras (or light field cameras) record not only accumulated intensities but also the separate intensities of light rays in sampled directions [1,2]. The captured data are equivalent to a collection of sub-aperture views from a regular grid of pinhole cameras with parallel optical axes. This property enables or enhances a range of applications, such as digital refocusing [3], depth-of-field extension [4], depth estimation [5–7], and 3D reconstruction [8,9]. As a prerequisite for these applications, it is crucial to perform calibration accurately and build a precise geometric relationship between a point in 3D space and its corresponding pixels in the light field image.

Various types of light field cameras have been built, given that the representation of a light field can be simplified as the intersections of rays with two parallel planes; different designs require different geometric calibration models. An intuitive approach to constructing a light field camera is to assemble an array of conventional monocular cameras [10]. In this way, high spatial resolution and image quality can be obtained, but the setup is hardware intensive and not easy to use. Ng [11] proposed the first hand-held plenoptic camera, namely the standard plenoptic camera, by placing a micro-lens array (MLA) between the main lens and the sensor. Each sensor pixel records the intensity of rays from a single direction passing through its corresponding micro-lens rather than the sum over all directions. Georgiev and Lumsdaine [12] presented a focused plenoptic camera, in which the MLA is interpreted as an imaging system focused on the focal plane of the main lens. This study focuses on standard plenoptic cameras, commercially released as the Lytro camera [13], which are much cheaper and more widely used by semi-professionals than focused plenoptic cameras.

To calibrate standard light field cameras, several methods have been proposed. Dansereau et al. [14] presented a 15-parameter model to calibrate standard plenoptic cameras. In this work, initial intrinsic and extrinsic parameters are estimated from sub-aperture images extracted from the decoded light field and then refined by minimizing the squared sum of ray projection errors. However, these parameters do not have a specific physical meaning and are redundant in the decoding matrix. Zhang et al. [15] presented a 6-parameter multi-projection-center model by deriving the relationship between the geometric structure and the light field camera coordinates. Zhou et al. [16] proposed a 4-parameter epipolar-space-based light field geometric model by exploring the relationship between a point in space and its corresponding line structure in an epipolar plane image; however, its accuracy depends on the result of slope estimation. These methods may achieve good results through their optimization processes, but they do not handle raw data. Feature extraction from raw images is non-trivial due to the low micro-image resolution. Nevertheless, the computation of accurate sub-aperture images requires a good estimation of the intrinsic camera parameters, and the quality of sub-aperture images extracted from raw images after geometric correction is better than that of sub-aperture images extracted from the light field decoded under an equal-spacing assumption.

Bok et al. [17] used line features of raw images to complete calibration, avoiding the decoding process. In this work, the line features are first extracted, and the initial intrinsic and extrinsic parameters are then computed by transferring points in 3D space to their corresponding lines in the micro-lens images; the parameters are refined by minimizing the distances between the rays of points selected from the extracted lines and their corresponding lines in 3D space. However, this refinement is inherently flawed: the selected points may not lie on the corresponding lines in 3D space after transformation, because the projection of a 3D line into a micro-lens image is not an exact line due to nonlinear terms. Besides, the line detection accuracy is limited by the number of templates used during the search process.

The work most similar to ours is that of Liu et al. [18], who detect corner features on raw images and complete the calibration with a stepwise method. In their work, the intrinsic parameters are divided into parameters related to the pinhole model and parameters unique to the plenoptic camera, which are calculated separately based on plenoptic disc features [19]. The estimated parameters are further refined by minimizing the squared sum of re-projection errors between the detected corner features and the corresponding corners. Over the entire calibration process, the final accuracy is insensitive to the initial input and depends mainly on the optimization; thus, a rough but complete estimation of all intrinsic parameters is sufficient as an initialization.

In the above work, the corner features are detected using two corner prototypes, each composed of four filter kernels. This method may miss corner features that do not match the utilized templates, and the search process is time-consuming. Besides, this feature extraction method fails when the images are shot at largely slanted poses. Since micro-lens images are very small, traditional corner extraction methods, such as the one implemented in MATLAB [20] or the Harris corner detector [21], either do not work or perform poorly. The MATLAB tool requires specifying four points that delimit the checkerboard area, whereas each micro-lens image contains only one corner feature. The Harris detector usually extracts multiple points for one corner or misses some corners, depending on the input parameters. Other methods [22,23] may fail under extremely slanted poses and severe radial distortion.

After careful observation and analysis, we found that micro-lens images always show certain patterns regardless of the pose and distortion of the image. Based on this characteristic, we apply a method similar to that of Bok et al. [24], which first samples a few seed points on the small micro-lens image and then filters the seed points using circular boundaries; the retained points are iteratively updated to the correct locations. Besides, corner points located near the edge of a micro-lens image tend to be unreliable; thus, only corner points with a small displacement from the centers of the micro-lens images are preserved.

By utilizing the extracted corner features and employing a robust calculation strategy, this paper presents a novel calibration method for standard plenoptic cameras based on corner features of raw images. The micro-lens images containing corner features are first identified on the raw images, and corner features are detected on them. We then complete the calibration through the relationship between the extracted corner features and the centers of their micro-lens images. Specifically, the initial extrinsic parameters are estimated using the central sub-aperture images, and the intrinsic parameters are computed with a closed-form solution; the initial results are then refined via a non-linear optimization. Compared with previous work [17] and [18], which also uses features extracted directly from raw images, not only is the running speed largely improved, but the calibration performance is also comparable.

2. Corner feature extraction

A checkerboard of known size is used to build the correspondences between the known environment and the sensor measurements. Figure 1 shows an example of a raw image of a checkerboard captured by a Lytro Illum camera. The micro-lens images can be classified into corner, line, or homogeneous categories based on their constituent pixels. We extract corner features only from the micro-lens images containing corners and utilize them to calibrate the light field camera.

Fig. 1. Examples of raw images captured by a Lytro Illum camera. The right image is a close-up of a corner in the left image.

The central sub-aperture image, which is a collection of the centers of the micro-lens images, is used to identify the micro-lens images containing corner features. The centers of the micro-lens images can be found using the white image provided by the manufacturer or by capturing a white scene; because of vignetting, the brightest spot in each white lenslet image approximates its actual center. The white image can also be used to eliminate the effect of vignetting. The distance of the corner feature from the center within a micro-lens image is nearly proportional to the distance between the corresponding center point and the detected corner feature in the central sub-aperture image [17]. If the angular resolution of the light field is N, then the N micro-lens images whose center points are nearest to the detected corner feature on the central sub-aperture image are deemed to contain point features for that corner of the chessboard.
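As an illustration of this selection step, the following Python sketch picks, for one corner detected on the central sub-aperture image, the N micro-lens images whose centers are closest to it; the function name and the synthetic grid of centers are only for illustration.

```python
import numpy as np

def select_microlens_images(corner_xy, center_xy, n_angular):
    """Return the indices of the n_angular micro-lens images whose centers,
    as seen in the central sub-aperture image, are nearest to one detected
    corner (corner_xy: (2,) pixels, center_xy: (M, 2) pixels)."""
    dist = np.linalg.norm(center_xy - corner_xy, axis=1)
    return np.argsort(dist)[:n_angular]

# Toy example: a 10x10 grid of micro-lens centers spaced 15 pixels apart.
centers = np.stack(np.meshgrid(np.arange(10), np.arange(10)), -1).reshape(-1, 2) * 15.0
corner = np.array([52.0, 47.0])
print(select_microlens_images(corner, centers, n_angular=9))
```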

In general, micro-lens images are very small (10×10 pixels for the Lytro camera and 15×15 pixels for the Lytro Illum camera), but they show patterns similar to the local neighborhood of a corner in a traditional checkerboard image. We adopt a modified version of the method in [24], which utilizes circular boundaries to identify corner features. A few uniformly distributed points are first taken as seed points. As displayed in Fig. 2, a seed point is kept as a candidate only if it satisfies the following three conditions, and the others are discarded: (1) each circular boundary has exactly four sign-changing indices (index a, b, c, d on one boundary and index A, B, C, D on the other); (2) opposite sign changes are separated by half of the boundary, i.e., index a - index c = index b - index d = (all indices)/2; (3) the sign-changing indices on the two boundaries coincide, i.e., index a, b, c, d = index A, B, C, D.

Fig. 2. Corner feature identification using circular boundaries. The blue and yellow dots compose the circular boundaries of the target corner, denoted by the red dot, at different radii. Each circular boundary has four sign-changing indices, which are denoted by orange dots.
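A minimal sketch of this candidate test is given below, assuming the raw image is available as a grayscale NumPy array; the boundary radii, the number of boundary samples, and the tolerance are illustrative choices rather than values taken from the paper.

```python
import numpy as np

def sign_changes_on_circle(img, cx, cy, radius, n_samples=32):
    """Sample the grayscale image on a circle around (cx, cy) and return the
    sample indices at which the sign (above/below the local mean) changes."""
    ang = 2 * np.pi * np.arange(n_samples) / n_samples
    xs = np.clip(np.round(cx + radius * np.cos(ang)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(cy + radius * np.sin(ang)).astype(int), 0, img.shape[0] - 1)
    sign = img[ys, xs] > img[ys, xs].mean()
    return np.flatnonzero(sign != np.roll(sign, 1))   # indices where the sign flips

def is_corner_candidate(img, cx, cy, r_inner=2, r_outer=3, n_samples=32, tol=2):
    """Keep a seed point only if both circular boundaries show exactly four sign
    changes, opposite changes are half a circle apart, and the two boundaries
    agree on where the changes occur (all up to `tol` samples)."""
    a = sign_changes_on_circle(img, cx, cy, r_inner, n_samples)
    b = sign_changes_on_circle(img, cx, cy, r_outer, n_samples)
    if len(a) != 4 or len(b) != 4:
        return False
    half = n_samples // 2
    opposite_ok = abs((a[2] - a[0]) - half) <= tol and abs((a[3] - a[1]) - half) <= tol
    agree_ok = np.all(np.abs(a - b) <= tol)
    return bool(opposite_ok and agree_ok)
```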

Lastly, the retained corners are iteratively refined toward the locations of largest gradient using a patch-based structure tensor calculation (Fig. 3). Each micro-lens image should contain only one corner after refinement; thus, micro-lens images containing more than one corner are discarded.
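The exact update rule is not spelled out above, so the sketch below uses a common patch-based structure-tensor refinement (Förstner-style sub-pixel localization) as a stand-in: at the true corner, the image gradient at every patch pixel is orthogonal to the vector from the corner to that pixel, which yields a small linear system per iteration. Patch size and iteration count are illustrative.

```python
import numpy as np

def refine_corner(img, x0, y0, half=3, n_iter=5):
    """Refine one corner inside a (2*half+1)^2 patch: solve
    (sum g g^T) q = sum (g g^T) p over patch pixels p with gradients g."""
    gy, gx = np.gradient(img.astype(float))
    x, y = float(x0), float(y0)
    for _ in range(n_iter):
        xi, yi = int(round(x)), int(round(y))
        G = np.zeros((2, 2)); b = np.zeros(2)
        for dy in range(-half, half + 1):
            for dx in range(-half, half + 1):
                px, py = xi + dx, yi + dy
                if 0 <= px < img.shape[1] and 0 <= py < img.shape[0]:
                    g = np.array([gx[py, px], gy[py, px]])
                    G += np.outer(g, g)
                    b += np.outer(g, g) @ np.array([px, py])
        if np.linalg.det(G) < 1e-9:      # flat or edge-like patch: give up
            break
        x, y = np.linalg.solve(G, b)     # updated sub-pixel corner location
    return x, y
```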

Fig. 3. Examples of corner features extracted from raw images. Left: extraction results of a checkerboard with 9×6 corners excluding edge corners. Right: extraction results for one corner in the left image; red circles and green dots denote micro-lens images and detected corners, respectively.

3. Geometrical model of light field cameras

Standard plenoptic cameras contain two layers of lenses, namely, the main lens (the same as in a traditional camera) and the MLA, which are modeled as a thin lens and as pinholes, respectively. First of all, we define the camera coordinate system: the origin is located at the optical center of the main lens, the Z-axis points towards the object, and the X- and Y-axes point to the right and downwards.

3.1 Thin lens model

As described in Fig. 4, different rays of a point ${{\textbf P}_{\textbf C}}({{X_C},{Y_C},{Z_C}} )$ through a thin lens converge at a virtual image point ${{\textbf P}_{\textbf V}}({{X_V},{Y_V},{Z_V}} )$. ${{\textbf P}_{\textbf C}}$ and ${{\textbf P}_{\textbf V}}$ satisfy the following equation in the Z direction.

$$\frac{1}{{{Z_C}}} - \frac{1}{{{Z_V}}} = \frac{1}{F}$$
$$\left[ {\begin{array}{c} {{X_V}}\\ {{Y_V}}\\ {{Z_V}} \end{array}} \right] = \frac{F}{{F - {Z_C}}}\left[ {\begin{array}{c} {{X_C}}\\ {{Y_C}}\\ {{Z_C}} \end{array}} \right]$$
where F is the focal length of the main lens.
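A small numerical check of Eqs. (1)-(2) with hypothetical values (F = 50, Z_C = 500, arbitrary units):

```python
def virtual_point(P_C, F):
    """Eq. (2): map a scene point P_C = (X_C, Y_C, Z_C) through the thin main
    lens to its virtual image point P_V = F / (F - Z_C) * P_C."""
    X, Y, Z = P_C
    s = F / (F - Z)
    return (s * X, s * Y, s * Z)

# Consistency with Eq. (1): 1/Z_C - 1/Z_V = 1/F
P_V = virtual_point((10.0, -5.0, 500.0), F=50.0)
print(P_V, 1 / 500.0 - 1 / P_V[2])   # the second value equals 1/F = 0.02
```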

Fig. 4. Projection model of a standard plenoptic camera. The thin lens and pinhole models are applied to the main lens and the micro-lenses, respectively.

3.2. Pinhole model

${{\textbf P}_{\textbf V}}$ projects to several locations through the MLA. Suppose the corner feature ${\textbf p}$ and its corresponding center ${{\textbf p}_{\textbf c}}$ have been detected, with coordinates $({x,y} )$ and $({{x_c},{y_c}} )$ in the normalized coordinate system $({Z = 1} )$, respectively. Let ${D_m}$ and ${D_c}$ denote the Z coordinates of the MLA plane and the CCD plane in the camera coordinate system. The pinhole (the center of a micro-lens image) on the MLA and the corner on the CCD array can then be written as ${\textbf P}_{\textbf{center}}^{\textbf M} = \; {D_m}{[{{x_c},{y_c},1} ]^T}$ and ${\textbf P}_{\textbf{corner}}^{\textbf C} = \; {D_c}{[{x,y,1} ]^T}$, respectively. Because ${\textbf P}_{\textbf{corner}}^{\textbf C}$, ${\textbf P}_{\textbf{center}}^{\textbf M}$, and ${{\textbf P}_{\textbf V}}$ are collinear, ${\textbf P}_{\textbf{corner}}^{\textbf C}$ can be expressed as follows:

$${D_c}\left[ {\begin{array}{c} x\\ y\\ 1 \end{array}} \right] = \left[ {\begin{array}{c} {{X_V}}\\ {{Y_V}}\\ {{Z_V}} \end{array}} \right] + \frac{{{D_c} - {Z_V}}}{{{D_m} - {Z_V}}}\left( {{D_m}\left[ {\begin{array}{c} {{x_c}}\\ {{y_c}}\\ 1 \end{array}} \right] - \left[ {\begin{array}{c} {{X_V}}\\ {{Y_V}}\\ {{Z_V}} \end{array}} \right]} \right)$$
$$\left[ {\begin{array}{c} x\\ y \end{array}} \right] = {\; }\left[ {\begin{array}{c} {{x_c}}\\ {{y_c}} \end{array}} \right] + {\; }\frac{{{D_m} - {D_c}}}{{({{D_m} - {Z_V}} ){D_c}}}\left[ {\begin{array}{c} {{X_V} - {Z_V}{x_c}}\\ {{Y_V} - {Z_V}{y_c}} \end{array}} \right]$$

3.3 Complete projection model

Combining the thin lens model and the pinhole model, we can obtain the following:

$$\left[ {\begin{array}{c} x\\ y \end{array}} \right] = \left[ {\begin{array}{c} {{x_c}}\\ {{y_c}} \end{array}} \right] + \frac{{({{D_m} - {D_c}} )\cdot \frac{F}{{F - {Z_C}}}}}{{\left( {{D_m} - \frac{F}{{F - {Z_C}}}{Z_C}} \right){D_c}}}\left[ {\begin{array}{c} {{X_C} - {Z_C}{x_c}}\\ {{Y_C} - {Z_C}{y_c}} \end{array}} \right] = \left[ {\begin{array}{c} {{x_c}}\\ {{y_c}} \end{array}} \right] + {\; }\frac{1}{{{S_1}{Z_C} + {S_2}}}\left[ {\begin{array}{c} {{X_C} - {Z_C}{x_c}}\\ {{Y_C} - {Z_C}{y_c}} \end{array}} \right]$$
where ${S_1} = \; \frac{{({{D_m} + F} ){D_c}}}{{({{D_c} - {D_m}} )F}},\,{S_2} = \frac{{{D_m}{D_c}}}{{{D_m} - {D_c}}}$.
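A direct transcription of Eq. (5), together with the expressions for S1 and S2 given above, might look as follows (function and variable names are illustrative):

```python
import numpy as np

def project_to_microlens(P_C, xc_yc, S1, S2):
    """Eq. (5): project a scene point P_C = (X_C, Y_C, Z_C) into the micro-lens
    image whose center has normalized coordinates (x_c, y_c)."""
    X, Y, Z = P_C
    xc, yc = xc_yc
    scale = 1.0 / (S1 * Z + S2)
    return np.array([xc + scale * (X - Z * xc),
                     yc + scale * (Y - Z * yc)])

def s1_s2_from_geometry(D_m, D_c, F):
    """S1 and S2 expressed from the physical quantities D_m, D_c and F,
    following the expressions given after Eq. (5)."""
    S1 = (D_m + F) * D_c / ((D_c - D_m) * F)
    S2 = D_m * D_c / (D_m - D_c)
    return S1, S2
```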

3.4 Distortion model

The characteristics of the main lens and the MLA, as well as the misalignment of the different components, may contribute to lens distortion. The unit size of the MLA and its distance to the CCD sensor are approximately 100–150 times smaller than those of the main lens. Thus, the distortion introduced by the MLA is barely detectable compared with that of the main lens. We only consider the radial and tangential distortion of the main lens and adopt the well-known Brown model [25].

$${x_{corrected}} = ({1 + {k_1}{r^2} + {k_2}{r^4}} )x + 2{p_1}xy + {p_2}({{r^2} + 2{x^2}} )$$
$${y_{corrected}} = ({1 + {k_1}{r^2} + {k_2}{r^4}} )y + 2{p_2}xy + {p_1}({{r^2} + 2{y^2}} )$$
where $({{x_{corrected}},{y_{corrected}}} )$ are the coordinates after distortion correction, $({x,y} )$ are the distorted coordinates, ${r^2} = {x^2} + \; {y^2}$.
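In code, Eqs. (6)-(7) amount to the following (variable names are illustrative):

```python
def undistort(x, y, k1, k2, p1, p2):
    """Eqs. (6)-(7): map distorted normalized coordinates (x, y) to the
    corrected coordinates, with radial (k1, k2) and tangential (p1, p2) terms."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    x_corrected = radial * x + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_corrected = radial * y + 2.0 * p2 * x * y + p1 * (r2 + 2.0 * y * y)
    return x_corrected, y_corrected
```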

4. Calibration method

This section describes in detail how to conduct light field camera calibration effectively. We first estimate the initial poses and intrinsic parameters with an analytical solution and then carry out a nonlinear optimization considering lens distortion.

4.1 Linear initialization

A central sub-aperture image can be extracted from the raw image given that the centers of the micro-lens images have been detected. We can obtain a rough estimation of the image poses by processing all the central sub-aperture images with a conventional camera calibration method, for example, that proposed by Zhang [26]. Let ${{\textbf P}_{\textbf W}}({{X_W},{Y_W},{Z_W}} )$ denote a corner in the world coordinate system; it can be transformed into camera coordinates using ${{\textbf P}_{\textbf C}}$=$[{{\textbf R},{\textbf t}} ]{{\textbf P}_{\textbf W}}$, where $\; {\textbf R}$ and $\; {\textbf t}$ denote the rotation matrix and translation vector, respectively. Note that only the pose estimates are retained; the camera intrinsics yielded by this step are discarded.

From the normalized coordinate system to the image coordinate system, we adopt the conventional intrinsic transformation matrix ${\textbf K}$.

$${\textbf K} = \left[ {\begin{array}{ccc} {{f_x}}&0&{{c_x}}\\ 0&{{f_y}}&{{c_y}}\\ 0&0&1 \end{array}} \right]$$

The relationship between an image point and its corresponding point in the camera coordinate system has been established in Eq. (5). Substituting Eq. (8) into Eq. (5) gives

$$\left[ {\begin{array}{c} u\\ v \end{array}} \right] = \left[ {\begin{array}{c} {{u_c}}\\ {{v_c}} \end{array}} \right] + \frac{1}{{{S_1}{Z_C} + {S_2}}}\left[ {\begin{array}{c} {{f_x}{X_C} - {Z_C}({{u_c} - {c_x}} )}\\ {{f_y}{Y_C} - {Z_C}({{v_c} - {c_y}} )} \end{array}} \right]$$

By rearranging terms and letting $\Delta u = u - {u_c}$ and $\Delta v = v - {v_c}$, Eq. (9) can be expressed as follows:

$$\left[ {\begin{array}{cccccc} {{Z_C}\Delta u}&{\Delta u}&{ - {X_C}}&0&{ - {Z_C}}&0\\ {{Z_C}\Delta v}&{\Delta v}&0&{ - {Y_C}}&0&{ - {Z_C}} \end{array}} \right]\left[ {\begin{array}{c} {{S_1}}\\ {{S_2}}\\ {{f_x}}\\ {{f_y}}\\ {{c_x}}\\ {{c_y}} \end{array}} \right] = \left[ {\begin{array}{c} { - {Z_C}{u_c}}\\ { - {Z_C}{v_c}} \end{array}} \right]$$

When $\; N$ images of the checkerboard are taken at $\; N$ poses, Eq. (11) is obtained by stacking the equations of the form of Eq. (10) for all detected corner features.

$${\textbf {Ax}} = {\textbf b}$$
where ${\textbf A}$ is the stacked coefficient matrix with two rows per corner feature. The initial estimates of the parameters ${\textbf x} = {[{{S_1},{S_2},{f_x},{f_y},{c_x},{c_y}} ]^T}$ are obtained by linear least squares.
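The following sketch illustrates this linear step: each corner observation contributes the two rows of Eq. (10), the rows are stacked into Eq. (11), and the system is solved by least squares. The observation tuple layout and function names are assumptions made for illustration.

```python
import numpy as np

def build_rows(u, v, u_c, v_c, P_C):
    """Two rows of Eq. (10) for one corner: (u, v) is the detected corner,
    (u_c, v_c) the corresponding micro-lens center (both in pixels), and
    P_C = (X_C, Y_C, Z_C) the corner in camera coordinates."""
    X, Y, Z = P_C
    du, dv = u - u_c, v - v_c
    A = np.array([[Z * du, du, -X, 0.0, -Z, 0.0],
                  [Z * dv, dv, 0.0, -Y, 0.0, -Z]])
    b = np.array([-Z * u_c, -Z * v_c])
    return A, b

def linear_init(observations):
    """Stack Eq. (10) over all corners of all images and solve Ax = b (Eq. (11))
    by least squares for x = (S1, S2, f_x, f_y, c_x, c_y)."""
    rows, rhs = zip(*(build_rows(*obs) for obs in observations))
    A, b = np.vstack(rows), np.concatenate(rhs)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```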

4.2 Nonlinear optimization considering distortion

A nonlinear optimization is applied to refine the initial estimates. First, two lines formed by the target corner and its adjacent horizontal and vertical corners are found on the checkerboard; the nonlinear solution is then obtained by simultaneously minimizing the distances between the ray corresponding to the corner feature and these two lines in space. Compared with minimizing the distance between the corner and the point obtained by transferring the corner feature into object space, more accurate results can be obtained by constraining the line-to-line distance in two directions.

According to Eq. (5), setting ${Z_C} = 0$ yields the location ${{\textbf P}_0}({{X_0},{Y_0},0} )$ at which the ray passes through the main lens.

$$\left[ {\begin{array}{c} {{X_0}}\\ {{Y_0}} \end{array}} \right] = {\left[ {\begin{array}{c} {{X_C}}\\ {{Y_C}} \end{array}} \right]_{{Z_C} = 0}} = {S_2}\left[ {\begin{array}{c} {x - {x_c}}\\ {y - {y_c}} \end{array}} \right]$$

If the ray passes through ${\textbf P}({{X_L},{Y_L},{Z_L}} )$ and ${\textbf P^{\prime}}({X_L^{\prime},Y_L^{\prime},Z_L^{\prime}} )= ({{X_L} + {X_d},{Y_L} + {Y_d},{Z_L} + {Z_d}} )$, then its normalized direction ${{\textbf P}_1}({{X_1},{Y_1},1} )$ can be obtained. The ray is thus determined by the point ${{\textbf P}_0}$ and the direction ${{\textbf P}_1}$.

$$\left[ {\begin{array}{c} x\\ y \end{array}} \right] = \left[ {\begin{array}{c} {{x_c}}\\ {{y_c}} \end{array}} \right] + {\; }\frac{1}{{{S_1}{Z_C} + {S_2}}}\left[ {\begin{array}{c} {{X_L} - {Z_L}{x_c}}\\ {{Y_L} - {Z_L}{y_c}} \end{array}} \right] = {\; }\left[ {\begin{array}{c} {{x_c}}\\ {{y_c}} \end{array}} \right] + {\; }\frac{1}{{{S_1}Z_L^{\prime} + {S_2}}}\left[ {\begin{array}{c} {X_L^{\prime} - Z_L^{\prime}{x_c}}\\ {Y_L^{\prime} - Z_L^{\prime}{y_c}} \end{array}} \right]$$
$${S_1}{Z_d}{\; }\left[ {\begin{array}{c} {{X_L} - {Z_L}{x_c}}\\ {{Y_L} - {Z_L}{y_c}} \end{array}} \right] = {\; }({S_1}{Z_L} + {S_2})\left[ {\begin{array}{c} {{X_d} - {Z_d}{x_c}}\\ {{Y_d} - {Z_d}{y_c}} \end{array}} \right]$$
$$\left[ {\begin{array}{c} {{X_1}}\\ {{Y_1}} \end{array}} \right] = \left[ {\begin{array}{c} {{X_d}/{Z_d}}\\ {{Y_d}/{Z_d}} \end{array}} \right] = {\; }{S_1}\left[ {\begin{array}{c} {x - {x_c}}\\ {y - {y_c}} \end{array}} \right] + \left[ {\begin{array}{c} {{x_c}}\\ {{y_c}} \end{array}} \right]$$
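Under this parameterization, the ray of a corner feature follows directly from Eqs. (12) and (15); the sketch below assumes the undistorted normalized coordinates of the corner and of its micro-lens center as inputs.

```python
import numpy as np

def ray_from_corner(x, y, x_c, y_c, S1, S2):
    """Eq. (12) gives the point P0 where the ray crosses the main-lens plane
    (Z = 0); Eq. (15) gives the normalized direction P1 = (X_1, Y_1, 1)."""
    P0 = np.array([S2 * (x - x_c), S2 * (y - y_c), 0.0])
    P1 = np.array([S1 * (x - x_c) + x_c, S1 * (y - y_c) + y_c, 1.0])
    return P0, P1
```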

Thus, to obtain the corresponding ray, we first transform the image coordinates of a corner feature ${{\textbf p}_{\textbf w}}$ into normalized camera coordinates by using Eq. (8). Then, we eliminate lens distortion based on Eqs. (6) and (7). Finally, we use the undistorted coordinates to compute the ray defined by ${{\textbf P}_0}$ and ${{\textbf P}_1}$ based on Eqs. (12) and (15). Let ${{\textbf P}_{\textbf W}}$ be the 3D corner corresponding to ${{\textbf p}_{\textbf w}}$; the following cost function is used to accomplish the nonlinear optimization.

$$f({{S_1},{S_2},{f_x},{f_y},{c_x},{c_y},{k_1},{k_2},{p_1},{p_2},{\textbf R},{\textbf t}} )= \sum \vert\vert{L_{{{\textbf P}_0},{{\textbf P}_1}}},{L_{{{\textbf P}_{\textbf W}},{{\textbf P}_{{\textbf W}\_{\textbf adj}}}}}\vert\vert^{ray - ray}$$
where ${||\cdot ||^{ray - ray}}$ is the distance between two rays in space, ${L_{{{\textbf P}_0},{{\textbf P}_1}}}$ is the ray determined by the point ${{\textbf P}_0}$ and the direction ${{\textbf P}_1}$, and ${L_{{{\textbf P}_{\textbf W}},{{\textbf P}_{{\textbf W}\_{\textbf adj}}}}}$ is the ray passing through the corner ${{\textbf P}_{\textbf W}}$ and its adjacent corner ${{\textbf P}_{{\textbf W}\_{\textbf adj}}}$ in camera coordinates.

The nonlinear minimization problem is solved with the Levenberg–Marquardt algorithm, where the “lsqnonlin” function in MATLAB is adopted.
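The sketch below illustrates the core of this refinement with SciPy's Levenberg–Marquardt solver standing in for "lsqnonlin": a point-direction line-to-line distance and a residual built from Eqs. (12) and (15). For brevity it refines only S1 and S2 and assumes the corner observations are already undistorted and expressed in camera coordinates; the full cost of Eq. (16) additionally optimizes f_x, f_y, c_x, c_y, the distortion coefficients, and the poses.

```python
import numpy as np
from scipy.optimize import least_squares

def ray_ray_distance(p0, d0, p1, d1):
    """Distance between two lines in space, each given by a point p and a direction d."""
    n = np.cross(d0, d1)
    if np.linalg.norm(n) < 1e-12:                     # (nearly) parallel lines
        return np.linalg.norm(np.cross(p1 - p0, d0)) / np.linalg.norm(d0)
    return abs(np.dot(p1 - p0, n)) / np.linalg.norm(n)

def residuals(params, corners):
    """One line-to-line distance per observed corner.  Each item of `corners` is
    (x, y, x_c, y_c, P_W, P_W_adj): undistorted normalized feature coordinates
    plus the corner and an adjacent corner in camera coordinates."""
    S1, S2 = params
    res = []
    for x, y, x_c, y_c, P_W, P_W_adj in corners:
        P0 = np.array([S2 * (x - x_c), S2 * (y - y_c), 0.0])                    # Eq. (12)
        d = np.array([S1 * (x - x_c) + x_c, S1 * (y - y_c) + y_c, 1.0])         # Eq. (15)
        res.append(ray_ray_distance(P0, d, np.asarray(P_W),
                                    np.asarray(P_W_adj) - np.asarray(P_W)))
    return np.array(res)

# Hypothetical usage, once `corner_list`, `S1_init`, and `S2_init` are available:
# sol = least_squares(residuals, x0=[S1_init, S2_init], args=(corner_list,), method="lm")
```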

5. Experiments and analysis

To verify the effectiveness of the proposed calibration method, we conduct experiments on both Lytro and Lytro Illum datasets. The performance is verified by comparing with the state-of-the-art methods BJW of Bok et al. [17] and SCC of Liu et al. [18], which use line and corner features on raw images, respectively. For BJW and SCC, the codes provided by the authors are used, and the output of SCC is likewise used as input to extract sub-aperture images for evaluation.

5.1 Experiments using Lytro camera datasets

We conduct calibration on the public Lytro camera datasets [14]. Specifically, a few images from sub-datasets A, B, and E are selected, because the line and corner features are visible only when the pattern is out of focus. The light fields within each dataset cover a range of orientations and depths.

With respect to evaluation criteria, Liu et al. [18] used the output of their optimization method. However, the number of line or corner features extracted by each method is different, so a small output residual error alone does not indicate good estimates; it only shows that the method converges better, and there may be overfitting during optimization. Instead of comparing the residuals of their optimizations, we use the output of each optimization as input to extract sub-aperture images and conduct the evaluation on the sub-aperture images independently, which ensures that the number of features used for comparison is the same and is therefore more objective. Besides, the sub-aperture images with large displacement from the central sub-aperture have edge artifacts, so we only evaluate the inner sub-apertures, as Bok et al. [17] did.

The size of the raw images is 3280×3280 pixels, and the radius of the micro-lens images is 5 pixels. A grid of 9×9 sub-aperture images is extracted based on the calibration results using the same method as BJW. The intrinsic and extrinsic parameters of these sub-apertures can be obtained from the calibration result. Subsequently, we detect the corners from the sub-aperture images independently and compute the RMSE of the projection error in pixels on the image and of the ray re-projection error in millimeters in space.

Table 1 displays the comparison with BJW and SCC on datasets A, B, and E. The results show that SCC failed on dataset A, obtained the best results on dataset B, and the worst results on dataset E, a large fluctuation. The main reason is that its corner detection is not robust enough: corner detection fails when the images undergo large affine transformations, including rotation and viewpoint change, which leads to calibration failure, as in dataset A. Dataset B contains 9 images, and both BJW and our method found correct features on all of them. SCC found correct corner features on 8 images and discarded the unusable image automatically during the calibration process, achieving good accuracy. For dataset E, although SCC found corner features on all 15 images, some corner features were wrongly detected; because SCC did not eliminate these incorrect features, the result is poor.


Table 1. Calibration results of the Lytro camera.

Our method produced smaller errors than BJW on all evaluation metrics. This is because our corner feature extraction is robust to large affine transformations and the proposed calibration method is effective. After a large number of experiments with different initial parameters, we found that the optimized result is insensitive to the initial estimates; thus, the points used for optimization largely determine the final accuracy. The points of our method are detected from the raw image directly and have high accuracy, whereas the points of BJW are selected from the detected lines, which decreases the accuracy. First, the line accuracy depends on the number of candidate line directions and on the radius to the centers of the micro-lens images. Second, a selected point may not lie on the true line, because the projection of a line in space into a micro-lens image is nonlinear. Thus, our point-based method is more accurate and stable.

5.2 Experiments using Lytro Illum camera datasets

We also verify our calibration method on two Lytro Illum camera datasets provided by Bok et al. [17] and Zhang et al. [15]. The size of the raw images captured by the Lytro Illum camera is 7728×5368 pixels, and 13×13 sub-aperture images are extracted. The proposed method can be applied simply by changing the micro-lens image radius to 7 pixels. The intrinsic camera parameters of the Lytro Illum camera used in [17], estimated by the proposed method, are detailed in Table 2.


Table 2. Estimated camera intrinsic parameters according to the proposed method.

Table 3 displays the comparative results of the projection errors, and Table 4 details the ray re-projection errors on the dataset used in [17]. The sub-aperture images with large displacement have artifacts caused by the neighboring micro-lens images. In general, the projection error and ray re-projection error of the sub-aperture image at $({i,j} )$ increase with the displacement $d\left( {d = \sqrt {{i^2} + {j^2}} \; } \right)$. Similar to [17], we discard the sub-aperture images whose displacement is close to the radius of the angular resolution; only the inner 11×11 images are used for evaluation. The results show that our method obtains the highest accuracy on both datasets.


Table 3. Calibration results of Lytro Illum camera.


Table 4. Ray re-projection error of sub-aperture images (Unit: mm)
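As a small illustration of this selection rule for the Lytro Illum's 13×13 views (the grid size and indexing below are illustrative):

```python
import numpy as np

n = 13                                    # 13x13 sub-aperture views for the Lytro Illum
c = n // 2
i, j = np.meshgrid(np.arange(n) - c, np.arange(n) - c, indexing="ij")
d = np.sqrt(i**2 + j**2)                  # displacement of view (i, j) from the centre
keep = (np.abs(i) < c) & (np.abs(j) < c)  # inner 11x11 views used for evaluation
print(int(keep.sum()), "of", n * n, "views kept")
```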

To further verify the geometric accuracy, we conduct camera calibration and extract sub-aperture images on the dataset provided by Monteiro et al. [27]. As shown in Fig. 5, SCC found a large number of wrong corner features on the image and did not eliminate them. As a result, SCC did not converge during the optimization process and failed to complete calibration.

Fig. 5. A failure case of feature extraction for SCC on the dataset provided by Monteiro et al. [27]. Left: extraction results of a checkerboard with 19×19 corners excluding edge corners. Right: close-up of the corner selected by the white box in the left image; the detected corner features are all wrong.

In contrast, both our method and BJW complete calibration successfully. We use CAE [5] to estimate disparity maps on the sub-aperture images obtained from our method and from BJW. CAE is one of the state-of-the-art disparity labeling methods, which computes the correspondence cost based on the constrained entropy of photo-consistency; accurate results can be obtained only if the sub-aperture images are well aligned. Figure 6 displays the sub-aperture images and disparity maps estimated with our method and with BJW.

Fig. 6. The first and second rows are the sub-aperture images and estimated depth maps of BJW and our method, respectively. The red rectangles indicate the same area in the corresponding figures.

The depth map estimated with our method is more consistent with the real scene depth. For example, in the region marked by the red box, the depth estimated with BJW changes abruptly, indicating wrong depth estimation, whereas the depth estimated with our method is continuous, and the depth changes coincide with object edges in the sub-aperture image. The high accuracy of the estimated depth map confirms the high geometric accuracy of the sub-aperture images obtained with the proposed method.

5.3 Run time analysis

Table 5 lists the time taken by BJW, SCC, and our method to extract features from all experimental images in each dataset. All algorithms are implemented in MATLAB on a computer equipped with an Intel Xeon Silver 4116 4.2 GHz CPU and 64 GB RAM, without parallelization. The results show that our method is the most efficient, almost 10 times faster than BJW and 3 times faster than SCC.


Table 5. Run time analysis (Unit: s).

6. Conclusion

We presented a novel geometric calibration method for standard plenoptic cameras. First, the micro-lens images containing corner features are identified on the raw images, and corner features are detected on them using a circular-boundary strategy. Then, the initial extrinsic parameters are computed from the central sub-aperture images with a conventional calibration method, and the initial intrinsic parameters are calculated with a linear solution based on the geometric relationship between the corners and their corresponding micro-lens centers. Finally, all initial estimates are further refined via a non-linear optimization. The accuracy of the final result is not sensitive to the initial estimates but correlates strongly with the optimization; because the proposed method extracts high-accuracy corner features directly from the raw image for optimization, good accuracy is achieved. Besides, the proposed method is much faster than the competing methods. Future work aims to improve the accuracy of corner features and to refine the lens distortion model.

Funding

National Natural Science Foundation of China (41801390, 41771360, 41971426); National Key Research and Development Program of China (2017YFB0504201, 2018YFD1100405).

Acknowledgments

The authors thank Yunsu Bok of the Korea Advanced Institute of Science and Technology (KAIST) and Qingsong Liu of Naval Aeronautical University for making their calibration codes available on the Web.

Disclosures

The authors declare no conflicts of interest.

References

1. R. Ng, “Digital light field photography,” Ph.D. thesis (Stanford University, 2006).

2. A. Lumsdaine and T. Georgiev, “The focused plenoptic camera,” IEEE Int. Conf. Comput. Photogr. (ICCP) (2009).

3. R. Ng, “Fourier slice photography,” ACM Trans. Graph. 24(3), 735–744 (2005).

4. T. E. Bishop and P. Favaro, “The light field camera: Extended depth of field, aliasing, and superresolution,” IEEE Trans. Pattern Anal. Mach. Intell. 34(5), 972–986 (2012).

5. Williem, I. K. Park, and K. M. Lee, “Robust Light Field Depth Estimation Using Occlusion-Noise Aware Data Costs,” IEEE Trans. Pattern Anal. Mach. Intell. 40(10), 2484–2497 (2018).

6. O. Johannsen, K. Honauer, B. Goldluecke, A. Alperovich, F. Battisti, Y. Bok, M. Brizzi, M. Carli, G. Choe, M. Diebold, M. Gutsche, H. G. Jeon, I. S. Kweon, J. Park, J. Park, H. Schilling, H. Sheng, L. Si, M. Strecke, A. Sulc, Y. W. Tai, Q. Wang, T. C. Wang, S. Wanner, Z. Xiong, J. Yu, S. Zhang, and H. Zhu, “A Taxonomy and Evaluation of Dense Light Field Depth Estimation Algorithms,” IEEE Conf. Comput. Vis. Pattern Recognit. Workshops, 1795–1812 (2017).

7. T. C. Wang, A. A. Efros, and R. Ramamoorthi, “Occlusion-aware depth estimation using light-field cameras,” Proc. IEEE Int. Conf. Comput. Vis., 3487–3495 (2015).

8. N. Zeller, F. Quint, and U. Stilla, “Depth estimation and camera calibration of a focused plenoptic camera for visual odometry,” ISPRS J. Photogramm. Remote Sens. 118, 83–100 (2016).

9. O. Johannsen, A. Sulc, and B. Goldluecke, “On linear structure from motion for light field cameras,” Proc. IEEE Int. Conf. Comput. Vis., 720–728 (2015).

10. B. Wilburn, N. Joshi, V. Vaish, E. V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24(3), 765–776 (2005).

11. R. Ng, M. Levoy, G. Duval, M. Horowitz, and P. Hanrahan, “Light Field Photography with a Hand-held Plenoptic Camera,” Computer Science Technical Report CSTR 2(11), 1–11 (2005).

12. T. Georgiev and A. Lumsdaine, “Reducing plenoptic camera artifacts,” Comput. Graph. Forum 29(6), 1955–1968 (2010).

13. Lytro, “The Lytro camera,” https://www.lytro.com.

14. D. G. Dansereau, O. Pizarro, and S. B. Williams, “Decoding, calibration and rectification for lenselet-based plenoptic cameras,” Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 1027–1034 (2013).

15. Q. Zhang, C. Zhang, J. Ling, Q. Wang, and J. Yu, “A Generic Multi-Projection-Center Model and Calibration Method for Light Field Cameras,” IEEE Trans. Pattern Anal. Mach. Intell. 41(11), 2539–2552 (2019).

16. P. Zhou, Z. Yang, W. Cai, Y. Yu, and G. Zhou, “Light field calibration and 3D shape measurement based on epipolar-space,” Opt. Express 27(7), 10171 (2019).

17. Y. Bok, H. G. Jeon, and I. S. Kweon, “Geometric Calibration of Micro-Lens-Based Light Field Cameras Using Line Features,” IEEE Trans. Pattern Anal. Mach. Intell. 39(2), 287–300 (2017).

18. Q. Liu, X. Xie, X. Zhang, Y. Tian, J. Li, Y. Wang, and X. Xu, “Stepwise calibration of plenoptic cameras based on corner features of raw images,” Appl. Opt. 59(14), 4209 (2020).

19. S. O’Brien, J. Trumpf, V. Ila, and R. Mahony, “Calibrating light-field cameras using plenoptic disc features,” Proc. Int. Conf. 3D Vision (3DV), 286–294 (2018).

20. W. Xue-Jun, G. Dong-Yuan, and Y. Xi-Fan, “Application of matlab calibration toolbox for camera’s intrinsic and extrinsic parameters solving,” Proc. Int. Conf. Smart Grid Electr. Autom. (ICSGEA), 106–111 (2019).

21. J. Sánchez, N. Monzón, and A. Salgado, “An analysis and implementation of the harris corner detector,” Image Process. On Line 8, 305–328 (2018).

22. Z. Wang, W. Wu, X. Xu, and D. Xue, “Recognition and location of the internal corners of planar checkerboard calibration pattern image,” Appl. Math. Comput. 185(2), 894–906 (2007).

23. A. Duda and U. Frese, “Accurate detection and localization of checkerboard corners for calibration,” Br. Mach. Vis. Conf. (BMVC) (2018).

24. Y. Bok, H. Ha, and I. S. Kweon, “Automated checkerboard detection and indexing using circular boundaries,” Pattern Recognit. Lett. 71, 66–72 (2016).

25. O. Johannsen, C. Heinze, B. Goldluecke, and C. Perwaß, “On the calibration of focused plenoptic cameras,” Lect. Notes Comput. Sci. 8200, 302–317 (2013).

26. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000).

27. N. B. Monteiro, J. P. Barreto, and J. A. Gaspar, “Standard Plenoptic Cameras Mapping to Camera Arrays and Calibration based on DLT,” IEEE Trans. Circuits Syst. Video Technol. 30(11), 4090 (2020).


Park, J.

O. Johannsen, K. Honauer, B. Goldluecke, A. Alperovich, F. Battisti, Y. Bok, M. Brizzi, M. Carli, G. Choe, M. Diebold, M. Gutsche, H. G. Jeon, I. S. Kweon, J. Park, J. Park, H. Schilling, H. Sheng, L. Si, M. Strecke, A. Sulc, Y. W. Tai, Q. Wang, T. C. Wang, S. Wanner, Z. Xiong, J. Yu, S. Zhang, and H. Zhu, “A Taxonomy and Evaluation of Dense Light Field Depth Estimation Algorithms,” IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. Work. 2017-July, 1795–1812 (2017).

O. Johannsen, K. Honauer, B. Goldluecke, A. Alperovich, F. Battisti, Y. Bok, M. Brizzi, M. Carli, G. Choe, M. Diebold, M. Gutsche, H. G. Jeon, I. S. Kweon, J. Park, J. Park, H. Schilling, H. Sheng, L. Si, M. Strecke, A. Sulc, Y. W. Tai, Q. Wang, T. C. Wang, S. Wanner, Z. Xiong, J. Yu, S. Zhang, and H. Zhu, “A Taxonomy and Evaluation of Dense Light Field Depth Estimation Algorithms,” IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. Work. 2017-July, 1795–1812 (2017).

Park, K. M.

I. K. Williem, K. M. Park, and Lee, “Robust Light Field Depth Estimation Using Occlusion-Noise Aware Data Costs,” IEEE Trans. Pattern Anal. Mach. Intell. 40(10), 2484–2497 (2018).
[Crossref]

Perwaß, C.

O. Johannsen, C. Heinze, B. Goldluecke, and C. Perwaß, “On the calibration of focused plenoptic cameras,” Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics) 8200 LNCS, 302–317 (2013).

Pizarro, O.

D. G. Dansereau, O. Pizarro, and S. B. Williams, “Decoding, calibration and rectification for lenselet-based plenoptic cameras,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit, 1027–1034 (2013).

Quint, F.

N. Zeller, F. Quint, and U. Stilla, “ISPRS Journal of Photogrammetry and Remote Sensing Depth estimation and camera calibration of a focused plenoptic camera for visual odometry,” ISPRS J. Photogramm. Remote Sens. 118, 83–100 (2016).
[Crossref]

Ramamoorthi, R.

T. C. Wang, A. A. Efros, and R. Ramamoorthi, “Occlusion-aware depth estimation using light-field cameras,” Proc. IEEE Int. Conf. Comput. Vis. 2015, 3487–3495 (2015).

Salgado, A.

J. Sánchez, N. Monzón, and A. Salgado, “An analysis and implementation of the harris corner detector,” Image Process. Line 8, 305–328 (2018).
[Crossref]

Sánchez, J.

J. Sánchez, N. Monzón, and A. Salgado, “An analysis and implementation of the harris corner detector,” Image Process. Line 8, 305–328 (2018).
[Crossref]

Schilling, H.

O. Johannsen, K. Honauer, B. Goldluecke, A. Alperovich, F. Battisti, Y. Bok, M. Brizzi, M. Carli, G. Choe, M. Diebold, M. Gutsche, H. G. Jeon, I. S. Kweon, J. Park, J. Park, H. Schilling, H. Sheng, L. Si, M. Strecke, A. Sulc, Y. W. Tai, Q. Wang, T. C. Wang, S. Wanner, Z. Xiong, J. Yu, S. Zhang, and H. Zhu, “A Taxonomy and Evaluation of Dense Light Field Depth Estimation Algorithms,” IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. Work. 2017-July, 1795–1812 (2017).

Sheng, H.

O. Johannsen, K. Honauer, B. Goldluecke, A. Alperovich, F. Battisti, Y. Bok, M. Brizzi, M. Carli, G. Choe, M. Diebold, M. Gutsche, H. G. Jeon, I. S. Kweon, J. Park, J. Park, H. Schilling, H. Sheng, L. Si, M. Strecke, A. Sulc, Y. W. Tai, Q. Wang, T. C. Wang, S. Wanner, Z. Xiong, J. Yu, S. Zhang, and H. Zhu, “A Taxonomy and Evaluation of Dense Light Field Depth Estimation Algorithms,” IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. Work. 2017-July, 1795–1812 (2017).

Si, L.

O. Johannsen, K. Honauer, B. Goldluecke, A. Alperovich, F. Battisti, Y. Bok, M. Brizzi, M. Carli, G. Choe, M. Diebold, M. Gutsche, H. G. Jeon, I. S. Kweon, J. Park, J. Park, H. Schilling, H. Sheng, L. Si, M. Strecke, A. Sulc, Y. W. Tai, Q. Wang, T. C. Wang, S. Wanner, Z. Xiong, J. Yu, S. Zhang, and H. Zhu, “A Taxonomy and Evaluation of Dense Light Field Depth Estimation Algorithms,” IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. Work. 2017-July, 1795–1812 (2017).

Stilla, U.

N. Zeller, F. Quint, and U. Stilla, “ISPRS Journal of Photogrammetry and Remote Sensing Depth estimation and camera calibration of a focused plenoptic camera for visual odometry,” ISPRS J. Photogramm. Remote Sens. 118, 83–100 (2016).
[Crossref]

Strecke, M.

O. Johannsen, K. Honauer, B. Goldluecke, A. Alperovich, F. Battisti, Y. Bok, M. Brizzi, M. Carli, G. Choe, M. Diebold, M. Gutsche, H. G. Jeon, I. S. Kweon, J. Park, J. Park, H. Schilling, H. Sheng, L. Si, M. Strecke, A. Sulc, Y. W. Tai, Q. Wang, T. C. Wang, S. Wanner, Z. Xiong, J. Yu, S. Zhang, and H. Zhu, “A Taxonomy and Evaluation of Dense Light Field Depth Estimation Algorithms,” IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. Work. 2017-July, 1795–1812 (2017).

Sulc, A.

O. Johannsen, K. Honauer, B. Goldluecke, A. Alperovich, F. Battisti, Y. Bok, M. Brizzi, M. Carli, G. Choe, M. Diebold, M. Gutsche, H. G. Jeon, I. S. Kweon, J. Park, J. Park, H. Schilling, H. Sheng, L. Si, M. Strecke, A. Sulc, Y. W. Tai, Q. Wang, T. C. Wang, S. Wanner, Z. Xiong, J. Yu, S. Zhang, and H. Zhu, “A Taxonomy and Evaluation of Dense Light Field Depth Estimation Algorithms,” IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. Work. 2017-July, 1795–1812 (2017).

O. Johannsen, A. Sulc, and B. Goldluecke, “On linear structure from motion for light field cameras,” Proc. IEEE Int. Conf. Comput. Vis. 2015, 720–728 (2015).

Tai, Y. W.

O. Johannsen, K. Honauer, B. Goldluecke, A. Alperovich, F. Battisti, Y. Bok, M. Brizzi, M. Carli, G. Choe, M. Diebold, M. Gutsche, H. G. Jeon, I. S. Kweon, J. Park, J. Park, H. Schilling, H. Sheng, L. Si, M. Strecke, A. Sulc, Y. W. Tai, Q. Wang, T. C. Wang, S. Wanner, Z. Xiong, J. Yu, S. Zhang, and H. Zhu, “A Taxonomy and Evaluation of Dense Light Field Depth Estimation Algorithms,” IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. Work. 2017-July, 1795–1812 (2017).

Talvala, E. V.

B. Wilburn, N. Joshi, V. Vaish, E. V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24(3), 765–776 (2005).
[Crossref]

Tian, Y.

Trumpf, J.

S. O’Brien, J. Trumpf, V. Ila, and R. Mahony, “Calibrating light-field cameras using plenoptic disc features,” Proc. - 2018 Int. Conf. 3D Vision, 3DV 2018286–294 (2018).

Vaish, V.

B. Wilburn, N. Joshi, V. Vaish, E. V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24(3), 765–776 (2005).
[Crossref]

Wang, Q.

Q. Zhang, C. Zhang, J. Ling, Q. Wang, and J. Yu, “A Generic Multi-Projection-Center Model and Calibration Method for Light Field Cameras,” IEEE Trans. Pattern Anal. Mach. Intell. 41(11), 2539–2552 (2019).
[Crossref]

O. Johannsen, K. Honauer, B. Goldluecke, A. Alperovich, F. Battisti, Y. Bok, M. Brizzi, M. Carli, G. Choe, M. Diebold, M. Gutsche, H. G. Jeon, I. S. Kweon, J. Park, J. Park, H. Schilling, H. Sheng, L. Si, M. Strecke, A. Sulc, Y. W. Tai, Q. Wang, T. C. Wang, S. Wanner, Z. Xiong, J. Yu, S. Zhang, and H. Zhu, “A Taxonomy and Evaluation of Dense Light Field Depth Estimation Algorithms,” IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. Work. 2017-July, 1795–1812 (2017).

Wang, T. C.

O. Johannsen, K. Honauer, B. Goldluecke, A. Alperovich, F. Battisti, Y. Bok, M. Brizzi, M. Carli, G. Choe, M. Diebold, M. Gutsche, H. G. Jeon, I. S. Kweon, J. Park, J. Park, H. Schilling, H. Sheng, L. Si, M. Strecke, A. Sulc, Y. W. Tai, Q. Wang, T. C. Wang, S. Wanner, Z. Xiong, J. Yu, S. Zhang, and H. Zhu, “A Taxonomy and Evaluation of Dense Light Field Depth Estimation Algorithms,” IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. Work. 2017-July, 1795–1812 (2017).

T. C. Wang, A. A. Efros, and R. Ramamoorthi, “Occlusion-aware depth estimation using light-field cameras,” Proc. IEEE Int. Conf. Comput. Vis. 2015, 3487–3495 (2015).

Wang, Y.

Wang, Z.

Z. Wang, W. Wu, X. Xu, and D. Xue, “Recognition and location of the internal corners of planar checkerboard calibration pattern image,” Appl. Math. Comput. 185(2), 894–906 (2007).
[Crossref]

Wanner, S.

O. Johannsen, K. Honauer, B. Goldluecke, A. Alperovich, F. Battisti, Y. Bok, M. Brizzi, M. Carli, G. Choe, M. Diebold, M. Gutsche, H. G. Jeon, I. S. Kweon, J. Park, J. Park, H. Schilling, H. Sheng, L. Si, M. Strecke, A. Sulc, Y. W. Tai, Q. Wang, T. C. Wang, S. Wanner, Z. Xiong, J. Yu, S. Zhang, and H. Zhu, “A Taxonomy and Evaluation of Dense Light Field Depth Estimation Algorithms,” IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. Work. 2017-July, 1795–1812 (2017).

Wilburn, B.

B. Wilburn, N. Joshi, V. Vaish, E. V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24(3), 765–776 (2005).
[Crossref]

Williams, S. B.

D. G. Dansereau, O. Pizarro, and S. B. Williams, “Decoding, calibration and rectification for lenselet-based plenoptic cameras,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit, 1027–1034 (2013).

Williem, I. K.

I. K. Williem, K. M. Park, and Lee, “Robust Light Field Depth Estimation Using Occlusion-Noise Aware Data Costs,” IEEE Trans. Pattern Anal. Mach. Intell. 40(10), 2484–2497 (2018).
[Crossref]

Wu, W.

Z. Wang, W. Wu, X. Xu, and D. Xue, “Recognition and location of the internal corners of planar checkerboard calibration pattern image,” Appl. Math. Comput. 185(2), 894–906 (2007).
[Crossref]

Xie, X.

Xi-Fan, Y.

W. Xue-Jun, G. Dong-Yuan, and Y. Xi-Fan, “Application of matlab calibration toolbox for camera’s intrinsic and extrinsic parameters solving,” Proc. - 2019 Int. Conf. Smart Grid Electr. Autom. ICSGEA 2019106–111 (2019).

Xiong, Z.

O. Johannsen, K. Honauer, B. Goldluecke, A. Alperovich, F. Battisti, Y. Bok, M. Brizzi, M. Carli, G. Choe, M. Diebold, M. Gutsche, H. G. Jeon, I. S. Kweon, J. Park, J. Park, H. Schilling, H. Sheng, L. Si, M. Strecke, A. Sulc, Y. W. Tai, Q. Wang, T. C. Wang, S. Wanner, Z. Xiong, J. Yu, S. Zhang, and H. Zhu, “A Taxonomy and Evaluation of Dense Light Field Depth Estimation Algorithms,” IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. Work. 2017-July, 1795–1812 (2017).

Xu, X.

Q. Liu, X. Xie, X. Zhang, Y. Tian, J. Li, Y. Wang, and X. Xu, “Stepwise calibration of plenoptic cameras based on corner features of raw images,” Appl. Opt. 59(14), 4209 (2020).
[Crossref]

Z. Wang, W. Wu, X. Xu, and D. Xue, “Recognition and location of the internal corners of planar checkerboard calibration pattern image,” Appl. Math. Comput. 185(2), 894–906 (2007).
[Crossref]

Xue, D.

Z. Wang, W. Wu, X. Xu, and D. Xue, “Recognition and location of the internal corners of planar checkerboard calibration pattern image,” Appl. Math. Comput. 185(2), 894–906 (2007).
[Crossref]

Xue-Jun, W.

W. Xue-Jun, G. Dong-Yuan, and Y. Xi-Fan, “Application of matlab calibration toolbox for camera’s intrinsic and extrinsic parameters solving,” Proc. - 2019 Int. Conf. Smart Grid Electr. Autom. ICSGEA 2019106–111 (2019).

Yang, Z.

Yu, J.

Q. Zhang, C. Zhang, J. Ling, Q. Wang, and J. Yu, “A Generic Multi-Projection-Center Model and Calibration Method for Light Field Cameras,” IEEE Trans. Pattern Anal. Mach. Intell. 41(11), 2539–2552 (2019).
[Crossref]

O. Johannsen, K. Honauer, B. Goldluecke, A. Alperovich, F. Battisti, Y. Bok, M. Brizzi, M. Carli, G. Choe, M. Diebold, M. Gutsche, H. G. Jeon, I. S. Kweon, J. Park, J. Park, H. Schilling, H. Sheng, L. Si, M. Strecke, A. Sulc, Y. W. Tai, Q. Wang, T. C. Wang, S. Wanner, Z. Xiong, J. Yu, S. Zhang, and H. Zhu, “A Taxonomy and Evaluation of Dense Light Field Depth Estimation Algorithms,” IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. Work. 2017-July, 1795–1812 (2017).

Yu, Y.

Zeller, N.

N. Zeller, F. Quint, and U. Stilla, “ISPRS Journal of Photogrammetry and Remote Sensing Depth estimation and camera calibration of a focused plenoptic camera for visual odometry,” ISPRS J. Photogramm. Remote Sens. 118, 83–100 (2016).
[Crossref]

Zhang, C.

Q. Zhang, C. Zhang, J. Ling, Q. Wang, and J. Yu, “A Generic Multi-Projection-Center Model and Calibration Method for Light Field Cameras,” IEEE Trans. Pattern Anal. Mach. Intell. 41(11), 2539–2552 (2019).
[Crossref]

Zhang, Q.

Q. Zhang, C. Zhang, J. Ling, Q. Wang, and J. Yu, “A Generic Multi-Projection-Center Model and Calibration Method for Light Field Cameras,” IEEE Trans. Pattern Anal. Mach. Intell. 41(11), 2539–2552 (2019).
[Crossref]

Zhang, S.

O. Johannsen, K. Honauer, B. Goldluecke, A. Alperovich, F. Battisti, Y. Bok, M. Brizzi, M. Carli, G. Choe, M. Diebold, M. Gutsche, H. G. Jeon, I. S. Kweon, J. Park, J. Park, H. Schilling, H. Sheng, L. Si, M. Strecke, A. Sulc, Y. W. Tai, Q. Wang, T. C. Wang, S. Wanner, Z. Xiong, J. Yu, S. Zhang, and H. Zhu, “A Taxonomy and Evaluation of Dense Light Field Depth Estimation Algorithms,” IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. Work. 2017-July, 1795–1812 (2017).

Zhang, X.

Zhang, Z.

Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000).
[Crossref]

Zhou, G.

Zhou, P.

Zhu, H.

O. Johannsen, K. Honauer, B. Goldluecke, A. Alperovich, F. Battisti, Y. Bok, M. Brizzi, M. Carli, G. Choe, M. Diebold, M. Gutsche, H. G. Jeon, I. S. Kweon, J. Park, J. Park, H. Schilling, H. Sheng, L. Si, M. Strecke, A. Sulc, Y. W. Tai, Q. Wang, T. C. Wang, S. Wanner, Z. Xiong, J. Yu, S. Zhang, and H. Zhu, “A Taxonomy and Evaluation of Dense Light Field Depth Estimation Algorithms,” IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. Work. 2017-July, 1795–1812 (2017).

ACM Trans. Graph. (2)

R. Ng, “Fourier slice photography,” ACM Trans. Graph. 24(3), 735–744 (2005).
[Crossref]

B. Wilburn, N. Joshi, V. Vaish, E. V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24(3), 765–776 (2005).
[Crossref]

Appl. Math. Comput. (1)

Z. Wang, W. Wu, X. Xu, and D. Xue, “Recognition and location of the internal corners of planar checkerboard calibration pattern image,” Appl. Math. Comput. 185(2), 894–906 (2007).
[Crossref]

Appl. Opt. (1)

Comput. Graph. Forum (1)

T. Georgiev and A. Lumsdaine, “Reducing plenoptic camera artifacts,” Comput. Graph. Forum 29(6), 1955–1968 (2010).
[Crossref]

Computer Science Technical Report CSTR (1)

R. Ng, M. Levoy, G. Duval, M. Horowitz, and P. Hanrahan, “Light Field Photography with a Hand-held Plenoptic Camera,” Computer Science Technical Report CSTR 2(11), 1–11 (2005).

IEEE Trans. Circuits Syst. Video Technol. (1)

N. B. Monteiro, J. P. Barreto, and J. A. Gaspar, “Standard Plenoptic Cameras Mapping to Camera Arrays and Calibration based on DLT,” IEEE Trans. Circuits Syst. Video Technol. 30(11), 4090 (2020).
[Crossref]

IEEE Trans. Pattern Anal. Mach. Intell. (5)

Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000).
[Crossref]

Y. Bok, H. G. Jeon, and I. S. Kweon, “Geometric Calibration of Micro-Lens-Based Light Field Cameras Using Line Features,” IEEE Trans. Pattern Anal. Mach. Intell. 39(2), 287–300 (2017).
[Crossref]

Q. Zhang, C. Zhang, J. Ling, Q. Wang, and J. Yu, “A Generic Multi-Projection-Center Model and Calibration Method for Light Field Cameras,” IEEE Trans. Pattern Anal. Mach. Intell. 41(11), 2539–2552 (2019).
[Crossref]

T. E. Bishop and P. Favaro, “The light field camera: Extended depth of field, aliasing, and superresolution,” IEEE Trans. Pattern Anal. Mach. Intell. 34(5), 972–986 (2012).
[Crossref]

I. K. Williem, K. M. Park, and Lee, “Robust Light Field Depth Estimation Using Occlusion-Noise Aware Data Costs,” IEEE Trans. Pattern Anal. Mach. Intell. 40(10), 2484–2497 (2018).
[Crossref]

Image Process. Line (1)

J. Sánchez, N. Monzón, and A. Salgado, “An analysis and implementation of the harris corner detector,” Image Process. Line 8, 305–328 (2018).
[Crossref]

ISPRS J. Photogramm. Remote Sens. (1)

N. Zeller, F. Quint, and U. Stilla, “ISPRS Journal of Photogrammetry and Remote Sensing Depth estimation and camera calibration of a focused plenoptic camera for visual odometry,” ISPRS J. Photogramm. Remote Sens. 118, 83–100 (2016).
[Crossref]

Opt. Express (1)

Pattern Recognit. Lett. (1)

Y. Bok, H. Ha, and I. S. Kweon, “Automated checkerboard detection and indexing using circular boundaries,” Pattern Recognit. Lett. 71, 66–72 (2016).
[Crossref]

Other (11)

O. Johannsen, C. Heinze, B. Goldluecke, and C. Perwaß, “On the calibration of focused plenoptic cameras,” Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics) 8200 LNCS, 302–317 (2013).

A. Duda and U. Frese, “Accurate detection and localization of checkerboard corners for calibration,” Br. Mach. Vis. Conf. 2018, BMVC 2018 (2019).

S. O’Brien, J. Trumpf, V. Ila, and R. Mahony, “Calibrating light-field cameras using plenoptic disc features,” Proc. - 2018 Int. Conf. 3D Vision, 3DV 2018286–294 (2018).

W. Xue-Jun, G. Dong-Yuan, and Y. Xi-Fan, “Application of matlab calibration toolbox for camera’s intrinsic and extrinsic parameters solving,” Proc. - 2019 Int. Conf. Smart Grid Electr. Autom. ICSGEA 2019106–111 (2019).

Lytro. “The Lytro camera,” https://www.lytro.com .

D. G. Dansereau, O. Pizarro, and S. B. Williams, “Decoding, calibration and rectification for lenselet-based plenoptic cameras,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit, 1027–1034 (2013).

O. Johannsen, A. Sulc, and B. Goldluecke, “On linear structure from motion for light field cameras,” Proc. IEEE Int. Conf. Comput. Vis. 2015, 720–728 (2015).

R. Ng, “Digital light field photography,” PhD. Thesis (Stanford University, 2006).

A. Lumsdaine and T. Georgiev, “The focused plenoptic camera,” 2009 IEEE Int. Conf. Comput. Photogr. ICCP 09 (May 2014), (2009).

O. Johannsen, K. Honauer, B. Goldluecke, A. Alperovich, F. Battisti, Y. Bok, M. Brizzi, M. Carli, G. Choe, M. Diebold, M. Gutsche, H. G. Jeon, I. S. Kweon, J. Park, J. Park, H. Schilling, H. Sheng, L. Si, M. Strecke, A. Sulc, Y. W. Tai, Q. Wang, T. C. Wang, S. Wanner, Z. Xiong, J. Yu, S. Zhang, and H. Zhu, “A Taxonomy and Evaluation of Dense Light Field Depth Estimation Algorithms,” IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. Work. 2017-July, 1795–1812 (2017).

T. C. Wang, A. A. Efros, and R. Ramamoorthi, “Occlusion-aware depth estimation using light-field cameras,” Proc. IEEE Int. Conf. Comput. Vis. 2015, 3487–3495 (2015).



Figures (6)

Fig. 1. Examples of raw images captured by a Lytro Illum camera. The right image is a close-up of a corner in the left image.
Fig. 2. Corner feature identification using circular boundaries. The blue and yellow dots form the circular boundaries, at different radii, of the target corner denoted by the red dot. Each circular boundary has four sign-changing indices, denoted by orange dots.
Fig. 3. Examples of corner features extracted from raw images. Left: extraction results for a checkerboard with 9×6 corners, excluding edge corners. Right: extraction results for one corner in the left image; red circles and green dots denote micro-lenses and detected corners, respectively.
Fig. 4. Projection model of a standard plenoptic camera. The thin-lens and pinhole models are applied to the main lens and the micro-lenses, respectively.
Fig. 5. A failure case of feature extraction for SCC on the dataset provided by Barroso et al. [27]. Left: extraction results for a checkerboard with 19×19 corners, excluding edge corners. Right: close-up of the corner selected by the white box in the left image; all of its detections are incorrect.
Fig. 6. The first and second rows show the sub-aperture images and estimated depth maps obtained with BJW and with our method, respectively. The red rectangles indicate the same area in the corresponding figures.

Tables (5)

Table 1. Calibration results of the Lytro camera.
Table 2. Estimated camera intrinsic parameters according to the proposed method.
Table 3. Calibration results of the Lytro Illum camera.
Table 4. Ray re-projection error of sub-aperture images (unit: mm).
Table 5. Run-time analysis (unit: s).

Equations (16)

$\frac{1}{Z_c} - \frac{1}{Z_V} = \frac{1}{F}$  (1)

$\begin{bmatrix} X_V \\ Y_V \\ Z_V \end{bmatrix} = \frac{F}{F - Z_c}\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}$  (2)

$D_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} X_V \\ Y_V \\ Z_V \end{bmatrix} + \frac{D_c - Z_V}{D_m - Z_V}\left( D_m \begin{bmatrix} x_c \\ y_c \\ 1 \end{bmatrix} - \begin{bmatrix} X_V \\ Y_V \\ Z_V \end{bmatrix} \right)$  (3)

$\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x_c \\ y_c \end{bmatrix} + \frac{D_m - D_c}{(D_m - Z_V)\, D_c}\begin{bmatrix} X_V - Z_V x_c \\ Y_V - Z_V y_c \end{bmatrix}$  (4)

$\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x_c \\ y_c \end{bmatrix} + \frac{(D_m - D_c)\,\frac{F}{F - Z_c}}{\left(D_m - \frac{F Z_c}{F - Z_c}\right) D_c}\begin{bmatrix} X_c - Z_c x_c \\ Y_c - Z_c y_c \end{bmatrix} = \begin{bmatrix} x_c \\ y_c \end{bmatrix} + \frac{1}{S_1 Z_c + S_2}\begin{bmatrix} X_c - Z_c x_c \\ Y_c - Z_c y_c \end{bmatrix}$  (5)

$x_{\mathrm{corrected}} = (1 + k_1 r^2 + k_2 r^4)\,x + 2 p_1 x y + p_2 (r^2 + 2 x^2)$  (6)

$y_{\mathrm{corrected}} = (1 + k_1 r^2 + k_2 r^4)\,y + 2 p_2 x y + p_1 (r^2 + 2 y^2)$  (7)

$K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$  (8)

$\begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} u_c \\ v_c \end{bmatrix} + \frac{1}{S_1 Z_c + S_2}\begin{bmatrix} f_x X_c - Z_c (u_c - c_x) \\ f_y Y_c - Z_c (v_c - c_y) \end{bmatrix}$  (9)

$\begin{bmatrix} Z_c \Delta u & \Delta u & -X_c & 0 & -Z_c & 0 \\ Z_c \Delta v & \Delta v & 0 & -Y_c & 0 & -Z_c \end{bmatrix}\begin{bmatrix} S_1 \\ S_2 \\ f_x \\ f_y \\ c_x \\ c_y \end{bmatrix} = \begin{bmatrix} -Z_c u_c \\ -Z_c v_c \end{bmatrix}$  (10)

$\mathbf{A}\mathbf{x} = \mathbf{b}$  (11)

$\begin{bmatrix} X_0 \\ Y_0 \end{bmatrix} = \begin{bmatrix} X_c \\ Y_c \end{bmatrix}_{Z_c = 0} = S_2 \begin{bmatrix} x - x_c \\ y - y_c \end{bmatrix}$  (12)

$\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x_c \\ y_c \end{bmatrix} + \frac{1}{S_1 Z_c + S_2}\begin{bmatrix} X_c - Z_c x_c \\ Y_c - Z_c y_c \end{bmatrix} = \begin{bmatrix} x_c \\ y_c \end{bmatrix} + \frac{1}{S_1 Z_L + S_2}\begin{bmatrix} X_L - Z_L x_c \\ Y_L - Z_L y_c \end{bmatrix}$  (13)

$S_1 Z_d \begin{bmatrix} X_L - Z_L x_c \\ Y_L - Z_L y_c \end{bmatrix} = (S_1 Z_L + S_2)\begin{bmatrix} X_d - Z_d x_c \\ Y_d - Z_d y_c \end{bmatrix}$  (14)

$\begin{bmatrix} X_1 \\ Y_1 \end{bmatrix} = \begin{bmatrix} X_d / Z_d \\ Y_d / Z_d \end{bmatrix} = S_1 \begin{bmatrix} x - x_c \\ y - y_c \end{bmatrix} + \begin{bmatrix} x_c \\ y_c \end{bmatrix}$  (15)

$f(S_1, S_2, f_x, f_y, c_x, c_y, k_1, k_2, p_1, p_2, \mathbf{R}, \mathbf{t}) = \left\| L_{P_0, P_1},\, L_{P_W, P_{W\_adj}} \right\|_{\mathrm{ray\text{-}ray}}$  (16)
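As a worked illustration of the model above, the following Python sketch (our own minimal example, not the authors' released code; the function names, the use of NumPy, and the assumption that the checkerboard points are already expressed in the camera frame are ours) implements the forward projection of Eq. (9), the distortion model of Eqs. (6)-(7), the linear initialization of Eqs. (10)-(11), and the ray parameterization of Eqs. (12) and (15).

import numpy as np

def project_point(Xc, Yc, Zc, uc, vc, S1, S2, fx, fy, cx, cy):
    # Eq. (9): pixel coordinates of the camera-frame point (Xc, Yc, Zc)
    # observed through the micro-lens whose center projects to pixel (uc, vc).
    scale = 1.0 / (S1 * Zc + S2)
    u = uc + scale * (fx * Xc - Zc * (uc - cx))
    v = vc + scale * (fy * Yc - Zc * (vc - cy))
    return u, v

def distort(x, y, k1, k2, p1, p2):
    # Eqs. (6)-(7): radial and tangential distortion on normalized coordinates.
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = radial * x + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = radial * y + 2.0 * p2 * x * y + p1 * (r2 + 2.0 * y * y)
    return xd, yd

def solve_intrinsics_linear(observations):
    # Eqs. (10)-(11): stack two rows per corner feature and solve the
    # over-determined system A x = b in the least-squares sense for
    # x = (S1, S2, fx, fy, cx, cy). Each observation is
    # (Xc, Yc, Zc, u, v, uc, vc), with the 3D point in the camera frame
    # (i.e., extrinsics are assumed known at this step).
    rows, rhs = [], []
    for Xc, Yc, Zc, u, v, uc, vc in observations:
        du, dv = u - uc, v - vc
        rows.append([Zc * du, du, -Xc, 0.0, -Zc, 0.0]); rhs.append(-Zc * uc)
        rows.append([Zc * dv, dv, 0.0, -Yc, 0.0, -Zc]); rhs.append(-Zc * vc)
    A, b = np.asarray(rows), np.asarray(rhs)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

def back_project_ray(x, y, xc, yc, S1, S2):
    # Eq. (12): intersection of the back-projected ray with the Z = 0 plane.
    P0 = np.array([S2 * (x - xc), S2 * (y - yc), 0.0])
    # Eq. (15): ray direction scaled to a unit Z-component.
    d = np.array([S1 * (x - xc) + xc, S1 * (y - yc) + yc, 1.0])
    return P0, d

With this parameterization, a point on the back-projected ray at depth Z is simply P0 + Z * d, which is a convenient form when evaluating a ray-to-ray distance such as the one in Eq. (16).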
