Abstract

Light-field imaging can simultaneously record the spatio-angular information of light rays and thus enables depth estimation via depth cues that reflect a coupling of the angular information and the scene depth. However, the unavoidable imaging distortion in a light-field imaging system affects the spatio-angular coordinate computation, leading to incorrectly estimated depth maps. Based on the previously established unfocused plenoptic metric model, this paper reports a study on the effect of the plenoptic imaging distortion on light-field depth estimation. A method of light-field depth estimation that accounts for the plenoptic imaging distortion is proposed. In addition, an accuracy analysis of the light-field depth estimation was performed using standard components. Experimental results demonstrate that efficiently compensating the plenoptic imaging distortion yields a six-fold improvement in measuring accuracy and greater consistency across the measuring depth range. Consequently, the proposed method proves suitable for high-quality light-field depth estimation and three-dimensional measurement, enabling unfocused plenoptic cameras to serve as metrological tools in potential application scenarios such as industry, biomedicine, entertainment, and many others.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

A camera with a microlens array built in front of the image sensor can flexibly record the spatio-angular information of light rays, i.e., four-dimensional light fields (LFs) [1,2], making it a useful plenoptic imaging device for application and research [3,4]. Having developed from early concepts [5,6] into portable devices, including unfocused plenoptic cameras (plenoptic 1.0) [7,8] and focused plenoptic cameras (plenoptic 2.0) [9,10], microlens-array-based plenoptic cameras have recently become the most commonly used LF imaging devices.

The angular information in the recorded LFs can be used to perform digital refocusing and viewpoint shifting. Thus, the peculiar structure of LFs provides depth cues (e.g., blur and disparity) for depth estimation in the image space. The estimated depths can be converted to actual dimensions in the object space, provided that the metric relationship between the object and image spaces is defined through a metric calibration. Two major factors therefore influence the measuring accuracy of plenoptic cameras: LF depth estimation and plenoptic metric calibration. The former determines how correctly depth maps can be estimated from the recorded LFs, and the latter determines how accurately the estimated depths can be converted to metric dimensions.

For plenoptic metric calibration, several methods have been developed for Lytro unfocused plenoptic cameras [11–15]. Recently, we established an unfocused plenoptic metric model that physically conforms to the imaging properties of unfocused plenoptic cameras and designed a corresponding calibration strategy to achieve unfocused plenoptic three-dimensional (3D) measurement [16]. For LF depth estimation using Lytro unfocused plenoptic cameras, disparity and blur are two commonly used depth cues. Disparity-based methods extract at least two images from different viewpoints from the LF data and then calculate disparity maps. Blur-based methods refocus the LF data at different depths to obtain focal stacks and then estimate the blurring kernel or the degree of focus. Although methods for LF depth estimation have been proposed and improved continuously [17–24], as far as we know, the effect of the imaging distortion on the LF depth estimation has not been considered and discussed yet.

In this paper, we propose a novel approach for LF depth estimation based on the previously established unfocused plenoptic metric model. Unlike the previous work, this study addresses two issues. The first and crucial issue is how the plenoptic imaging distortion impacts the LF depth estimation. We found that the angular coordinate error introduced by the plenoptic imaging distortion propagates from the shear amount to the depth-cue response and finally produces a non-negligible error in the estimated depth maps. A strategy was then designed to compute the depth-cue response in which the effect of the plenoptic imaging distortion, including the lateral and depth terms, is suppressed. The second issue is to what extent the plenoptic imaging distortion affects the LF depth estimation. We quantitatively evaluated the accuracy of unfocused plenoptic 3D measurement using standard components. Experimental results demonstrate that the measuring accuracy of unfocused plenoptic cameras can be improved by six times, while the measurement becomes consistent across the depth range, after efficient compensation for the imaging distortion.

2. Method

For an unfocused plenoptic camera, a microlens array is placed in the image plane of the main lens to record four-dimensional LFs on a two-dimensional image sensor by using a multiplexing technique. The recorded LF can be parameterized, by using two parallel planes (e.g., the planes of the microlens array and the main lens), as ${L_{\textrm{I}}}({{\mathbf s},{\mathbf u}} )$, where ${L_{\textrm{I}}}$ denotes the radiant intensity, and ${\mathbf s} = {({s,t} )^\textrm{T}}$ and ${\mathbf u} = {({u,v} )^\textrm{T}}$ denote the spatial and angular coordinates of the intersection points of a light ray with the two parallel planes, in units of pixels, respectively. In addition, the recorded LF can be digitally resampled at diverse image planes associated with different metric depths, which can be represented by the previously established unfocused plenoptic metric model [16]:

$${\mathbf{s}_{\alpha}} = \mathbf{s} + \mathbf{u}\left( 1 - \frac{1}{\alpha} \right), \tag{1}$$
$${Z_{f}} = \frac{m_{1}\alpha + m_{2}}{\alpha + m_{3}}, \tag{2}$$
where ${\mathbf m} = \{{{m_{1}},{m_{2}},{m_{3}}} \}$ are depth mapping parameters and ${{\mathbf s}_{\alpha} }$ is a shear associated with the spatial and angular coordinates in terms of a scale factor $\alpha$. In this situation, $\alpha$ defines a shear value corresponding to depth variation, namely, a nonmetric depth in the image space, and can be mapped to a metric depth ${Z_{f}}$ in the object space.
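
As a concrete illustration of Eqs. (1) and (2), the following minimal Python sketch (our illustration, not part of the original method) shears spatial coordinates and maps a nonmetric depth to a metric one. The function names and numerical values are assumptions for demonstration only, not the calibrated quantities reported later.

```python
import numpy as np

def shear_coordinates(s, u, alpha):
    """Eq. (1): s_alpha = s + u * (1 - 1/alpha)."""
    return s + u * (1.0 - 1.0 / alpha)

def alpha_to_metric_depth(alpha, m):
    """Eq. (2): map the nonmetric depth alpha to a metric depth Z_f using the
    depth mapping parameters m = (m1, m2, m3)."""
    m1, m2, m3 = m
    return (m1 * alpha + m2) / (alpha + m3)

# Illustrative values only (not the calibrated parameters of the paper):
s = np.array([218.0, 313.0])   # spatial coordinates (pixels)
u = np.array([3.0, -2.0])      # angular coordinates (pixels)
alpha = 1.05                   # shear value (nonmetric depth)
m = (250.0, 80.0, 0.1)         # hypothetical depth mapping parameters

s_alpha = shear_coordinates(s, u, alpha)   # sheared spatial coordinates
Z_f = alpha_to_metric_depth(alpha, m)      # metric depth in the object space
```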

A recorded LF contains sufficient angular information for digitally refocusing at different depths and rendering perspective images from different viewpoints. Thus, the peculiar structure of LFs can provide different depth cues, for example, defocus and correspondence, for LF depth estimation. In passive LF depth estimation, the depth cues are obtained via matching features provided by the LF image structure. However, this dependence on the image structure makes LF depth estimation suffer from a lack of robustness and accuracy in complex scenes with occlusion, discontinuous depth, repeating texture, and diverse illumination. Recently, we analyzed the defocus and correspondence cues from a different perspective, that of phase encoding [25]. The phase information can be used to actively construct matching features independent of the image structure for accurate LF depth estimation.

In this paper, phase encoding is employed to construct accurate matching features for LF depth estimation and unfocused plenoptic metric calibration, so that the impact of other factors, including the image structure, can be avoided as far as possible. This allows us to focus on a methodology for analyzing and compensating the effect of the imaging distortion on the LF depth estimation to achieve high-quality unfocused plenoptic 3D measurement. It should be noted that the proposed method can essentially be used to improve the quality of LF depth estimation whether the matching features are obtained passively or actively.

With the aid of the phase encoding, a phase-encoded field, $\phi ({{\mathbf s},{\mathbf u}} )$, which reflects the spatial distribution of the phase information, can be obtained. The phase-encoded field shares the same spatio-angular structure with the recorded LF and therefore can be resampled in desired image planes. The previous study indicated that the defocus cue obtained by the spatial variance is insensitive to the global spatial monotonicity of the phase-encoded field. Thus, only the correspondence cue with the weighted angular variance, which is sensitive to the angular variance of the phase-encoded field, is used for the LF depth estimation.

By shearing and integrating the resampled phase-encoded field across the angular dimensions, a refocused phase map associated with a specific depth $\alpha$ can be obtained:

$$\bar{\phi}_{\alpha}(\mathbf{s}) = \frac{1}{|N_{\mathbf{u}}|}\sum\limits_{\mathbf{u}' \in N_{\mathbf{u}}} \phi(\mathbf{s}_{\alpha},\mathbf{u}'), \tag{3}$$
where ${N_{\mathbf u}}$ is the set of valid angular samples (i.e., the valid angular resolution). The weighted angular variance relative to the refocused phase map is
$$\sigma_{\alpha}(\mathbf{s}) = \sqrt{\frac{1}{|N_{\mathbf{u}}|}\sum\limits_{\mathbf{u}' \in N_{\mathbf{u}}} \left\{ |u' + 1| \cdot \left[ \phi(\mathbf{s}_{\alpha},\mathbf{u}') - \bar{\phi}_{\alpha}(\mathbf{s}) \right] \right\}^{2}}. \tag{4}$$
When the phase-encoded field is resampled at an image plane correlated to an in-focus object point, the resampled phase-encoded field becomes consistent across angular coordinates and has the minimum angular variance. By using the weighted average angular variance in a small patch, a correspondence response can be computed:
$$C_{\alpha}(\mathbf{s}) = \frac{1}{|W_{\mathbf{s}}|}\sum\limits_{\mathbf{s}' \in W_{\mathbf{s}}} \sigma_{\alpha}(\mathbf{s}'), \tag{5}$$
where ${W_{\mathbf s}}$ is a spatial window around the current spatial coordinates. In the phase-encoded field, the correspondence response across the depth range exhibits a single-peak distribution that can be used to search for an unambiguous depth.
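
The computation of Eqs. (3)–(5) can be sketched as follows, assuming the phase-encoded field is stored as a float 4D array `phi[s, t, u, v]` and that the shear is realized by bilinear resampling of each view; the weight $|u' + 1|$ is applied along the $u$ direction, following the scalar form written in Eq. (4). The optional `delta_u` hook is an assumption of this sketch and is used further below when the imaging distortion is taken into account. This is a minimal illustration, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates, uniform_filter

def refocus_and_response(phi, alpha, u_grid, v_grid, window=5, delta_u=None):
    """Eqs. (3)-(5): shear the phase-encoded field phi[s, t, u, v] by alpha,
    average over the angular dimensions (refocused phase map), compute the
    weighted angular standard deviation, and average it in a spatial window.
    delta_u(u, v, alpha), if given, returns an angular coordinate correction
    (du, dv) applied to each view before shearing."""
    S, T, U, V = phi.shape
    s_idx, t_idx = np.meshgrid(np.arange(S), np.arange(T), indexing="ij")
    shear = 1.0 - 1.0 / alpha
    ph = np.empty_like(phi)
    for iu in range(U):
        for iv in range(V):
            uu, vv = u_grid[iu], v_grid[iv]
            du, dv = (0.0, 0.0) if delta_u is None else delta_u(uu, vv, alpha)
            coords = np.stack([s_idx + (uu + du) * shear,
                               t_idx + (vv + dv) * shear])
            ph[:, :, iu, iv] = map_coordinates(phi[:, :, iu, iv], coords,
                                               order=1, mode="nearest")
    phi_bar = ph.mean(axis=(2, 3), keepdims=True)                     # Eq. (3)
    w = np.abs(np.asarray(u_grid, float) + 1.0)[None, None, :, None]  # |u' + 1|
    sigma = np.sqrt(np.mean((w * (ph - phi_bar)) ** 2, axis=(2, 3)))  # Eq. (4)
    return uniform_filter(sigma, size=window, mode="nearest")         # Eq. (5)
```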

To the best of our knowledge, existing LF depth estimation methods, which follow a procedure similar to that described above, do not consider the impact of the imaging distortion. In practice, the imaging distortion is unavoidable owing to manufacturing and assembly errors. In previous work, we derived a mathematical model for the plenoptic imaging distortion [16], based on which Eq. (1) can be rewritten as

$$\left\{ \begin{array}{l} \mathbf{s}_{\alpha}^{u} = \mathbf{s} + (\mathbf{u} + \delta_{\mathbf{u}})\left( 1 - \frac{1}{\alpha} \right)\\ \delta_{\mathbf{u}} = \mathbf{u}\,\dfrac{k_{1}r^{2} + k_{2}r^{4} + k_{3}r^{6}}{q_{1}(\alpha - 1) + q_{2}(\alpha - 1)^{2}} \end{array} \right., \tag{6}$$
where ${\delta _{\mathbf u}}$ denotes the angular coordinate error caused by the plenoptic imaging distortion, simultaneously containing the lateral and depth components with the radial and depth distortion parameters ${\mathbf k} = \{{{k_{1}},{k_{2}},{k_{3}}} \}$ and ${\mathbf q} = \{{{q_{1}},{q_{2}}} \}$, respectively, and $r = ||{\mathbf u} ||$ denotes the angular coordinate distance from the intersection point of a light ray to the origin in the angular coordinate plane. The lateral distortion term, denoted as ${\delta _{r}} = {\mathbf u}({{k_{1}}{r^{2}} + {k_{2}}{r^{4}} + {k_{3}}{r^{6}}} )$, is primarily radially symmetric due to the symmetric lens design and depends only on the angular coordinates. The depth distortion term, denoted as $\delta_{\alpha} = 1/[{q_{1}}(\alpha - 1) + {q_{2}}{(\alpha - 1)^{2}}]$, is a depth-dependent common factor acting on the radial distortion parameters. As long as the angular coordinate error is determined, the undistorted shear ${\mathbf s}_{\alpha} ^{u}$ can be obtained.
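
Under the same assumptions, Eq. (6) translates into a small helper; the parameter values are placeholders, and the depth-dependent denominator is only evaluated for $\alpha \ne 1$.

```python
import numpy as np

def angular_distortion(u, alpha, k, q):
    """Eq. (6): angular coordinate error delta_u, i.e. the radial lateral term
    u * (k1*r^2 + k2*r^4 + k3*r^6) scaled by the depth-dependent factor
    1 / (q1*(alpha - 1) + q2*(alpha - 1)^2).  u is the 2D angular coordinate."""
    k1, k2, k3 = k
    q1, q2 = q
    r2 = float(np.dot(u, u))                        # r^2 = ||u||^2
    radial = k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3  # lateral distortion term
    depth = q1 * (alpha - 1.0) + q2 * (alpha - 1.0) ** 2  # depth term, alpha != 1
    return u * radial / depth

def undistorted_shear(s, u, alpha, k, q):
    """Eq. (6): s_alpha^u = s + (u + delta_u) * (1 - 1/alpha)."""
    return s + (u + angular_distortion(u, alpha, k, q)) * (1.0 - 1.0 / alpha)
```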

The angular coordinate error changes the direction of an incident light ray after it passes through the main lens, so the shear value associated with a distorted light ray differs from that of a distortion-free light ray. Consequently, an incorrect LF depth estimation may arise from the shear ${{\mathbf s}_{\alpha} }$ used in Eqs. (3)–(4) if the plenoptic imaging distortion is not considered. When the plenoptic imaging distortion is considered, the undistorted shear ${\mathbf s}_{\alpha} ^{u}$ should be used to rewrite Eqs. (3) and (4), respectively, as

$$\bar{\phi}_{\alpha}^{u}(\mathbf{s}) = \frac{1}{|N_{\mathbf{u}}|}\sum\limits_{\mathbf{u}' \in N_{\mathbf{u}}} \phi(\mathbf{s}_{\alpha}^{u},\mathbf{u}'), \tag{7}$$
$$\sigma_{\alpha}^{u}(\mathbf{s}) = \sqrt{\frac{1}{|N_{\mathbf{u}}|}\sum\limits_{\mathbf{u}' \in N_{\mathbf{u}}} \left\{ |u' + 1| \cdot \left[ \phi(\mathbf{s}_{\alpha}^{u},\mathbf{u}') - \bar{\phi}_{\alpha}^{u}(\mathbf{s}) \right] \right\}^{2}}, \tag{8}$$
where the superscript $u$ identifies the undistorted refocused phase map $\bar{\phi }_{\alpha} ^{u}$ and the undistorted weighted angular variance $\sigma _{\alpha} ^{u}$. By using Eqs. (7)–(8), the correspondence response across the depth range in the phase-encoded field can be computed to estimate the scene depth correctly.
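
Combining the two sketches above gives a simple per-pixel depth search: the undistorted correspondence response of Eqs. (7)–(8) is the response of Eqs. (3)–(5) evaluated with the corrected angular coordinates of Eq. (6), and the candidate $\alpha$ minimizing it is kept. The candidate range is an illustrative assumption and excludes $\alpha = 1$.

```python
import numpy as np

def estimate_depth_map(phi, alphas, u_grid, v_grid, k, q):
    """Reuses refocus_and_response (Eqs. (3)-(5)) and angular_distortion
    (Eq. (6)) from the sketches above.  Returns the nonmetric depth map,
    i.e. the alpha minimizing the undistorted correspondence response."""
    delta = lambda uu, vv, a: angular_distortion(np.array([uu, vv]), a, k, q)
    responses = np.stack([refocus_and_response(phi, a, u_grid, v_grid,
                                               delta_u=delta)
                          for a in alphas])       # shape (len(alphas), S, T)
    return np.asarray(alphas)[np.argmin(responses, axis=0)]

# Example call (candidate alphas are illustrative and avoid alpha = 1):
# depth_map = estimate_depth_map(phi, np.linspace(1.02, 1.30, 57),
#                                u_grid, v_grid, k, q)
```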

3. Experiments and analysis

A commercially available unfocused plenoptic camera (Lytro Illum) was used for the experimental demonstration and analysis. Raw images captured by the plenoptic camera were decoded with spatial and angular resolutions of 434 × 625 pixels and 15 × 15 pixels, respectively. The working distance was about 300 mm. The plenoptic camera was calibrated with the aid of an auxiliary 3D measurement system. Phase encoding was employed to construct a target with continuous benchmarks for stable and accurate calibration. In comparison, most existing methods use a checkerboard for plenoptic camera calibration; however, feature extraction from a target image with a low spatial resolution may introduce non-negligible uncertainties, which propagate through the plenoptic calibration and measurement and finally decrease the accuracy of the results. The calibrated system parameters were then used for the subsequent LF depth estimation and accuracy analysis.

During calibration, a white plane target was placed at different positions with different orientations relative to the plenoptic camera. At each target calibration position, images were captured by the plenoptic camera and the 3D measurement system. These image data were used in the calibration procedure, with an optimization algorithm, to obtain the optimal system parameters. The unfocused plenoptic metric calibration was performed twice, without and with consideration of the plenoptic imaging distortion, respectively. Some calibrated system parameters corresponding to a principal light ray that passes through the center of the main lens and arrives at the position with spatial coordinates ${\mathbf s} = {({218,313} )^\textrm{T}}$, for example, are listed in Table 1. It can be seen that the two sets of depth mapping parameters ${\mathbf m}$ determined without and with considering the plenoptic imaging distortion are quite different. The metric depths among the measured depth pairs $({{Z_{f}},\alpha } )$ used for the depth mapping calibration related to Eq. (2) remained unchanged; thus, the difference between the two sets of depth mapping parameters was caused by the nonmetric depth computation. When the plenoptic imaging distortion is considered, the nonmetric depths are optimized together with the distortion parameters over the calibrated depth range. In contrast, if the plenoptic imaging distortion is not considered, the nonmetric depths are directly computed from the shear amount at each calibrated position. To avoid duplicating our previous work [16], the modeling and calibration of the plenoptic imaging distortion are not discussed further here (the interested reader is referred to [16]).

Table 1. Calibrated system parameters.
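
For reference, the depth mapping of Eq. (2) can be recovered from measured $({Z_{f}},\alpha)$ pairs by noting that it rearranges to the linear relation $m_{1}\alpha + m_{2} - Z_{f}m_{3} = Z_{f}\alpha$. The sketch below is only an illustrative linear least-squares fit; in the actual calibration the depth mapping parameters are optimized jointly with the distortion parameters [16].

```python
import numpy as np

def fit_depth_mapping(alphas, depths):
    """Linear least-squares estimate of m = (m1, m2, m3) in Eq. (2) from
    measured (alpha, Z_f) pairs, using m1*a + m2 - Z*m3 = Z*a."""
    a = np.asarray(alphas, dtype=float)
    z = np.asarray(depths, dtype=float)
    A = np.column_stack([a, np.ones_like(a), -z])
    b = z * a
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return tuple(m)   # (m1, m2, m3)
```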

After the calibration was accomplished, a standard ceramic plate was taken as the test object for LF depth estimation. Figure 1(a) shows the central view of the recorded LF of the experimental scene at one measuring position, which is labeled Scene 1. The weak-texture surface of this scene would lead to a problematic depth map if estimated using passive methods. In our experiments, the active method was used to exclude the influence of factors other than the plenoptic imaging distortion on the LF depth estimation. Fringe-projection techniques were used to obtain a phase-encoded field, whose central view for Scene 1 is shown in Fig. 1(b). The phase information modulated by the scene depth is distributed monotonically along the horizontal direction and is therefore used to construct accurate matching features for the LF depth estimation.

Fig. 1. Experimental Scene 1: central views of (a) a recorded LF and (b) a phase-encoded field, respectively.

Based on the phase-encoded field, the distorted correspondence response across the depth range was computed. Figure 2 shows the distribution curve of the distorted correspondence response at the spatial coordinates ${\mathbf s} = {({218,313} )^\textrm{T}}$. With the calibrated distortion parameters, the undistorted correspondence response was computed, and its distribution curve at the same spatial coordinates is also plotted in Fig. 2. The two curves present a similar single-peak distribution but are separated, leading to different estimated depths. The depth maps estimated from the distorted and undistorted correspondence responses are shown in Figs. 3(a) and 3(b), respectively.

Fig. 2. Distribution curves of distorted and undistorted correspondence responses across depth range.

Fig. 3. LF depth estimation of Scene 1: depth maps (a) without and (b) with considering the plenoptic imaging distortion, respectively.

An accuracy analysis cannot be performed directly by comparing the two depth maps because the ground-truth nonmetric depths are unknown. Instead, by using the unfocused plenoptic metric model, the nonmetric depths in the image space can be converted to metric dimensions in the object space to evaluate the measuring accuracy. In this situation, the LF depth estimation and the unfocused plenoptic metric calibration are the two major factors that influence the measuring accuracy. Thus, each of the two depth maps was mapped to two sets of object-space 3D coordinates, without and with considering the plenoptic imaging distortion, respectively, yielding the following four cases:

Case 1: neither the LF depth estimation nor the unfocused plenoptic metric calibration considers the plenoptic imaging distortion.

Case 2: the LF depth estimation does not consider the plenoptic imaging distortion, but the unfocused plenoptic metric calibration does.

Case 3: the LF depth estimation considers the plenoptic imaging distortion, but the unfocused plenoptic metric calibration does not.

Case 4: both the LF depth estimation and the unfocused plenoptic metric calibration consider the plenoptic imaging distortion.

The spatial distributions of the 3D coordinates reconstructed using the calibrated system parameters are shown in Fig. 4(a). It can be seen that Case 1 is similar to Case 2, both deviating from a plane, whereas Case 3 is similar to Case 4, both tending toward a plane. Plane fitting was performed in each case to quantitatively observe the spatial distribution of the plane-fitting errors, defined as the distances from the reconstructed spatial points to the fitted plane, as shown in Fig. 4(b). The corresponding histograms of the plane-fitting errors are shown in Fig. 4(c). Figure 5 shows a boxplot of the plane-fitting errors, and Table 2 lists the corresponding absolute-maximum values (MAX), mean values (MEAN), and root-mean-square values (RMS) in each case.
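
The plane-fitting errors and their statistics can be reproduced with a few lines; the total least-squares plane fit via SVD is our own choice here, since the paper does not specify the fitting algorithm, and `points` is assumed to be an N × 3 array of reconstructed coordinates.

```python
import numpy as np

def plane_fitting_errors(points):
    """Fit a plane to N x 3 reconstructed points (total least squares via SVD)
    and return the signed point-to-plane distances (plane-fitting errors)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]                        # direction of smallest variance
    return (points - centroid) @ normal

def error_statistics(errors):
    """MAX (absolute maximum), MEAN, and RMS of the plane-fitting errors."""
    return (np.max(np.abs(errors)),
            np.mean(errors),
            np.sqrt(np.mean(errors ** 2)))
```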

Fig. 4. Accuracy analysis of Scene 1: (a) spatial distributions of reconstructed 3D coordinates; (b) spatial distributions of plane-fitting errors; (c) histograms of plane-fitting errors, in four cases, respectively.

Fig. 5. Boxplot of plane-fitting errors of Scene 1.

Table 2. Relevant data of plane-fitting errors of Scene 1 (mm).

By comparing and analyzing the above experimental data, two significant conclusions can be drawn and summarized as follows:

First, the measuring accuracies in Cases 3 and 4, in which the plenoptic imaging distortion was considered in the LF depth estimation, were higher than those in Cases 1 and 2, in which it was not, regardless of whether the distortion was considered in the unfocused plenoptic metric calibration. Therefore, the plenoptic imaging distortion has a significant side effect on the LF depth estimation and should be compensated.

Second, the measuring accuracy in Case 4 was the highest. Furthermore, the histogram of the plane-fitting errors in Case 4 presented a Gaussian distribution, whereas the error distributions in other cases had a certain degree of systematic deviation. Therefore, the consideration of the plenoptic imaging distortion in the LF depth estimation and the unfocused plenoptic metric calibration can efficiently decrease the systematic error, resulting in high-accuracy unfocused plenoptic 3D measurement.

In addition, the measuring accuracy in Case 2 was lower than that in Case 1. This is because a system calibration that considers the plenoptic imaging distortion may introduce systematic errors into the 3D measurement when the LF depth estimation does not consider the distortion.

Further evaluation of the measuring accuracy was performed for the standard ceramic plate at another position farther away from the plenoptic camera, which was labeled as Scene 2. Figures 6(a) and 6(b) show the central views of the recorded LF and phase-encoded field of Scene 2, respectively. According to the above analysis, the following accuracy analysis was performed only for Case 1 and Case 4.

Fig. 6. Experimental Scene 2: central views of (a) a recorded LF and (b) a phase-encoded field, respectively.

Similarly, the depth maps of Scene 2 were estimated based on the phase-encoded field, without and with considering the plenoptic imaging distortion, as shown in Figs. 7(a) and 7(b), respectively. With the calibrated system parameters, the two depth maps were mapped to two sets of metric 3D coordinates, whose spatial distributions are shown in Fig. 8(a). Then, the plane fitting was performed by using the reconstructed 3D coordinates, resulting in the spatial distributions and histograms of the plane-fitting errors, as shown in Figs. 8(b) and 8(c), respectively.

Fig. 7. LF depth estimation of Scene 2: depth maps (a) without and (b) with considering the plenoptic imaging distortion, respectively.

Fig. 8. Accuracy analysis of Scene 2: (a) spatial distributions of reconstructed 3D coordinates; (b) spatial distributions of plane-fitting errors; (c) histograms of plane-fitting errors, in Case 1 and Case 4, respectively.

The results in Fig. 8 were similar to those in Fig. 4 for Case 1 and Case 4. Table 3 lists the plane-fitting errors for Scene 2, together with the corresponding data from Table 2 for Scene 1 for comparison. It can be seen that the measuring accuracy of the same object differs between measuring positions, in particular between measuring depths. The change in RMS between the two scenes is 17.67% in Case 1 and 11.93% in Case 4; in other words, the measuring accuracy in Case 4 remained more consistent than that in Case 1. This is because the plenoptic imaging distortion model used in Case 4 accounts for the depth distortion together with the lateral distortion.

Table 3. Relevant data of plane-fitting errors of Scene 1 and Scene 2 for comparison (mm).

To further explore the consistency of the measuring accuracy, a standard gauge consisting of four ceramic cylinders, labeled Scene 3, was used for the accuracy analysis. The top surfaces of the cylinders are standard planes parallel to each other, and the cylinders have different heights, as shown in Fig. 9(a). Figures 9(b) and 9(c) show the central views of the recorded LF and the phase-encoded field of the standard gauge with its top surfaces approximately perpendicular to the optical axis of the plenoptic camera, respectively.

Fig. 9. Experimental Scene 3: (a) photograph; central views of (b) a recorded LF and (c) a phase-encoded field, respectively.

Figures 10(a) and 10(b) show the estimated depth map and the spatial distribution of the reconstructed 3D coordinates in Case 1, while Figs. 10(c) and 10(d) show those in Case 4, respectively. In each case, plane fitting was performed for each top surface in the reconstructed 3D model to compute the distance and parallelism between every pair of top surfaces. The plane distance was computed as the average of the projections of the line connecting the principal points of the two fitted planes onto the respective plane normals. The plane parallelism was computed as the angle between the corresponding plane normals. The computed plane distances were then compared with the ground-truth data to obtain distance errors.
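
A plausible implementation of this evaluation, taking the plane centroids as the principal points (an assumption on our part), is sketched below.

```python
import numpy as np

def fit_plane(points):
    """Total least-squares plane fit: returns (centroid, unit normal)."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c, full_matrices=False)
    return c, vt[-1]

def distance_and_parallelism(points_a, points_b):
    """Distance between two fitted top surfaces (average projection of the
    centroid connection line onto the two normals) and parallelism (angle
    between the normals, in radians)."""
    ca, na = fit_plane(points_a)
    cb, nb = fit_plane(points_b)
    if na @ nb < 0:                  # orient the normals consistently
        nb = -nb
    d = cb - ca
    distance = 0.5 * (abs(d @ na) + abs(d @ nb))
    parallelism = np.arccos(np.clip(na @ nb, -1.0, 1.0))
    return distance, parallelism
```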

Fig. 10. LF depth estimation and unfocused plenoptic 3D measurement of Scene 3: (a) depth map and (b) spatial distribution of reconstructed 3D coordinates in Case 1; (c) depth map and (d) spatial distribution of reconstructed 3D coordinates in Case 4.

The relevant data of the computed distances and parallelisms are listed in Table 4. It can be seen that, on the whole, the distance errors and parallelisms in Case 4 were smaller than those in Case 1. Specifically, Fig. 11 shows the distribution curves of the distance errors. The distance errors in Case 4 remained consistent and below 0.4 mm over the measured distances, whereas those in Case 1 increased with distance, reaching a maximum error of 2.4 mm. The measuring accuracy in Case 1 was thus clearly related to the measuring depth.

Fig. 11. Distribution curves of distance errors.

Table 4. Relevant data of distances (mm) and parallelisms (rad) of Scene 3.

Finally, a distance error weighted by the distance was defined to evaluate the measuring accuracy of the plenoptic camera:

$$\sigma = \sqrt{\frac{\sum_{i = 1}^{N} d_{i}\, \Delta d_{i}^{2}}{\sum_{i = 1}^{N} d_{i}}}, \tag{9}$$
where $\sigma$ is the distance-weighted measuring accuracy, $d_{i}$ is the ground-truth distance, $\Delta d_{i}$ is the distance error, and $N$ is the number of measurements. By using Eq. (9), the measuring accuracies in Case 1 and Case 4 were estimated to be 1.7537 mm and 0.2941 mm, respectively. That is, the measuring accuracy after compensating the plenoptic imaging distortion was improved by a factor of about six. As a result, the second conclusion drawn above can be extended: the measuring accuracy in Case 4 was not only the highest but also remained consistent across the measuring depth range.
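
For completeness, Eq. (9) is simply a distance-weighted root-mean-square of the distance errors; a direct transcription is given below (the input arrays are placeholders, since the raw distance data are not reproduced here).

```python
import numpy as np

def distance_weighted_accuracy(d, delta_d):
    """Eq. (9): sigma = sqrt( sum(d_i * delta_d_i^2) / sum(d_i) )."""
    d = np.asarray(d, dtype=float)
    delta_d = np.asarray(delta_d, dtype=float)
    return np.sqrt(np.sum(d * delta_d ** 2) / np.sum(d))
```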

Beyond standard components with flat surfaces, the proposed method can also be applied to scenes composed of more complicated objects. In the final experiment, we used two plaster models with freeform surfaces, labeled Scene 4, with a depth range of nearly 150 mm. Figures 12(a) and 12(d) show the estimated depth maps of Scene 4 in Case 1 and Case 4, respectively. Correspondingly, 3D coordinates were reconstructed from the depth maps, as visualized by the 3D models in the front view in Figs. 12(b) and 12(e) and the point clouds in the side view in Figs. 12(c) and 12(f), respectively. Unlike the experiments using standard components, the results here could only be compared qualitatively. The 3D models in the front view look almost the same, whereas the morphologies of the point clouds in the side view are observably different. We employed the head-to-body tilt angle of the model as a criterion for qualitative comparison. As illustrated by the dashed lines in Fig. 12(c) and the solid lines in Fig. 12(f), the tilt angles in Case 1 are larger than those in Case 4, which means that the reconstruction in Case 4 underwent smaller overall deformation and is therefore more accurate, consistent with the conclusions drawn above.

Fig. 12. LF depth estimation and unfocused plenoptic 3D measurement of Scene 4: (a) depth map and reconstructed 3D coordinates visualized by (b) 3D model in front view and (c) point cloud in side view in Case 1; (d) depth map and reconstructed 3D coordinates visualized by (e) 3D model in front view and (f) point cloud in side view in Case 4.

4. Conclusion

To our knowledge, there has been no report on the impact of the imaging distortion on LF depth estimation. A derivation based on the previously established unfocused plenoptic metric model showed that the plenoptic imaging distortion introduces angular coordinate errors, leading to incorrect depth estimation. Based on the unfocused plenoptic metric model, we proposed a method that considers and compensates for the plenoptic imaging distortion in the LF depth estimation to obtain correct depth maps.

Furthermore, the accuracy analysis of the LF depth estimation was experimentally performed through unfocused plenoptic 3D measurement of standard components. After compensating the plenoptic imaging distortion in both the LF depth estimation and the unfocused plenoptic metric calibration, the highest measuring accuracy achieved was 0.2941 mm, a six-fold improvement over the uncompensated measurement, and the accuracy remained consistent across the measuring depth range.

In summary, the plenoptic imaging distortion should be carefully considered and compensated in LF depth estimation for high-quality measurement, in terms of both accuracy and consistency. The proposed approach paves the way for the evaluation and compensation of the plenoptic imaging distortion. In this work, the calibration and measurement were carried out with fixed optical parameters. In future work, we will explore unfocused plenoptic metric calibration and measurement methods suitable for variable optical parameters (e.g., focal length) and investigate how the measuring accuracy changes with these parameters after compensation for the plenoptic imaging distortion.

Funding

National Natural Science Foundation of China (NSFC) (61875137, 11804231); Sino-German Cooperation Group (GZ 1391); Natural Science Foundation of Guangdong Province (2018A030313831).

Acknowledgments

The authors acknowledge Dr. Jiping Guo of the Shenzhen Academy of Metrology and Quality Inspection for providing the high-accuracy data of the standard components used for the accuracy analysis.

Disclosures

The authors declare no conflicts of interest.

References

1. M. Levoy and P. Hanrahan, “Light field rendering,” in Proceedings of ACM SIGGRAPH (ACM, 1996), pp. 31–42.

2. S. J. Gortler, R. Grzeszczuk, R. Szeliski, and M. F. Cohen, “The lumigraph,” in Proceedings of ACM SIGGRAPH (ACM, 1996), pp. 43–54.

3. I. Ihrke, J. Restrepo, and L. Mignard-Debise, “Principles of light field imaging: briefly revisiting 25 years of research,” IEEE Signal Process. Mag. 33(5), 59–69 (2016).

4. G. Wu, B. Masia, A. Jarabo, Y. Zhang, L. Wang, Q. Dai, T. Chai, and Y. Liu, “Light field image processing: an overview,” IEEE J. Sel. Top. Signal Process. 11(7), 926–954 (2017).

5. F. E. Ives, “Parallax stereogram and process of making same,” United States Patent 725,567 (1903).

6. G. Lippmann, “Épreuves réversibles. Photographies intégrales,” CR Acad. Sci. 146(3), 446–451 (1908).

7. E. H. Adelson and J. Y. A. Wang, “Single lens stereo with a plenoptic camera,” IEEE Trans. Pattern Anal. Machine Intell. 14(2), 99–106 (1992).

8. R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Stanford Technical Report CSTR (2005), pp. 1–11.

9. A. Lumsdaine and T. Georgiev, “Full resolution lightfield rendering,” Adobe Technical Report (2008).

10. C. Perwass and L. Wietzke, “Single lens 3D-camera with extended depth-of-field,” in Conference on Human Vision and Electronic Imaging XVII (SPIE, 2012), 829108.

11. D. G. Dansereau, O. Pizarro, and S. B. Williams, “Decoding, calibration and rectification for lenselet-based plenoptic cameras,” in Conference on Computer Vision and Pattern Recognition (IEEE, 2013), pp. 1027–1034.

12. Y. Bok, H.-G. Jeon, and I. S. Kweon, “Geometric calibration of micro-lens-based light field cameras using line features,” IEEE Trans. Pattern Anal. Machine Intell. 39(2), 287–300 (2017).

13. C. Li, X. Zhang, and D. Tu, “Metric three-dimensional reconstruction model from a light field and its calibration,” Opt. Eng. 56(1), 013105 (2017).

14. B. Chen and B. Pan, “Full-field surface 3D shape and displacement measurements using an unfocused plenoptic camera,” Exp. Mech. 58(5), 831–845 (2018).

15. S. Pertuz, E. Pulido-Herrera, and J.-K. Kamarainen, “Focus model for metric depth estimation in standard plenoptic cameras,” ISPRS J. Photogramm. Remote Sens. 144, 38–47 (2018).

16. Z. Cai, X. Liu, G. Pedrini, W. Osten, and X. Peng, “Unfocused plenoptic metric modeling and calibration,” Opt. Express 27(15), 20177–20198 (2019).

17. S. Wanner and B. Goldluecke, “Globally consistent depth labeling of 4D light fields,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 41–48.

18. C. Kim, H. Zimmer, Y. Pritch, A. Sorkine-Hornung, and M. Gross, “Scene reconstruction from high spatio-angular resolution light fields,” ACM Trans. Graph. 32(4), 1 (2013).

19. M. W. Tao, S. Hadap, J. Malik, and R. Ramamoorthi, “Depth from combining defocus and correspondence using light-field cameras,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2013), pp. 673–680.

20. H. Lin, C. Chen, S. B. Kang, and J. Yu, “Depth recovery from light field using focal stack symmetry,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2015), pp. 3451–3459.

21. C. Hahne, A. Aggoun, V. Velisavljevic, S. Fiebig, and M. Pesch, “Refocusing distance of a standard plenoptic camera,” Opt. Express 24(19), 21521–21540 (2016).

22. Y. Chen, X. Jin, and Q. Dai, “Distance measurement based on light field geometry and ray tracing,” Opt. Express 25(1), 59–76 (2017).

23. Y. Zhang, H. Lv, Y. Liu, H. Wang, X. Wang, Q. Huang, X. Xiang, and Q. Dai, “Light-field depth estimation via epipolar plane image analysis and locally linear embedding,” IEEE Trans. Circuits Syst. Video Technol. 27(4), 739–747 (2017).

24. Williem, I. K. Park, and K. M. Lee, “Robust light field depth estimation using occlusion-noise aware data costs,” IEEE Trans. Pattern Anal. Mach. Intell. 40(10), 2484–2497 (2018).

25. Z. Cai, X. Liu, G. Pedrini, W. Osten, and X. Peng, “Accurate depth estimation in structured light fields,” Opt. Express 27(9), 13532–13546 (2019).


F. E. Ives, “Parallax stereogram and process of making same,” United States Patent Application 725,567, (1903).

M. Levoy and P. Hanrahan, “Light field rendering,” in Proceeding of ACM SIGGRAPH (ACM, 1996), pp. 31–42.

S. J. Gortler, R. Grzeszczuk, R. Szeliski, and M. F. Cohen, “The lumigraph,” in Proceeding of ACM SIGGRAPH (ACM, 1996), pp. 43–54.



Figures (12)

Fig. 1. Experimental Scene 1: central views of (a) a recorded LF and (b) a phase-encoded field, respectively.
Fig. 2. Distribution curves of the distorted and undistorted correspondence responses across the depth range.
Fig. 3. LF depth estimation of Scene 1: depth maps estimated (a) without and (b) with consideration of the plenoptic imaging distortion, respectively.
Fig. 4. Accuracy analysis of Scene 1: (a) spatial distributions of reconstructed 3D coordinates; (b) spatial distributions of plane-fitting errors; (c) histograms of plane-fitting errors, in the four cases, respectively.
Fig. 5. Boxplot of the plane-fitting errors of Scene 1.
Fig. 6. Experimental Scene 2: central views of (a) a recorded LF and (b) a phase-encoded field, respectively.
Fig. 7. LF depth estimation of Scene 2: depth maps estimated (a) without and (b) with consideration of the plenoptic imaging distortion, respectively.
Fig. 8. Accuracy analysis of Scene 2: (a) spatial distributions of reconstructed 3D coordinates; (b) spatial distributions of plane-fitting errors; (c) histograms of plane-fitting errors, in Case 1 and Case 4, respectively.
Fig. 9. Experimental Scene 3: (a) photograph; central views of (b) a recorded LF and (c) a phase-encoded field, respectively.
Fig. 10. LF depth estimation and unfocused plenoptic 3D measurement of Scene 3: (a) depth map and (b) spatial distribution of reconstructed 3D coordinates in Case 1; (c) depth map and (d) spatial distribution of reconstructed 3D coordinates in Case 4.
Fig. 11. Distribution curves of distance errors.
Fig. 12. LF depth estimation and unfocused plenoptic 3D measurement of Scene 4: (a) depth map and reconstructed 3D coordinates visualized as (b) a 3D model in front view and (c) a point cloud in side view in Case 1; (d) depth map and reconstructed 3D coordinates visualized as (e) a 3D model in front view and (f) a point cloud in side view in Case 4.

Tables (4)

Table 1. Calibrated system parameters.
Table 2. Plane-fitting errors of Scene 1 (mm).
Table 3. Plane-fitting errors of Scene 1 and Scene 2 for comparison (mm).
Table 4. Distances (mm) and parallelisms (rad) of Scene 3.

Equations (9)


$s_\alpha = s + u\left(1 - \frac{1}{\alpha}\right)$
$Z_f = \frac{m_1 \alpha + m_2}{\alpha + m_3}$
$\bar{\phi}_\alpha(s) = \frac{1}{|N_u|}\sum_{u \in N_u}\phi(s_\alpha, u)$
$\sigma_\alpha(s) = \frac{1}{|N_u|}\sum_{u \in N_u}\left\{\left(|u+1|\right)\left[\phi(s_\alpha, u) - \bar{\phi}_\alpha(s)\right]\right\}^2$
$C_\alpha(s) = \frac{1}{|W_s|}\sum_{s \in W_s}\sigma_\alpha(s)$
$\begin{cases} s_\alpha^u = s + (u + \delta u)\left(1 - \frac{1}{\alpha}\right) \\ \delta u = u\,\frac{k_1 r^2 + k_2 r^4 + k_3 r^6}{q_1(\alpha - 1) + q_2(\alpha - 1)^2} \end{cases}$
$\bar{\phi}_\alpha^u(s) = \frac{1}{|N_u|}\sum_{u \in N_u}\phi(s_\alpha^u, u)$
$\sigma_\alpha^u(s) = \frac{1}{|N_u|}\sum_{u \in N_u}\left\{\left(|u+1|\right)\left[\phi(s_\alpha^u, u) - \bar{\phi}_\alpha^u(s)\right]\right\}^2$
$\sigma = \frac{\sum_{i=1}^{N}\left(d_i\,\Delta d_i^2\right)}{\sum_{i=1}^{N} d_i}$
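
The following Python sketch (not the authors' implementation) illustrates how the refocusing shear and the distortion-compensated correspondence response above can be evaluated on a simplified 2D light field with one spatial and one angular coordinate. The distortion coefficients k1, k2, k3, q1, q2, the radial-distance proxy, and the synthetic scene are illustrative assumptions, not calibrated values.

```python
# Minimal sketch, assuming a 2D light field L(s, u); all numeric coefficients are illustrative.
import numpy as np


def correspondence_response(L, s_axis, u_axis, alpha, window=3,
                            compensate_distortion=False,
                            k=(1e-6, 0.0, 0.0), q=(1.0, 0.1)):
    """Correspondence cost C_alpha(s): window-averaged angular variance of the
    sheared light field; its minimum over alpha indicates the in-focus depth."""
    r = np.abs(s_axis - s_axis.mean())                 # radial-distance proxy (assumption)
    phi = np.empty((len(s_axis), len(u_axis)))
    for j, u in enumerate(u_axis):
        if compensate_distortion:
            radial = k[0] * r**2 + k[1] * r**4 + k[2] * r**6
            depth = q[0] * (alpha - 1.0) + q[1] * (alpha - 1.0) ** 2   # assumes alpha != 1
            du = u * radial / depth
        else:
            du = 0.0
        # Shear s_alpha^u = s + (u + delta_u)(1 - 1/alpha), then resample phi(s_alpha^u, u).
        s_shift = s_axis + (u + du) * (1.0 - 1.0 / alpha)
        phi[:, j] = np.interp(s_shift, s_axis, L[:, j])
    phi_bar = phi.mean(axis=1, keepdims=True)          # angular mean phi_bar_alpha(s)
    w = np.abs(u_axis + 1.0)                           # weight |u + 1| from the variance term
    sigma = np.mean((w * (phi - phi_bar)) ** 2, axis=1)            # sigma_alpha(s)
    return np.convolve(sigma, np.ones(window) / window, mode="same")  # C_alpha(s)


if __name__ == "__main__":
    # Synthetic fronto-parallel Lambertian plane whose depth corresponds to alpha_true.
    rng = np.random.default_rng(0)
    s_axis = np.arange(200, dtype=float)
    u_axis = np.arange(-3.0, 4.0)
    texture = rng.random(260)
    alpha_true = 1.3
    L = np.stack([np.interp(s_axis - u * (1.0 - 1.0 / alpha_true),
                            np.arange(260, dtype=float), texture) for u in u_axis], axis=1)
    alphas = np.linspace(1.1, 1.6, 26)
    costs = [correspondence_response(L, s_axis, u_axis, a).mean() for a in alphas]
    print("estimated alpha:", alphas[int(np.argmin(costs))])       # expected ~1.3
```

Sweeping alpha twice, once without and once with the delta-u correction (given calibrated coefficients), produces response curves of the kind compared in Fig. 2.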
