Abstract

In this paper, we propose a geometric optical model to measure the distances of object planes in a light field image. The proposed geometric optical model is composed of two sub-models based on ray tracing: an object space model and an image space model. The two theoretic sub-models are derived for on-axis point light sources. In the object space model, light rays propagate into the main lens and refract inside it according to the law of refraction. In the image space model, light rays exit from emission positions on the main lens and subsequently impinge on the image sensor with different imaging diameters. The relationships between the imaging diameters of objects and their corresponding emission positions on the main lens are investigated using refocusing and the similar-triangle principle. By combining the two sub-models and tracing light rays back to the object space, the relationships between objects' imaging diameters and the corresponding distances of object planes are derived. The performance of the proposed geometric optical model is compared with existing approaches using different configurations of hand-held plenoptic 1.0 cameras, and real experiments are conducted using a preliminary imaging system. Results demonstrate that the proposed model outperforms existing approaches in terms of accuracy and exhibits good performance over a general imaging range.

© 2017 Optical Society of America

1. Introduction

Light field imaging technologies are coming into prominence, and so-called light field cameras, also known as plenoptic cameras, have attracted increasing attention in recent years. Contrary to conventional digital cameras, plenoptic cameras can capture both 2D spatial and 2D angular information in a single shot. To acquire this 4D information, plenoptic cameras insert a microlens array (MLA) between the main lens and the image sensor. According to the position at which the MLA is placed, plenoptic cameras can be classified into two categories: plenoptic 1.0 and plenoptic 2.0. Ng et al. [1] first presented the prototype of plenoptic 1.0 cameras, which has an MLA positioned at the imaging plane of the main lens and is commercially known as Lytro [2]. In 2009, Lumsdaine et al. [3] introduced a new rendering technique for plenoptic 2.0 cameras, in which the MLA focuses on the imaging plane of the main lens to form a relay system that reimages the image of the object onto the image sensor; this design is commercially known as Raytrix [4]. The optical structure of plenoptic 1.0 cameras is depicted in Fig. 1. As shown in Fig. 1, light rays emitted from objects located on plane a propagate through the main lens and converge at the MLA. These converged light rays are then split onto the sensor area under the corresponding micro lenses, which is precisely what allows both 2D spatial and angular information to be recorded. The acquired 4D information allows for digital refocusing [5], synthesizing viewpoints [6,7], extending the depth of field [8], saliency detection [9], etc.

Fig. 1 The optical structure of plenoptic 1.0 cameras, where f_x is the focal length of the MLA.

Currently, distance measurement using hand-held plenoptic 1.0 cameras is becoming an attractive application. Existing techniques for distance measurement can be broadly divided into two types: active ranging and passive ranging. Active ranging, such as laser ranging [10] and radar ranging [11], requires expensive equipment and is easily affected by the environment. Passive ranging, such as binocular camera systems [12] and camera array systems [13], is limited by portability and calibration. These problems can be alleviated by using hand-held plenoptic 1.0 cameras. Although the depth information estimated from light field data [14–19] can be used to inversely calculate distances, the accuracy is limited by the roughness of the depth maps, especially in texture-less regions. Therefore, Hahne et al. [20] analyzed the imaging system of hand-held plenoptic 1.0 cameras and developed an approach to measure the distance of the object plane on which the user is refocusing. The method treats a pair of light rays as a system of linear functions whose solution yields an intersection indicating the distance to the refocused object plane. However, the problem of obtaining the distances of all object planes through only one refocusing process is not solved in [20]. In addition, refocusing onto planes closer to the main lens with the refocusing synthesis technique in [20] inherently implies an interpolation of the light field image, which requires a large amount of computer memory.

In order to measure all distances of object planes in an image captured by a hand-held plenoptic 1.0 camera through only one refocusing implementation, and with higher accuracy, we put forward a geometric optical model in this paper. The proposed geometric optical model consists of two sub-models based on ray tracing, namely the object space model and the image space model, and the two theoretic sub-models are derived for on-axis point light sources. The object space model describes the refraction of light rays inside the main lens according to the law of refraction. The image space model describes the relationship between the emission positions of light rays exiting the main lens and the corresponding imaging diameters on the image sensor, through refocusing and the similar-triangle principle. By combining the two sub-models and tracing light rays back to the object space, the relationships between objects' imaging diameters and the corresponding distances of object planes are derived. Results of the performance comparison demonstrate that the proposed geometric optical model provides higher accuracy in distance measurement than existing methods, with higher adaptability to different optical configurations of the imaging system.

The rest of the paper is organized as follows. Section 2 illustrates the proposed geometric optical model in detail based on the optical analysis of the imaging of objects. Experimental results and analyses are provided in Section 3. Conclusions and future work are presented in Section 4.

2. Optical analysis and geometric optical model

2.1 Optical analysis

Apart from the well-focused light field imaging case shown in Fig. 1, objects that do not lie on plane a will be defocused and imaged as Fig. 2 depicts. As shown in Fig. 2, under the assumption that light rays propagate into the main lens covering its whole pupil diameter and that refraction at the MLA is neglected, light rays (highlighted in green) emitted from objects on plane a′, which is in front of plane a, will focus behind the image sensor with the corresponding imaging diameter D₁. Similarly, light rays (highlighted in red) emitted from objects on plane ã, which is nearer to plane a but still in front of it, will focus between the image sensor and the MLA. Light rays (highlighted in blue) emitted from objects on plane â, which is behind plane a, will focus between the MLA and the main lens. As can be seen from Fig. 2, the imaging diameters of objects on the image sensor differ and depend on the corresponding distances of the object planes. Based on this analysis, the distances of object planes can potentially be measured by investigating their relationships with the corresponding imaging diameters.

Fig. 2 Light field imaging of objects at different distances.

In order to establish the relationships between the imaging diameters of objects and the distances of object planes, the whole light field imaging system is divided into two sub-systems. One sub-system covers light ray propagation between the objects and the main lens, and the other covers light ray propagation between the main lens and the image sensor. Following the optical structure of hand-held plenoptic 1.0 cameras, light rays that impinge on the image sensor are first traced back to the main lens using refocusing and the similar-triangle principle, to obtain the relationships between the imaging diameters and the emission positions of light rays on the main lens. Subsequently, light rays on the main lens are traced back to the object space using the law of refraction, to derive the relationships between the emission positions and the distances of object planes. The details of deriving these relationships are described in the following section.

2.2 Proposed geometric optical model

According to the above analyses, a geometric optical model is proposed which consists of two sub-models based on ray tracing: the object space model and the image space model. Herein, point light sources are regarded as the captured objects, and the two theoretic sub-models are derived for on-axis point light sources.

Using plane a′ in Fig. 2 as an instance, in the object space model, as shown in Fig. 3, light rays emitting from an object (an on-axis point light source) on plane a′ propagate into the main lens and refract inside it following the law of refraction. As shown in the figure, d_out denotes the distance between plane a′ and the main lens, which is exactly the quantity to be measured. Its relationship with the angle denoted by φ in Fig. 3 is given by

tan φ = (D/2) / (d_out − T/2 + R − √(R² − D²/4)),  (1)

where R represents the radius of curvature of the main lens, T represents its central thickness, and D is its pupil diameter. Equation (1) indicates that d_out can be calculated by

d_out = D / (2 tan φ) + √(R² − D²/4) − R + T/2.  (2)

Therefore, we need to acquire the value of φ in order to obtain d_out.
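As a quick numerical check of Eqs. (1) and (2), the following Python sketch verifies that the two equations are mutual inverses. All lens parameters here (R = 50 mm, T = 3 mm, D = 20 mm) are hypothetical values chosen only for illustration:

```python
import math

def incident_angle(d_out, D, R, T):
    """Eq. (1): angle phi of the marginal ray from an on-axis point at
    distance d_out to the edge of the front surface of the main lens."""
    sag = R - math.sqrt(R**2 - D**2 / 4)           # sagitta of the front surface
    return math.atan((D / 2) / (d_out - T / 2 + sag))

def object_distance(phi, D, R, T):
    """Eq. (2): invert Eq. (1) to recover d_out from phi."""
    return D / (2 * math.tan(phi)) + math.sqrt(R**2 - D**2 / 4) - R + T / 2

# hypothetical main lens, object at 1000 mm: the round trip recovers d_out
phi = incident_angle(1000.0, D=20.0, R=50.0, T=3.0)
assert abs(object_distance(phi, D=20.0, R=50.0, T=3.0) - 1000.0) < 1e-9
```

Note that φ decreases monotonically as d_out grows, which is what makes the inversion in Eq. (2) well defined.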

Fig. 3 Object space model: light ray propagation from objects to the main lens.

In Fig. 3, light rays refract inside the main lens and arrive at the position (p, q), which is marked by a green dot. The refractive angle ψ satisfies the law of refraction,

n₁ sin ψ = sin(φ + θ₁),  (3)

where n₁ is the refractive index of the main lens; ψ is the included angle between the normal (the dashed purple line) and the refracted light rays inside the main lens; and θ₁ satisfies

sin θ₁ = D / (2R).  (4)
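A minimal sketch of Eqs. (3) and (4), solving for the refracted angle ψ inside the lens; the incidence angle, lens geometry, and refractive index (n₁ = 1.5168, typical of BK7 glass) are assumed values for illustration:

```python
import math

def angle_inside_lens(phi, D, R, n1):
    """Eqs. (3)-(4): refraction at the front surface of the main lens.
    Returns psi, the angle between the surface normal and the refracted ray."""
    theta1 = math.asin(D / (2 * R))                   # Eq. (4)
    return math.asin(math.sin(phi + theta1) / n1)     # Eq. (3) solved for psi

# entering the denser medium (n1 > 1) bends the ray toward the normal
psi = angle_inside_lens(phi=0.1, D=20.0, R=50.0, n1=1.5168)
assert psi < 0.1 + math.asin(20.0 / 100.0)
```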

After arriving at position (p, q) as marked in Fig. 3, light rays exit from this position with a refractive angle ω and then impinge on the image sensor. These propagations take place in the image space, and the proposed image space model is shown in Fig. 4.

Fig. 4 Image space model: light ray propagation from the main lens to the image sensor.

By utilizing the law of refraction, we have

n₁ sin(θ₁ − ψ + θ₂) = sin ω,  (5)

where θ₁ − ψ equals the included angle between the refracted light rays and the horizontal axis, and θ₂ satisfies

sin θ₂ = q / R.  (6)

If the emergent light rays that impinge on the image sensor are extended, they will intersect on a plane behind the image sensor at a distance d, as shown in Fig. 4. Considering that the focal length and thickness of a general MLA are very small, the deviation caused by refraction at the MLA is neglected. As a consequence, we have

tan(ω − θ₂) = D₁ / (2d),  (7)

where D₁ represents the imaging diameter on the image sensor. Besides, the position (p, q) lies on the curved surface of the main lens, so p and q satisfy

(R − T/2 + p)² + q² = R²,  (8)

tan(ω − θ₂) = q / (f_x + d + d_in − p),  (9)

where f_x is the focal length of the MLA and d_in denotes the distance between the MLA and the main lens. Combining Eqs. (7) and (9), we have

D₁ / (2d) = q / (f_x + d + d_in − p).  (10)

Analyses of Eqs. (8) and (10) show that p and q can only be obtained after d_in, D₁, and d are all known.
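Equations (8) and (10) form a small nonlinear system in (p, q). Because q is small compared with R, a simple fixed-point iteration converges quickly; the sketch below uses f_x and d_in from the text and otherwise hypothetical values (all lengths in mm):

```python
import math

def exit_point(D1, d, d_in, f_x, R, T, iters=100):
    """Solve Eqs. (8) and (10) for the emission position (p, q) on the
    rear surface of the main lens by fixed-point iteration."""
    p = T / 2.0                                     # start at the rear vertex
    for _ in range(iters):
        q = (D1 / (2 * d)) * (f_x + d + d_in - p)   # Eq. (10)
        p = math.sqrt(R**2 - q**2) - R + T / 2.0    # Eq. (8) solved for p
    return p, q

p, q = exit_point(D1=0.5, d=5.0, d_in=101.0, f_x=2.816, R=50.0, T=3.0)
# both residuals vanish (numerically) at the fixed point
assert abs((50.0 - 1.5 + p)**2 + q**2 - 50.0**2) < 1e-6
assert abs(0.5 / (2 * 5.0) - q / (2.816 + 5.0 + 101.0 - p)) < 1e-9
```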

In order to obtain the values of d_in, D₁, and d, refocusing and inverse ray tracing are required. Therefore, once the light field image is captured, it is refocused to a reference plane at a known distance from the main lens. The refocusing is carried out using the ray tracing technique proposed in [20], as shown in Fig. 5. The reference plane can be considered as plane a in Fig. 2, whose distance is d_out. Combining Fig. 2 and Fig. 5, objects on other planes, such as a′, are defocused, and are equivalent to being captured under an optical configuration in which the distance between the MLA and the main lens equals d_in. According to [20], the slope of the light ray emitted from the center of a pixel on the image sensor to the corresponding micro lens, denoted by m_i, is given by

m_i = (y_j − v_i) / f_x,  (11)

where y_j represents the vertical central coordinate of each micro lens and v_i represents the vertical central coordinate of each pixel on the image sensor. Both subscripts i and j count from zero; the range of i is the vertical-dimension number of pixels under each micro lens, and the range of j is the vertical-dimension number of micro lenses. Since the terms on the right-hand side of Eq. (11) are all known, each slope m_i can be calculated. These light rays propagate through the main lens and corresponding positions on the focal plane F_u (the dots marked in the same color as the light rays), and then converge to a point on plane a, as shown in Fig. 5. The intervals between the dots on plane F_u are the baselines of virtual viewpoints in hand-held plenoptic 1.0 cameras [20]. The positions of the dots can be derived by

F_i = m_i × f,  (12)

where f is the focal length of the main lens. With the known slope m_i and focal length f, the position of each dot can be obtained.
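Equations (11) and (12) can be sketched as follows; the microlens center y_j, the pixel centers v_i, and both focal lengths below are hypothetical values chosen only to illustrate the computation:

```python
def ray_slope(y_j, v_i, f_x):
    """Eq. (11): slope of the ray from pixel center v_i to microlens center y_j."""
    return (y_j - v_i) / f_x

def focal_plane_dot(m_i, f):
    """Eq. (12): vertical position of the ray's dot on the focal plane F_u."""
    return m_i * f

# microlens at y_j = 0.1 mm, three pixel centers beneath it, f_x = 2.816 mm
slopes = [ray_slope(0.1, v, 2.816) for v in (0.086, 0.100, 0.114)]
dots = [focal_plane_dot(m, 80.0) for m in slopes]   # assumed main lens f = 80 mm
# evenly spaced pixels give evenly spaced dots (the virtual-viewpoint baselines)
assert abs((dots[1] - dots[0]) - (dots[2] - dots[1])) < 1e-12
```

The equal spacing of the dots is exactly the constant baseline between virtual viewpoints noted in the text.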

Fig. 5 Refocusing model for deriving d_in, where y and v represent the MLA and the image sensor, respectively.

Further, the slope of a light ray in object space, denoted by k_i2, can be derived by

k_i2 = (y₂ − F_i) / (d_out − f),  (13)

where y₂ represents the known vertical coordinate of point y₂ on plane a. Using the purple light ray in Fig. 5 as an instance, k₀₂ indicates the incident direction of this light ray. With the known y₂ and F₀, the intersection of this light ray with the right curved surface of the main lens can be obtained. Then, (p₀, q₀) can be ascertained by the law of refraction. Finally, d_in can be derived by

d_in = (q₀ − y₂ + m₀ p₀) / m₀.  (14)

D₁ can be obtained by recording the imaging diameters of objects on plane a′ in the refocused light field image.
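Equation (14) simply intersects the traced ray of slope m₀ through the exit point (p₀, q₀) with the known height y₂. A sketch with made-up numbers (purely illustrative; the consistency check constructs y₂ so that the true d_in is 101 mm):

```python
def mla_distance(q0, p0, y2, m0):
    """Eq. (14): derive d_in from the ray of slope m0 through (p0, q0)
    that reaches the known height y2."""
    return (q0 - y2 + m0 * p0) / m0

# build a consistent ray: with d_in = 101 mm, the ray of slope m0 = 0.05
# through (p0, q0) = (1.2, 5.0) reaches y2 = q0 - m0 * (d_in - p0)
y2 = 5.0 - 0.05 * (101.0 - 1.2)
assert abs(mla_distance(5.0, 1.2, y2, 0.05) - 101.0) < 1e-9
```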

To obtain d, an approximation is made in the image space model: if the emergent light rays are extended back to the main lens, they are assumed to intersect at the yellow dots marked in Fig. 4, i.e., at the centers of the edge thickness at the margin of the pupil of the main lens. Utilizing the similar-triangle principle, which gives

D₁ / d ≈ D / (f_x + d + d_in),  (15)

d can be approximately obtained as

d ≈ (f_x + d_in) D₁ / (D − D₁).  (16)

After the above processing and approximation, p and q can be derived by plugging d_in, D₁, and d into Eqs. (8) and (10), and are then used for deriving d_out.
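Equation (16) follows algebraically from Eq. (15). The sketch below checks that the resulting d satisfies the similar-triangle relation exactly; f_x is taken from the text, the other values are hypothetical (in mm):

```python
def intersection_distance(D1, D, d_in, f_x):
    """Eq. (16): approximate distance d behind the image sensor at which
    the extended emergent rays intersect."""
    return (f_x + d_in) * D1 / (D - D1)

d = intersection_distance(D1=0.5, D=20.0, d_in=101.0, f_x=2.816)
# Eq. (15): D1 / d = D / (f_x + d + d_in) holds for this d
assert abs(0.5 * (2.816 + d + 101.0) - 20.0 * d) < 1e-9
```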

After that, the refractive angle ω can be calculated by

ω = arctan(D₁ / (2d)) + θ₂ = arctan(D₁ / (2d)) + arcsin(q / R).  (17)

Subsequently, ψ and φ can be obtained by solving

ψ = arcsin(D / (2R)) + arcsin(q / R) − arcsin(sin(ω) / n₁),  (18)

φ = arcsin(n₁ sin ψ) − arcsin(D / (2R)).  (19)

Finally, d_out can be obtained by plugging the calculated φ into Eq. (2).
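The final back-tracing chain, Eqs. (17)-(19) followed by Eq. (2), can be sketched as a single function. The inputs below are hypothetical and only demonstrate that the chain produces a finite, positive distance:

```python
import math

def recover_distance(D1, d, q, D, R, T, n1):
    """Eqs. (17)-(19) then Eq. (2): trace back from the imaging diameter
    D1 and the solved emission height q to the object distance d_out."""
    omega = math.atan(D1 / (2 * d)) + math.asin(q / R)             # Eq. (17)
    psi = (math.asin(D / (2 * R)) + math.asin(q / R)
           - math.asin(math.sin(omega) / n1))                      # Eq. (18)
    phi = math.asin(n1 * math.sin(psi)) - math.asin(D / (2 * R))   # Eq. (19)
    return D / (2 * math.tan(phi)) + math.sqrt(R**2 - D**2 / 4) - R + T / 2  # Eq. (2)

d_out = recover_distance(D1=0.5, d=5.0, q=5.4, D=20.0, R=50.0, T=3.0, n1=1.5168)
assert d_out > 0 and math.isfinite(d_out)
```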

For planes ã and â in Fig. 2, their respective object space models are the same as Fig. 3 shows. Differences exist in the image space model, since the focal planes in image space are now located in front of the image sensor. Therefore, Eq. (9) changes to

tan(ω − θ₂) = q / (f_x − d + d_in − p),  (20)

and this change is the same for both planes ã and â. Then, Eq. (10) changes correspondingly to

D₁ / (2d) = q / (f_x − d + d_in − p).  (21)

In addition, Eq. (15) should be replaced by

D₁ / d ≈ D / (f_x − d + d_in),  (22)

which leads to the change of Eq. (16) into

d ≈ (f_x + d_in) D₁ / (D + D₁).  (23)

The steps for deriving the distances of planes ã and â are then exactly the same as those described for plane a′, with the updated equations above.
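The two cases differ only in the sign of d (rays focusing behind versus in front of the image sensor), which flips the sign of D₁ in the denominator of the d approximation. A side-by-side sketch of Eqs. (16) and (23), with illustrative values:

```python
def approx_d(D1, D, d_in, f_x, focus_behind_sensor=True):
    """Eq. (16) when the rays focus behind the sensor (plane a'),
    Eq. (23) when they focus in front of it (planes a-tilde, a-hat)."""
    denom = D - D1 if focus_behind_sensor else D + D1
    return (f_x + d_in) * D1 / denom

# the same imaging diameter implies a larger d for the behind-sensor case
d_behind = approx_d(0.5, 20.0, 101.0, 2.816, True)
d_front = approx_d(0.5, 20.0, 101.0, 2.816, False)
assert d_behind > d_front > 0
```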

3. Experimental results and analyses

3.1 Simulation system

For the purpose of validating the utility of the proposed geometric optical model, imaging systems of hand-held plenoptic 1.0 cameras are simulated in the optics tool Zemax [21], as shown in Fig. 6. Figure 6(a) depicts a screenshot of the simulated imaging systems; the components from left to right are the image sensor (white), the MLA, and the main lens, respectively. The gap between the image sensor and the MLA is magnified for better visibility. The performance of the proposed geometric optical model is compared with the method provided in [20]. As mentioned before, the method in [20] treats a pair of light rays as a system of linear functions and considers the solutions as the distances of refocused object planes. Two optical configurations of hand-held plenoptic 1.0 cameras are designed for comparison, and the F-numbers of the main lens and the MLA are always kept equal [22]. Figures 6(b) and 6(c) show the zoomed MLA used in the two optical configurations. The geometrical parameters used for designing the two simulated imaging systems are summarized in Table 1. The focal lengths of the MLA and main lens in imaging system 1 are the same as those used in [20]. The wavelength of light rays, λ, is set to 632.8 nm, the same as in [20], to ensure the fairness of the performance comparison.

Fig. 6 Zemax screenshots.

Table 1. Geometrical Parameters of Two Simulated Imaging Systems.

3.2 Performance comparison and analyses

The estimation error of distance, denoted by ERROR, is used for comparing the performance of the proposed geometric optical model and the method in [20]; it is given by

ERROR = |d_out − Ed_out| / d_out = |A_d + T/2 − (E_d + T/2)| / (A_d + T/2) = |A_d − E_d| / (A_d + T/2),  (24)

where d_out represents the real distances of object planes, such as planes a′, ã, and â, as depicted in Fig. 2; Ed_out represents the estimated distances of object planes achieved according to the derivations in Section 2; A_d refers to the actual distance between an object plane and the on-axis vertex of the right curved surface of the main lens, so that d_out equals A_d + T/2; and E_d represents the estimated distance between an object plane and the same vertex, so that Ed_out equals E_d + T/2. Results of the estimated distances and estimation errors for the two imaging systems are listed in Table 2.
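The metric reduces to |A_d − E_d| / (A_d + T/2); a minimal sketch with illustrative numbers:

```python
def estimation_error(A_d, E_d, T):
    """Relative estimation error |A_d - E_d| / (A_d + T/2), with A_d and
    E_d measured from the right vertex of the main lens on the axis."""
    return abs(A_d - E_d) / (A_d + T / 2)

# e.g. actual 1000 mm, estimated 980 mm, lens thickness 3 mm -> about 2% error
assert abs(estimation_error(1000.0, 980.0, 3.0) - 20.0 / 1001.5) < 1e-15
```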

Table 2. Estimation Error Comparison for the Proposed Model and That in [20].

As can be seen from Table 2, the proposed geometric optical model outperforms the method provided in [20] for the majority of distances, particularly in the actual distance range of 0.2 m to 2.5 m in imaging system 1. In addition, the proposed geometric optical model outperforms the method in [20] by an average of 1.954% reduction in estimation error, and the maximum reduction reaches 4.678%. In imaging system 2, the proposed geometric optical model is superior to the method in [20] over the whole distance range. The reduction in average estimation error is 4.643%, and the maximum reduction reaches 5.484%. More importantly, in the near distance range of 0.2 m to 1.3 m, the estimation errors obtained by the proposed geometric optical model are more than 4 times lower, and even 9 times lower at 0.3 m to 0.6 m, than those of the method provided in [20], which shows higher potential in applications.

The estimation errors of the two imaging systems in Table 2 are graphed in Figs. 7(a) and 7(b). In general, the proposed geometric optical model outperforms the algorithm in [20] by providing much higher accuracy for both optical configurations. It is also observed from Figs. 7(a) and 7(b) that the estimation error gradually increases with distance for the proposed algorithm. The reason mainly lies in the approximation made in the image space model. In deriving Eqs. (15) and (22), light rays r₁ and r₂ are approximated by r₁′ and r₂′, respectively, as shown in Fig. 8. Thus, deviations exist in the light ray emission position (p, q) on the main lens, such as Δq₁ and Δq₂, which leads to the estimation errors in Eqs. (10) and (21). Farther distances in object space generally cause a larger Δq and finally result in larger estimation errors. This also causes slightly worse performance when the actual distance is farther than 2.5 m in imaging system 1.

Fig. 7 Estimation error comparison for the two imaging systems in Table 2: (a) Imaging system 1; (b) Imaging system 2.

Fig. 8 Ray tracing model for error analysis, where s_i = d_in + d. The green and red light rays correspond to plane a′ and plane ã in Fig. 2, respectively.

However, it is found that the larger estimation error at farther distances may be compensated by changing the geometric parameters of components in the hand-held plenoptic 1.0 camera, such as the main lens. Therefore, eight testing cases (TCs) are designed in Table 3 by changing the parameters of the main lens while keeping the MLA used in imaging system 2 (a convexo-convex MLA) with an unchanged focal length f_x = 2.816 mm. The pitch of each micro lens, D_m, is varied with the change of the focal length of the main lens in order to keep the F-numbers of the main lens and the MLA equal [22]. The eight testing cases can be classified into three categories. We mainly select two focal lengths for the main lens: one is around 79 mm, as in TC1-TC3; the other is around 98 mm, as in TC4-TC6. They are controlled by changing the radius of curvature of the main lens, R. Within each group, TC1-TC3 or TC4-TC6, the central thickness of the main lens, T, is changed (which affects the focal length slightly) to investigate its effect on the estimation error. TC7 and TC8 use the same focal length of the main lens as TC6, while investigating the effect of changing the pupil diameter of the main lens, D. The refractive index of the main lens, n₁, and the wavelength of light rays, λ, are the same as those in Table 1.

Table 3. Geometric Parameters of Eight Testing Cases.

The estimation errors of all the testing cases are listed in Table 4. First, it is found that for the same R, which results in almost the same focal length of the main lens, a smaller T contributes to smaller estimation errors, as shown by the comparisons among TC1 to TC3 and among TC4 to TC6. Actually, the imaging diameters on the image sensor are not affected by changing T, while the estimated distance d decreases when T becomes smaller, which results in a larger ω − θ₂. For a larger ω − θ₂, the estimated q would be smaller, leading to a larger Δq. However, since T becomes smaller, the estimated p also becomes smaller, and q eventually becomes larger according to Eq. (8). Therefore, Δq and the estimation error decrease with the decrease in T. On the contrary, the estimation error increases with a larger T, as depicted in Figs. 9(a) and 9(b).

Table 4. Estimation Errors of All the Testing Cases in Table 3.

Fig. 9 Estimation error comparison for the testing cases.

Second, it is observed that with exactly the same focal length of the main lens, corresponding to the same R and T, increasing D can improve the estimation accuracy noticeably, as seen by comparing the results among TC6 to TC8. The variation of the estimation error with distance is shown in Fig. 9(c). As D becomes larger, the thickness of the margins of the main lens, where the yellow dots in Fig. 4 are located, becomes smaller. Thus, the effect of enlarging D is approximately equivalent to reducing T, and inherently the maximum of Δq, as shown in Fig. 8, will be very small for a larger D, which results in smaller estimation errors.

Third, if we check the performance of each testing case across the groups, it can be further discovered that a larger focal length of the main lens, introduced by increasing R, provides lower estimation errors. Generally, a larger R leads to a smaller ω − θ₂ and further to a smaller Δq, according to the above analyses. However, it also results in an increase in the thickness of the margins of the main lens. As a consequence, from these two factors alone it is hard to judge whether Δq is increased or reduced. Fortunately, it is found that the actual refractive angle, ψ, decreases with the increase in R, which accounts for a smaller Δq. Integrating the above three factors, Δq is eventually reduced, which results in lower estimation errors. We compared TC1 with TC4, TC2 with TC5, and TC3 with TC6, as shown in Figs. 9(e), 9(f) and 9(g), respectively. The three pairs of comparisons are consistent with this conclusion.

In addition, it is observed that the estimation errors in Table 2 and Table 4 increase with small fluctuations. Although Δq generally increases with the distance between the object plane and the main lens, the estimation error does not increase linearly with Δq. The reason mainly lies in the fact that the majority of equations in the proposed model are non-linear. Besides, uncertainties exist in the amount of increase in Δq when changing the parameters of the main lens, especially the pupil diameter D. Thus, tiny fluctuations are observed in the computational results shown in Fig. 7 and Fig. 9.

It can also be noticed from Figs. 9(a), 9(b) and 9(c) that the effects on the estimation accuracy become obvious when the central thickness T and the pupil diameter D are changed substantially. Therefore, we compare their effects by increasing T and D at the same time. For this comparison, two additional testing cases are conducted, and their geometric parameters are summarized in Table 5. The geometric parameters of TC9 are the same as those of imaging system 1 in Table 1.

Table 5. Geometrical Parameters of Two Testing Cases Used for Comparing the Effects of Changing T and D.

The estimation errors of TC9 and TC10 are graphed in Fig. 10. As we can see from Fig. 10, the estimation error caused by larger T can be decreased by enlarging D.

Fig. 10 Estimation error comparison for increasing T and D at the same time.

Generally speaking, more accurate distance measurement of object planes in a light field image can be achieved by the proposed geometric optical model for hand-held plenoptic 1.0 cameras using a main lens with a larger pupil diameter, a longer focal length, and a smaller thickness. We may also consider replacing the MLA with one of a different focal length, so that the new optical configurations of hand-held plenoptic 1.0 cameras are adaptable to measuring farther distances; this is under investigation as one of our future works.

3.3 Testing on a real system

3.3.1 Prototype of a plenoptic imaging system

In order to further demonstrate the effectiveness of the proposed model, a prototype of a real imaging system has been built, as depicted in Fig. 11. The geometric parameters of the optical elements are summarized in Table 6. As shown in Fig. 11, a laser is used as an on-axis point light source, and the wavelength of the parallel light rays emitted from it is 532 nm. To make the light rays omnidirectional, an objective lens is placed in front of the laser so that the parallel light rays first focus at the focal point of the objective lens and subsequently propagate divergently. The reference plane is 1000 mm away from the main lens. The distance between the main lens and the MLA is 101 mm.

Fig. 11 The prototype of a real imaging system: (a) the front view; (b) and (c) the front view and the vertical view of the plenoptic imaging system in (a), respectively.

Table 6. Geometric Parameters of the Prototype.

3.3.2 Experimental results and analyses

Images obtained by changing the distance between the main lens and the light source are shown in Fig. 12. As can be seen from Fig. 12, the imaging diameter decreases as the distance increases. The imaging diameter is measured as the number of valid pixels in the vertical direction multiplied by the pixel pitch. Considering that the edges of the imaging results are not sharp enough due to the aberrations of the main lens, an advanced detection technique is desired to pick out the "valid" pixels at the image boundary. Several possible signal processing techniques could be developed, such as detecting the "valid" pixels by measuring the relative intensity variation along the radial direction or by measuring the absolute difference between the current pixel and the average intensity. The algorithms could be further optimized by considering regional continuity and smoothness, which needs to be investigated carefully in our future work. Here, a preliminary method is used to determine the imaging diameter from the histogram of the intensity distribution. Counting downward from the highest intensity value (e.g., 255), the intensity at which the accumulated distribution reaches a specific ratio of the total number of pixels is set as the threshold. The pixels whose intensities are larger than the threshold are regarded as "valid" for the imaging diameter. Results of the estimated distances and estimation errors are listed in Table 7.
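The preliminary histogram-threshold method described above can be sketched as follows. The routine works on a single vertical column of 8-bit intensities and counts pixels at or above the derived threshold; the 5% ratio and the sample data are illustrative assumptions, not values from the experiment:

```python
def imaging_diameter(column, pixel_pitch, ratio=0.05):
    """Preliminary thresholding: accumulate the intensity histogram from
    255 downward until `ratio` of the pixels is covered, then count the
    'valid' pixels at or above that threshold along the column."""
    hist = [0] * 256
    for v in column:
        hist[v] += 1
    accumulated, threshold = 0, 0
    for level in range(255, -1, -1):
        accumulated += hist[level]
        if accumulated >= ratio * len(column):
            threshold = level
            break
    valid = sum(1 for v in column if v >= threshold)
    return valid * pixel_pitch

# a bright 10-pixel spot on a dark background, with an assumed 1 um pixel pitch
column = [3] * 45 + [250] * 10 + [3] * 45
assert abs(imaging_diameter(column, pixel_pitch=0.001) - 0.01) < 1e-12  # mm
```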

Fig. 12 Images obtained at different distances d_out: (a) 500 mm; (b) 600 mm; (c) 700 mm; (d) 800 mm; (e) 900 mm.

Table 7. Results of Estimated Distances and Estimation Errors.

As shown in Table 7, the proposed model outperforms the method in [20] with an average of 0.4702% reduction in estimation error. It is also found that when the distance is larger than 800 mm, the estimation accuracy of the proposed model is slightly worse than that of the method provided in [20]. This may be caused by the suboptimal detection of the imaging diameter, which needs to be further investigated.

Based on the experiments, it is found that some problems still need to be solved in estimating distances with a real imaging system, such as how to compensate for fabrication errors and how to reduce registration errors. We also note that aberrations of the main lens and micro lenses exist. The aberrations of the main lens cause the light rays to converge to a small spot on the focal plane, instead of a theoretically sharp point, after propagating through the main lens. Therefore, when implementing refocusing using the model shown in Fig. 5, errors arise in deriving d_in. Thus, it is important and indispensable to calibrate the imaging system, by which correction or compensation factors can be obtained and used to update the proposed model to ensure the estimation accuracy; this is also left as future work.

4. Conclusions

In this paper, we put forward a geometric optical model to measure the distances of object planes in an image captured by a hand-held plenoptic 1.0 camera. The proposed geometric optical model consists of two sub-models based on ray tracing, namely the object space model and the image space model. Results of simulations and real experiments demonstrate that the proposed geometric optical model outperforms existing distance measurement methods in terms of accuracy, especially over a general imaging range. In addition, the measurement accuracy is further investigated for imaging systems with diverse geometric parameters, and the results reveal that the estimation error can be compensated by enlarging the pupil diameter or focal length of the main lens, or by reducing its thickness.

In order to further optimize the proposed model and improve its versatility, our future works include updating the image space model by further taking the refraction on MLA into consideration, developing calibration process to compensate the aberrations of the main lens and to reduce the registration errors in a real imaging system, and optimizing the signal processing techniques jointly with the image space model to detect the imaging diameter accurately.

The proposed two theoretic sub-models are derived for on-axis point light sources. For a real scenario, the model can be further extended by choosing feature points on the object planes as sampled point light sources, updating the on-axis models to off-axis ones, measuring the respective distance of each feature point, and taking the average of the measured distances as the final distance of the corresponding object plane. Making the proposed work applicable to industrial inspection, microscopy, retinal imaging, and even broader areas is also set as future work.

Funding

National Natural Science Foundation of China (NSFC) Guangdong Joint Foundation Key Project (U1201255 and 61371138).

References and links

1. R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Computer Science Technical Reports (CSTR), 2005.

2. Lytro, https://www.lytro.com/.

3. A. Lumsdaine and T. Georgiev, “The focused plenoptic camera,” in Proceedings of IEEE International Conference on Computational Photography, (IEEE, 2009), pp. 1–8.

4. Raytrix, https://www.raytrix.de/.

5. C. C. Chen, Y. C. Lu, and M. S. Su, “Light field based digital refocusing using a DSLR camera with a pinhole array mask,” in IEEE International Conference on Acoustics Speech and Signal Processing (IEEE, 2010), pp. 754–757. [CrossRef]

6. Y. Taguchi and T. Naemura, “View-Dependent Coding of Light Fields Based on Free-Viewpoint Image Synthesis,” in IEEE International Conference on Image Processing (IEEE, 2006), pp. 509–512. [CrossRef]

7. K. Rematas, T. Ritschel, M. Fritz, and T. Tuytelaars, “Image-Based Synthesis and Re-synthesis of Viewpoints Guided by 3D Models,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2014), pp. 3898–3905. [CrossRef]  

8. T. E. Bishop and P. Favaro, “The light field camera: extended depth of field, aliasing, and superresolution,” IEEE Trans. Pattern Anal. Mach. Intell. 34(5), 972–986 (2012). [CrossRef]   [PubMed]  

9. N. Y. Li, J. W. Ye, J. Yu, H. B. Ling, and J. Y. Yu, “Saliency Detection on Light Field,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2014), pp. 2806–2813.

10. Metrology Resource Co, http://www.metrologyresource.com/.

11. Weibel Inc., Weibel RR-60034 Ranging Radar System.

12. J. Xu, Y. M. Chen, and Z. L. Shi, “Distance measurement using binocular-camera with adaptive focusing,” J. Shanghai Univ. 15(2), 10072861 (2009).

13. K. Venkataraman, P. Gallagher, A. Jain, and S. Nisenzon, “US patent 705,885” (2015).

14. Z. Yu, X. Guo, H. Ling, A. Lumsdaine, and J. Yu, “Line Assisted Light Field Triangulation and Stereo Matching,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2013), pp. 2792–2799. [CrossRef]  

15. M. J. Kim, T. H. Oh, and I. S. Kweon, “Cost-aware depth map estimation for Lytro camera,” in IEEE International Conference on Image Processing (IEEE, 2014), pp. 36–40. [CrossRef]

16. M. W. Tao, S. Hadap, J. Malik, and R. Ramamoorthi, “Depth from Combining Defocus and Correspondence Using Light-Field Cameras,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2013), pp. 673–680. [CrossRef]  

17. M. W. Tao, P. P. Srinivasan, J. Malik, S. Rusinkiewicz, and R. Ramamoorthi, “Depth from shading, defocus, and correspondence using light-field angular coherence,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2015), pp. 1940–1948. [CrossRef]

18. Y. Xu, X. Jin, and Q. Dai, “Depth fused from intensity range and blur estimation for light-field cameras,” in IEEE International Conference on Acoustics Speech and Signal Processing (IEEE, 2016), pp. 2857–2861. [CrossRef]  

19. H. G. Jeon, J. S. Park, G. M. Choe, J. S. Park, Y. S. Bok, Y. W. Tai, and I. S. Kweon, “Accurate depth map estimation from a lenslet light field camera,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2015), pp. 1547–1555. [CrossRef]  

20. C. Hahne, A. Aggoun, S. Haxha, V. Velisavljevic, and J. C. Fernández, “Light field geometry of a standard plenoptic camera,” Opt. Express 22(22), 26659–26673 (2014). [CrossRef]   [PubMed]  

21. “Zemax,” http://www.zemax.com/.

22. E. Y. Lam, “Computational photography with plenoptic camera and light field capture: tutorial,” J. Opt. Soc. Am. A 32(11), 2021–2032 (2015). [CrossRef]   [PubMed]  



Figures (12)

Fig. 1 The optical structure of plenoptic 1.0 cameras, where f_x is the focal length of the MLA.
Fig. 2 Light field imaging of objects at different distances.
Fig. 3 Object space model: light ray propagation from objects to the main lens.
Fig. 4 Image space model: light ray propagation from the main lens to the image sensor.
Fig. 5 Refocusing model for deriving d_in, where y and v represent the MLA and the image sensor, respectively.
Fig. 6 Zemax screenshots.
Fig. 7 Estimation error comparison for two imaging systems in Table 3: (a) imaging system 1; (b) imaging system 2.
Fig. 8 Ray tracing model for error analysis, where s_i = d_in + d. The green and red light rays correspond to plane a and plane ã in Fig. 2, respectively.
Fig. 9 Estimation error comparison for the testing cases.
Fig. 10 Estimation error comparison for increasing T and D at the same time.
Fig. 11 The prototype of a real imaging system: (a) the front view; (b) and (c) the front view and the vertical view of the plenoptic imaging system in (a), respectively.
Fig. 12 Images obtained at different distances d_out: (a) 500 mm; (b) 600 mm; (c) 700 mm; (d) 800 mm; (e) 900 mm.

Tables (7)

Table 1 Geometrical Parameters of Two Simulated Imaging Systems.
Table 2 Estimation Error Comparison for the Proposed Model and That in [20].
Table 3 Geometric Parameters of Eight Testing Cases.
Table 4 Estimation Errors of All the Testing Cases in Table 3.
Table 5 Geometrical Parameters of Two Testing Cases Used for Comparing the Effects of Changing T and D.
Table 6 Geometric Parameters of the Prototype.
Table 7 Results of Estimated Distances and Estimation Errors.

Equations (24)


$$\tan\varphi=\frac{D}{2\left(d_{out}-T/2+R-\sqrt{R^{2}-D^{2}/4}\right)},$$
$$d_{out}=\frac{D}{2\tan\varphi}+\sqrt{R^{2}-D^{2}/4}-R+T/2.$$
$$n_{1}\sin\psi=\sin(\varphi+\theta_{1}),$$
$$\sin\theta_{1}=\frac{D}{2R}.$$
$$n_{1}\sin(\theta_{1}-\psi+\theta_{2})=\sin\omega,$$
$$\sin\theta_{2}=\frac{q}{R}.$$
$$\tan(\omega-\theta_{2})=\frac{D_{1}}{2d},$$
$$(R-T/2+p)^{2}+q^{2}=R^{2},$$
$$\tan(\omega-\theta_{2})=\frac{q}{f_{x}+d+d_{in}-p},$$
$$\frac{D_{1}}{2d}=\frac{q}{f_{x}+d+d_{in}-p}.$$
$$m_{i}=\frac{y_{j}-v_{i}}{f_{x}},$$
$$F_{i}=m_{i}\times f,$$
$$k_{i2}=\frac{y_{2}-F_{i}}{d_{out}-f},$$
$$d_{in}=\frac{q_{0}-y_{2}+m_{0}p_{0}}{m_{0}}.$$
$$\frac{D_{1}}{d}\approx\frac{D}{f_{x}+d+d_{in}},$$
$$d\approx\frac{(f_{x}+d_{in})D_{1}}{D-D_{1}}.$$
$$\omega=\arctan\left(\frac{D_{1}}{2d}\right)+\theta_{2}=\arctan\left(\frac{D_{1}}{2d}\right)+\arcsin\left(\frac{q}{R}\right).$$
$$\psi=\arcsin\left(\frac{D}{2R}\right)+\arcsin\left(\frac{q}{R}\right)-\arcsin\left(\frac{\sin\omega}{n_{1}}\right),$$
$$\varphi=\arcsin\left(n_{1}\sin\psi\right)-\arcsin\left(\frac{D}{2R}\right).$$
$$\tan(\omega-\theta_{2})=\frac{q}{f_{x}-d+d_{in}-p},$$
$$\frac{D_{1}}{2d}=\frac{q}{f_{x}-d+d_{in}-p}.$$
$$\frac{D_{1}}{d}\approx\frac{D}{f_{x}-d+d_{in}},$$
$$d\approx\frac{(f_{x}+d_{in})D_{1}}{D+D_{1}}.$$
$$\mathrm{ERROR}=\frac{\left|d_{out}^{E}-d_{out}\right|}{d_{out}}=\frac{\left|Ad+T/2-(Ed+T/2)\right|}{Ad+T/2}=\frac{\left|Ad-Ed\right|}{Ad+T/2}.$$
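The closing chain of the image space model can be exercised numerically: the measured imaging diameter D_1 yields the convergence distance d via d ≈ (f_x + d_in)D_1/(D − D_1), the exit angle φ yields the object distance via d_out = D/(2 tan φ) + √(R² − D²/4) − R + T/2, and the relative error metric is |Ad − Ed|/(Ad + T/2). The sketch below uses hypothetical parameter values, not those of the systems reported in the paper's tables:

```python
import math

def refocus_distance(D1, D, f_x, d_in):
    """d from the similar-triangle relation D1/d ~ D/(f_x + d + d_in),
    i.e. d ~ (f_x + d_in) * D1 / (D - D1) (object before the focal plane)."""
    return (f_x + d_in) * D1 / (D - D1)

def object_distance(phi, D, R, T):
    """d_out = D/(2 tan phi) + sqrt(R^2 - D^2/4) - R + T/2 from exit angle phi."""
    return D / (2.0 * math.tan(phi)) + math.sqrt(R**2 - D**2 / 4.0) - R + T / 2.0

def relative_error(A, E, d, T):
    """ERROR = |A*d - E*d| / (A*d + T/2), as in the last equation above."""
    return abs(A * d - E * d) / (A * d + T / 2.0)

# Hypothetical values (mm): pupil diameter D, surface radius R, lens thickness T,
# MLA focal length f_x, refocusing offset d_in, measured imaging diameter D1.
D, R, T, f_x, d_in = 20.0, 50.0, 5.0, 0.5, 40.0
d = refocus_distance(D1=2.0, D=D, f_x=f_x, d_in=d_in)
d_out = object_distance(phi=0.01, D=D, R=R, T=T)
```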
