Abstract

The Standard Plenoptic Camera (SPC) is an innovation in photography, allowing two-dimensional images focused at different depths to be acquired from a single exposure. Contrary to conventional cameras, the SPC consists of a micro lens array and a main lens projecting virtual lenses into object space. For the first time, the present research provides an approach to estimate the distance and depth of refocused images extracted from captures obtained by an SPC. Furthermore, estimates for the position and baseline of virtual lenses which correspond to an equivalent camera array are derived. On the basis of paraxial approximation, a ray tracing model employing linear equations has been developed and implemented using Matlab. The optics simulation tool Zemax is utilized for validation purposes. Experiments with a realistically designed SPC demonstrate that a predicted image refocusing distance at 3.5 m deviates by less than 11% from the simulation in Zemax, whereas baseline estimations indicate no significant difference. Applying the proposed methodology will enable an alternative to the traditional depth map acquisition by disparity analysis.

© 2014 Optical Society of America

1. Introduction

In recent years, there has been an increasing interest in plenoptic cameras and their ability to refocus two-dimensional (2-D) images after image capture. Initial research in the subject of light fields can be traced back to Ives [1] and Lippmann [2] who independently discovered the possibility of gathering light rays from different angles by an array of pinholes and micro lenses, respectively. Subsequently, several studies have been produced over the last century including the seven-dimensional plenoptic function [3], the plenoptic camera [4] and the four-dimensional (4-D) light field parameterization [5]. The latter achievement simplifies the plenoptic function by describing the Light Field (LF) as a set of rays intersecting two 2-D planes enabling image acquisition and reconstruction of spatial and angular light information. In 2000, Isaksen et al. [6] explored the capability of refocusing using viewpoint images captured by an array of cameras. Refocusing can be seen as synthesizing a 2-D Focused Image Slice (FIS) of the 4-D LF. Ng et al. carried this technological idea further to investigate refocusing based on a hand-held Standard Plenoptic Camera (SPC) having a Micro Lens Array (MLA) attached in front of the sensor [7]. Afterwards, this plenoptic setup was implemented in a microscope by Levoy et al. [8] and was recently advanced by Broxton et al. [9]. In 2009, Lumsdaine and Georgiev [10] massively improved the effective spatial resolution by introducing a new rendering technique for the Focused Plenoptic Camera (FPC) allowing for different positions of the MLA. Nevertheless, the FPC inherently causes a loss of angular information resulting in a trade-off between angular and spatial resolution [11]. An early investigation of depth measurement based on a disparity analysis of integral images was conducted by Wu et al. [12]. The challenge with this approach is to minimize the error resulting from the disparity map, which is due to the relatively small baseline compared to a camera array setup. Improvements have been made by Bishop et al. [13] and Perwass et al. [14]. The first research examining the position of virtual micro lenses was undertaken by Georgiev et al. [15]. Therein, principal plane calculations of the FPC identified that virtual lenses are projected into object space comparable to an array of cameras. So far, the current state of that attempt does not provide baseline estimations.

In relation to the SPC, uncertainty also exists about the virtual baseline as well as the distance of an FIS. With the aid of paraxial approximation and linear equations, this paper addresses and solves these problems for the first time. The presented proposition contributes to the subject of LF in several ways. First, it assists in the specification of an SPC in advance. Secondly, the distance prediction of FISs will enhance SPC depth map computations. Moreover, since screening plenoptic content on multi-view displays requires knowledge about the baseline setup, the novel method presented in this paper supports that application. Experimental tests using the real ray tracing tool Zemax verify the Matlab implementation with a relative error below 0.5% at distances ≤ 300 mm and less than 11% at a distance of approximately 3.5 m.

2. Ray tracing intersection model

Historically, the most influential achievement in capturing LF data was elaborated by Levoy et al. [5] when describing rays of an LF by intersections of two 2-D planes denoted by (u, v) and (s, t), respectively. Subsequently, Ng et al. [7] reinvestigated the LF parameterization L(s, t, u, v) by adapting it to a plenoptic camera in which the measured irradiance I occurring at the micro lens plane (s, t) is given by

$$I_{b_U}(s,t)=\frac{1}{b_U^{2}}\iint L_{b_U}(s,t,U,V)\,A(U,V)\cos^{4}\theta\,\mathrm{d}U\,\mathrm{d}V \tag{1}$$
where A(·) denotes the aperture, (U, V) the main lens plane and bU the spacing between the main lens and MLA plane (s, t). The roll-off factor cos⁴θ is also known as vignetting. By considering the aperture to be completely open, so that A(·) = 1, and neglecting the vignetting as well as the inverse square law in terms of the factor 1/bU², the equation can be shortened. In order to further simplify following descriptions, the LF is seen to be captured by rays intersecting two one-dimensional (1-D) planes disregarding the vertical dimension. Thereby, subsequent declarations are based on the assumption that camera parameters are equally specified in horizontal and vertical dimension, allowing the proposed solutions to be applied similarly in both directions. Hence, the irradiance is horizontally given by
$$I_{b_U}(s)=\int L_{b_U}(s,U)\,\mathrm{d}U \tag{2}$$
On closer examination of the simplified Eq. (2) from Ng, it may be obvious that LbU (s, U) is a compressed LF image projection determined by intersections at micro lens plane s and main lens plane U, compliant with the LF parameterization proposed by Levoy.

However, the incident irradiance of light is actually measured at the image plane of micro lenses, denoted by u. Therefore, the two planes providing retrievable coordinates at which rays intersect are rather (u, s) than (s, U). Neglecting reflections and absorptions due to the lens material, Fig. 1 illustrates, by means of the method of similar triangles, that the irradiance IbU at a particular point s emerging from the main lens U is proportionally distributed along its micro image Ws(u) with the irradiance Ifs (u). So it follows that

$$I_{b_U}(s)=\int I_{f_s}(u,s)\,\mathrm{d}u \tag{3}$$
In case the wavelength of the light spectrum is limited and weighted according to the human visual perception, the irradiance can be substituted by the photometric illuminance E giving
$$E_{b_U}(s)=\int E_{f_s}(u,s)\,\mathrm{d}u \tag{4}$$

Fig. 1 Planes of irradiance (Ref. [16], Fig. 1).

2.1. Standard plenoptic camera

A growing body of literature explains refocusing primarily by using the method of similar triangles [7, 14, 15]. According to Ng et al. [7], in an SPC the image plane of the sensor is placed at the distance of the micro lens focal length fs behind the MLA. As initially shown in a research publication by Hahne and Aggoun [16], chief rays can be traced from the image plane of the sensor into the real object space by taking advantage of the thin lens equation [17]

$$\frac{1}{f_s}=\frac{1}{a_s}+\frac{1}{b_s} \tag{5}$$
Given the constraint that the image distance bs of the micro lens equals its focal length fs (bs = fs), it is mathematically demonstrated in
$$0=\lim_{a_s\to\infty}\left(\frac{1}{a_s}\right) \tag{6}$$
that the basic idea of this ray tracing approach relies on the fact that in geometrical optics collimated light rays converge on the focal point of a convex lens. At this stage, it can be assumed that focused spots behind the MLA are of an infinitesimal size. As a result, chief rays impinging on infinitesimal spots at discrete positions on the image plane can be traced through the micro and main lens as illustrated in Fig. 2. Therein, positions within a micro image are described by u having a consecutive index i ∈ ℤ in the range of [−(m̂ − 1)/2, (m̂ − 1)/2] where m̂ represents the 1-D number of pixels within each micro image, which is considered to be consistent. On condition that m̂ is an odd number, the central micro image position can be obtained by c = (m̂ − 1)/2. When starting to count the index from the central position c, micro image locations are given by uc+i. The spacing of two adjacent positions uc+i can be seen as the pixel pitch pp in terms of a digital camera.

Fig. 2 (a) Micro lens sj and a chief ray mi. (b) Collimated light rays traveling through the main lens.

Since there is a single chief ray for each position uc+i, chief rays may be distinguished by their respective slope mi. For instance, in Fig. 2(a) chief rays, having the slope m−1, focus at uc−1 under each micro lens s indexed by j ∈ ℤ. Hence, chief rays m−1 under all micro lenses s form a collimated light beam m−1 in front of the micro lenses. Generally, the depth is represented by z and more specifically by zU for the optical axis of the main lens and zsj for each micro lens correspondingly. As depicted in Fig. 2(b), due to the behavior of collimated light, parallel rays forming the light beam mc−1 are considered to be refracted at the system’s principal planes HU1 and HU2 of the main lens U and diverge from the main lens focal plane FU. This respective point is denoted by Fi and varies along the dimensions perpendicular to z.

2.2. Refocusing image synthesis

Apart from the assumptions made earlier in this section, it is supposed that propagating light is reflected from Lambertian surfaces into the camera device with a luminous emittance M equal to the illuminance EbU. In addition, since EbU consists of spatially sampled points sj, an object plane M can be similarly described by discrete object points s′j. As seen in Eq. (4), synthesizing an image E′bU, which would have been captured by a conventional camera with an illuminance EbU at plane s, requires the selection and summation of specific values Efs[uc+i, sj]. Furthermore, a raw LF image of an SPC enables the generation of images which would have focused behind the image sensor (at distances greater than bU) without an MLA. These hypothetical images would have an illuminance E and form the refocusable LF which is compressed by the micro lenses and distributed over Efs (u, s). In order to distinguish between refocusable image planes E, EbU is substituted with a synthesized illuminance E′a, where EbU = E′0 and image planes E′a, being further away, are indexed consecutively by a.

Figure 3 depicts the principle of intersecting chief rays enabling the refocusing synthesis of LF slices. Note that bU > fU and the separation $\overline{H_{1U}H_{2U}} = 0$ in the given illustration. Tracing a pair of chief rays into object space results in an intersection at a plane Ma (e.g., M1) indicating the location where rays could have been emitted from. Hence, recovering a point M1[sj] may be accomplished by collecting and summing illuminance values among different micro images. As seen in Fig. 4, for the sake of convenience, the micro image resolution is defined to be m̂ = 3 having the micro image center at c = 1. Two respective examples are given by

$$E'_0[s_0]=E_{f_s}[u_2,s_0]+E_{f_s}[u_1,s_0]+E_{f_s}[u_0,s_0] \tag{7}$$
$$E'_1[s_2]=E_{f_s}[u_2,s_2]+E_{f_s}[u_1,s_3]+E_{f_s}[u_0,s_4] \tag{8}$$
To make it easier to follow the idea of ray tracing in the plenoptic LF, chief rays of the given examples in Eqs. (7) and (8) are highlighted in Fig. 4. Thereby, it is apparent that object points at plane M1 would have focused at E′1 behind the actual image plane Efs.

Fig. 3 Ray tracing intersection model (Ref. [16], Fig. 2).

Fig. 4 Ray tracing intersection examples demonstrating Eqs. (7) and (8).

Investigating all possible synthesis combinations allows for reverse-engineering an algorithmic formula providing FISs E′a from the raw data Efs which is given by

$$E'_a[s_j]=\sum_{i=-c}^{c}E_{f_s}\!\left[u_{\hat{m}-1-c+i},\,s_{j+a(c-i)}\right],\quad\forall a \tag{9}$$
As suggested by Hahne and Aggoun [16], a translation from [uc+i, sj] coordinate space to a single index [xk], as commonly designated in a digital image, is given by
$$k=j\times\hat{m}+c+i \tag{10}$$
enabling a conversion of the synthesis examples in Eqs. (7) and (8) to
$$E'_0[s_0]=E_{f_s}[x_2]+E_{f_s}[x_1]+E_{f_s}[x_0] \tag{11}$$
$$E'_1[s_2]=E_{f_s}[x_8]+E_{f_s}[x_{10}]+E_{f_s}[x_{12}] \tag{12}$$
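
To make the synthesis rule of Eq. (9) and the index conversion of Eq. (10) concrete, the following Matlab sketch applies them to a 1-D line of raw samples. It is a minimal illustration rather than the authors' implementation; the variable names and the random test data are placeholders, and out-of-range micro lenses are simply skipped.

```matlab
% 1-D refocusing synthesis following Eq. (9): a minimal sketch.
% Efs holds the raw samples Efs[u, s]: row = micro lens index j (0-based
% in the text, shifted by +1 for Matlab), column = micro image position u.
mHat = 3;                      % micro image resolution (odd)
c    = (mHat - 1)/2;           % central micro image position
numLenses = 8;
Efs  = rand(numLenses, mHat);  % placeholder raw light field line

a  = 1;                        % refocusing slice index
Ea = zeros(numLenses, 1);      % synthesized slice E'_a[s_j]
for j = 0:numLenses-1
    for i = -c:c
        u = (mHat - 1) - c + i;        % micro image position, cf. Eq. (9)
        s = j + a*(c - i);             % micro lens to read from
        if s >= 0 && s < numLenses
            Ea(j+1) = Ea(j+1) + Efs(s+1, u+1);
        end
    end
end
% single-index access per Eq. (10): k = j*mHat + c + i (0-based)
```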

3. Distance estimation of focused image slices

Given the background of the SPC and the refocusing synthesis, this section turns to the estimation of the FIS distance. In particular, the first subsection aims to develop a distance prediction based on chief rays at pixel centers whereas the subsequent part considers the pixel width and optical resolution limit to estimate the depth of an FIS.

3.1. Central position

Closer inspection of the syntheses in Eqs. (7) and (8) reveals that selecting only two of the merged combinations Efs[uc+i, sj] and tracing the related chief rays back to their intersection in object space suffices to acquire the metric distance of a respective slice a. By subdividing the path of each chief ray into the intervals at which refractions can be seen to occur, a ray path is mathematically described as a composition of linear equations. Referring to Fig. 2, rays converging at the same relative position uc+i under each micro lens sj have the same incidence angle, in other words the same slope mi, which is given by

$$m_i=\frac{\Delta u\times p_p}{f_s} \tag{13}$$
where Δu = ucuc+i and uc denotes the position of a central focal point under a micro lens. By introducing ni,j to be the x-intercept at the image plane which is defined as
$$n_{i,j}=j\times p_{\hat{m}}+\frac{p_{\hat{m}}}{2}-\Delta u\times p_p \tag{14}$$
where pm̂ denotes the pitch of the micro lenses s indexed by j starting from the bottom lens. A linear function f̂i,j(z) representing a chief ray from the micro lens image plane to the optical center of the main lens U is thus formed by
$$\hat{f}_{i,j}(z)=m_i\times z+n_{i,j},\quad z\in[0,U] \tag{15}$$
In paraxial ray tracing, a single ray can be seen to be geometrically refracted at the principal plane H of a thin lens. Depending on the slope mi, the micro image position Δu and its parent micro lens sj, rays intersect the optical center at point Ui,j which is given by
$$U_{i,j}=m_i\times(f_s+b_U)+n_{i,j} \tag{16}$$
Since rays converging at Δu under s are parallel to each other in the interval of the optical centers from s to U, they also converge in front of the main lens at its focal plane FU. In using mi as the representation of the angle, the intersection of corresponding light rays with focal plane FU is derived from
$$F_i=m_i\times f_U \tag{17}$$
so that each ray focusing at Δu under all sj originates in Fi. By having calculated Fi and the individual point Ui,j, the slope qi,j of each light ray in object space is deduced from
$$q_{i,j}=\frac{F_i-U_{i,j}}{f_U} \tag{18}$$
In case there is no object surface at Fi where light rays can be reflected from, the corresponding ray emanates from an object at a greater distance than Fi. Given qi,j, the path of Δu can be traced back to infinity by the function
$$\hat{f}_{i,j}(z)=q_{i,j}\times z+U_{i,j},\quad z\in[U,\infty) \tag{19}$$
The distance from U to a point of the respective FIS, denoted by za, can be algebraically obtained by solving
$$\hat{f}_{-c,e}(z)=\hat{f}_{a-c,\,e-a}(z),\quad z\in[U,\infty) \tag{20}$$
where f̂−c,e(z) is an exemplary reference chief ray having a slope m−c since (i, j) = (−c, e). Parameter e may be an arbitrary value to represent a valid micro lens se. To get an intersecting chief ray, the second position may be given by (i, j) = (a − c, e − a). The solution za of Eq. (20) gives the intersection of two chief rays in object space and merely the distance from the main lens U to the corresponding FIS. When considering the impact of a thick lens or even a lens system, separations $\overline{H_{1s}H_{2s}}$ and $\overline{H_{1U}H_{2U}}$ between first and secondary principal planes are simply added to the resulting intersection distance. Note that $\overline{H_{1U}H_{2U}}$ may be negative in case principal planes are interchanged. The final distance da between the image plane of the sensor and the refocusing plane is therefore
$$d_a=f_s+\overline{H_{1s}H_{2s}}+b_U+\overline{H_{1U}H_{2U}}+z_a \tag{21}$$
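
The procedure of this subsection can be condensed into a few lines of Matlab. The sketch below traces the two chief rays of Eq. (20) for the slice a = 1 and solves for za; Eq. (21) then yields da. The micro lens and pixel values are those used in Section 5, whereas the main lens focal length, its spacing bU and the principal plane separations are placeholders, since the exact Table 1 data are not reproduced in the text.

```matlab
% Central FIS distance d_a (Subsection 3.1): a minimal sketch.
fs = 2.749;  pp = 0.01;  pm = 0.25;   % micro lens f, pixel pitch, micro lens pitch [mm]
fU = 100;    bU = fU;                 % main lens focal length and spacing (placeholders)
H1H2s = 0;   H1H2U = 0;               % principal plane separations (placeholders)
mHat = 25;   c = (mHat - 1)/2;
a = 1;  e = 10;                       % refocusing slice and reference micro lens

ij = [-c, e;  a-c, e-a];              % [i, j] of both chief rays, cf. Eq. (20)

U = zeros(1,2);  q = zeros(1,2);
for r = 1:2
    i  = ij(r,1);  j = ij(r,2);
    du = -i;                          % Delta u = u_c - u_{c+i}
    m  = du*pp/fs;                    % Eq. (13)
    n  = j*pm + pm/2 - du*pp;         % Eq. (14)
    U(r) = m*(fs + bU) + n;           % Eq. (16)
    F    = m*fU;                      % Eq. (17)
    q(r) = (F - U(r))/fU;             % Eq. (18)
end
za = (U(2) - U(1))/(q(1) - q(2));     % intersection in object space, Eq. (20)
da = fs + H1H2s + bU + H1H2U + za     % Eq. (21)
```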

3.2. Depth of field of a focused image slice

Due to the restriction of tracing chief rays back to the object space, only an infinitesimally thin depth range of an FIS is calculated. Contrary to the limitation on chief rays, light beams rather intersect each other in object space leading to a Depth Of Field (DOF) for each FIS. The DOF of an FIS is even larger as focused points on the sensor are not of an infinitesimally small width, but have some extent due to the size of a pixel. Apart from the pixel pitch, optical elements of an imaging system also restrict the resolution limit (Δ)min by the separation of the Airy disks. In Hecht [17], it has been stated that the separation should at least equal the radius r1 of an Airy disk center peak in order to distinguish between adjacent Airy disks. In general, this is approximately given by

$$(\Delta)_{\min}\cong\frac{1.22\,f\lambda}{A} \tag{22}$$
where λ denotes the wavelength of light. In subsequent explanations, it is supposed that (Δ)min ≤ pp provided that the pixel pitch determines the resolution limit.
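
As a quick plausibility check with the micro lens parameters used later in Section 5 (fs = 2.749 mm, λ = 632.8 nm) and the micro lens pitch p = 250 μm taken as the aperture A — an assumption made here, since the effective micro lens aperture is not stated explicitly — Eq. (22) yields a limit below the pixel pitch pp = 10 μm:

```matlab
% Resolution limit of a single micro lens, Eq. (22): a minimal sketch.
% Assumption: the aperture A is approximated by the micro lens pitch.
f      = 2.749;       % micro lens focal length [mm]
lambda = 632.8e-6;    % wavelength [mm]
A      = 0.25;        % aperture, approximated by the micro lens pitch [mm]
pp     = 0.01;        % pixel pitch [mm]

dMin = 1.22*f*lambda/A;       % approx. 8.5e-3 mm, i.e. below pp
limitedByPixels = dMin <= pp  % true: the pixel pitch sets the resolution limit
```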

Assuming that locations uc+i represent pixel centers, the approach described in the previous Subsection 3.1 solely reveals the central position of a respective FIS. Thus, in order to approximate the depth of an FIS, pixel borders need to be the subject of investigation. Because the illuminance samples Efs are considered to have some width, DOF ray positions of a pixel Efs [uc+i, sj] are divided into three types:

  • inner rays at pixel borders towards the micro image center
  • outer rays at the pixel borders closer to the micro image edge
  • central rays at the pixel center
Recalling the example in Eq. (8), pixels forming E′1[s2] are the subject of investigation. Given this example, Fig. 5 demonstrates that an intersection of inner rays, depicted in red color, occurs at the shorter distance da− whereby intersecting outer rays in black color give the distance da+. Since all rays between outer and inner rays including central chief rays intersect at a distance in the range of da− and da+, the DOF, denoted by Da, of an FIS at distance da is then
$$D_a=d_{a+}-d_{a-} \tag{23}$$

Fig. 5 Ray tracing intersection model indicating the DOF for FIS a = 1.

The tracing of inner and outer rays is achieved in a similar way to the central chief ray tracing. Taking into consideration that rays at the pixel border are separated by pp/2 from the position uc+i, their respective slope mi± to the optical micro lens center is

$$m_i^{\pm}=\frac{\Delta u\times p_p\pm p_p/2}{f_s} \tag{24}$$
As seen in Fig. 5, inner and outer rays travel with a slope mi± from a micro lens border si,j± which is given by
$$s_{i,j}^{\pm}=j\times p_{\hat{m}}+p_{\hat{m}}/2\pm p_{\hat{m}}/2 \tag{25}$$
intersecting a main lens principal plane at Ui,j± which is obtained by
$$U_{i,j}^{\pm}=m_i^{\pm}\times b_U+s_{i,j}^{\pm} \tag{26}$$

Following the suggestion made in Subsection 3.1, collimated rays reach FU at Fi± yielding

$$F_i^{\pm}=m_i^{\pm}\times f_U \tag{27}$$
Subsequently, the object space slope qi,j± is calculated from previous intersections such that
$$q_{i,j}^{\pm}=\frac{F_i^{\pm}-U_{i,j}^{\pm}}{f_U} \tag{28}$$
As a result, a linear equation representing this ray in object space can be written as
$$\hat{f}_{i,j}^{\pm}(z)=q_{i,j}^{\pm}\times z+U_{i,j}^{\pm},\quad z\in[U,\infty) \tag{29}$$

Investigations into the example depicted in Fig. 5 show that traced light beams of the most distant pixel combinations Efs[uc+i, sj], which are merged in the refocusing synthesis of a particular position E′[sj], enclose the smallest area of that point in object space. In other words, the illuminance of potentially emanating light rays entering the camera device from that confined space is covered by all pixels involved to form E′[sj]. In contrast, a larger space does not fulfill this condition as the illuminance of light reflected from slightly different locations is not equally measured by all those sensor pixels. Hence, the range bordered by rays of the most distant pixels ensures that the maximal effective resolution of the corresponding FIS is obtained. The selection of the most distant positions of integrated values Efs[uc+i, sj] can be described by (i, j) = (−c, a(m̂ − 1) + e) for the first ray position Efs[u0, s4], where e = 2 according to Fig. 5, and (i, j) = (c, e) for the second Efs[u2, s2] intersecting at the distance of FIS a = 1. Considering this example, depth borders lie at distances z which are obtained by the solutions of

$$\hat{f}_{0,4}^{\pm}(z)=\hat{f}_{2,2}^{\pm}(z),\quad z\in[U,\infty) \tag{30}$$
for z1− and z1+, respectively. Similar to the central refocusing distance procedure, Eq. (30) only gives the spacing between main lens principal plane and the depth range limits. Consequently, the path from sensor to main lens has to be added in order to get da± as given by
$$d_{a\pm}=f_s+\overline{H_{1s}H_{2s}}+b_U+\overline{H_{1U}H_{2U}}+z_{a\pm} \tag{31}$$
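
Analogously, the DOF limits can be sketched in Matlab by intersecting border rays according to Eqs. (24)–(31). The main lens values are placeholders again, and since Fig. 5 is not reproduced here, the assignment of pixel and micro lens borders to the inner and outer rays is an assumption made for illustration.

```matlab
% DOF limits d_a- and d_a+ of an FIS (Subsection 3.2): a minimal sketch.
fs = 2.749;  pp = 0.01;  pm = 0.25;   % micro lens f, pixel pitch, micro lens pitch [mm]
fU = 100;    bU = fU;                 % main lens focal length and spacing (placeholders)
H1H2s = 0;   H1H2U = 0;               % principal plane separations (placeholders)
mHat = 25;   c = (mHat - 1)/2;
a = 1;  e = 10;

IJ = [-c, a*(mHat-1)+e;               % most distant merged pixel positions,
       c, e           ];              % rows are [i, j], cf. Eq. (30)

% border sign choices: column 1 = inner rays, column 2 = outer rays
% (assumed assignment of pixel and lens borders in place of Fig. 5)
sgnPix  = [-1, +1;  +1, -1];          % pixel border offset +/- pp/2, Eq. (24)
sgnLens = [+1, -1;  -1, +1];          % micro lens border +/- pm/2,   Eq. (25)

da = zeros(1,2);                      % [d_a-, d_a+]
for lim = 1:2
    U = zeros(1,2);  q = zeros(1,2);
    for r = 1:2
        i  = IJ(r,1);  j = IJ(r,2);  du = -i;
        m  = (du*pp + sgnPix(r,lim)*pp/2)/fs;    % Eq. (24)
        s  = j*pm + pm/2 + sgnLens(r,lim)*pm/2;  % Eq. (25)
        U(r) = m*bU + s;                         % Eq. (26)
        F    = m*fU;                             % Eq. (27)
        q(r) = (F - U(r))/fU;                    % Eq. (28)
    end
    za = (U(2) - U(1))/(q(1) - q(2));            % solve Eq. (30)
    da(lim) = fs + H1H2s + bU + H1H2U + za;      % Eq. (31)
end
Da = da(2) - da(1)                               % DOF of the slice, Eq. (23)
```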

4. Baseline of virtual lenses

As carried out by Ng [7], rearranging each illuminance value Efs having uc+i in common, in order to create a single image E′uc+i, provides a virtual viewpoint just as if it had been taken with a real camera array. A 1-D viewpoint image is extracted by

$$E'_{u_{c+i}}[s_j]=E_{f_s}[u_{c+i},s_j] \tag{32}$$
whereas the given synthesis equation can certainly be used for the vertical dimension in the same manner. Due to that algorithm, the effective resolution of a viewpoint image acquired by an SPC corresponds to the number of micro lenses, and the number of virtual lenses thus equals the micro image resolution m̂.

In the absence of aberrations, the optical center of a virtual lens is optimally at the best focus of rays mi converging at plane FU, as can be seen in the ray tracing intersection model in Fig. 3. Considering the pixel size, however, results in a larger optical center having a width w. Each virtual lens has a virtual optical axis z′i represented by a chief ray of the respective viewpoint. As a consequence, the virtual lens plane along w and the viewpoint’s virtual optical axis are at a right angle with respect to each other. Hence, when requiring the virtual optical axis of a lens to be parallel to the main optical axis z, the virtual optical axes of the viewpoints have to be collimated. This requirement is fulfilled when the spacing between main lens and MLA amounts to the focal length (fU = bU). The tilt angle Φi of a virtual lens is given by

$$\Phi_i=\arctan\!\left(\frac{m_i\times(b_U-f_U)}{f_U}\right) \tag{33}$$
As shown in Figs. 6(a)–6(c), the tilt angle setting of virtual lenses depends on bU.

Fig. 6 (a–c) Tilt angles Φi of the virtual lenses Fi; (d) Illustration of baseline ΔBg between virtual optical axes z′i.

When seeking Fi from Eq. (17) in Fig. 3, it may be obvious that the optical center of a virtual lens is located at position (FU, Fi). As a result, the horizontal positions of two virtual optical centers Fi provide the corresponding baseline ΔBg by

$$\Delta B_g=\left|F_i-F_{i+g}\right| \tag{34}$$
where g serves as a gap between virtual lenses. Figure 6(d) illustrates different gaps g while having the main lens focused to infinity (bU = fU). Using the proposed approach, a baseline prediction of any pair of virtual lenses is possible. For example, a gap of g = 1 indicates direct adjacency among viewpoints. The width w of a virtual lens equals the smallest baseline (w = ΔB1) and can be visualised by tracing inner and outer rays as shown in Subsection 3.2.
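
For the case bU = fU sketched in Fig. 6(d), the virtual optical centres, tilt angles and baselines of Eqs. (17), (33) and (34) reduce to a few vectorised lines of Matlab; the main lens focal length below is a placeholder value.

```matlab
% Virtual lens positions, tilt angles and baselines (Section 4): a minimal sketch.
fs = 2.749;  pp = 0.01;        % micro lens focal length and pixel pitch [mm]
fU = 100;    bU = fU;          % main lens placeholder; bU = fU gives parallel axes
mHat = 25;   c = (mHat - 1)/2;

i   = -c:c;                    % viewpoint indices
du  = -i;                      % Delta u = u_c - u_{c+i}
m   = du*pp/fs;                % chief ray slopes, Eq. (13)
F   = m*fU;                    % virtual optical centres on plane F_U, Eq. (17)
Phi = atan(m*(bU - fU)/fU);    % tilt angles, Eq. (33); all zero here

g  = 1;                                % gap between virtual lenses
dB = abs(F(1:end-g) - F(1+g:end))      % baselines Delta B_g, Eq. (34)
```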

5. Model validation

5.1. Implementation

In order to evaluate the suggested estimations, the ray tracing approach described in Section 3 has been implemented in Matlab. For verification purposes, the well-known complex ray tracing simulation tool Zemax is used. A comparison requires the specification of a realistic SPC to be modeled equally in both environments. In Zemax, light rays have been emitted with a wavelength λ = 632.8 nm. For simplicity, the arrangement of the micro lenses is specified to be rectangular. Micro lenses are of a plano-convex shape having a focal length fs = 2.749 mm and a thickness ts = 1.1 mm. The MLA is made from a glass substrate with a refractive index of n632.8 = 1.515. By applying

$$R_s=f_s\times(n-1) \tag{35}$$
derived from Hecht [17], a radius of curvature Rs ≈ 1.416 mm is calculated. The micro lens pitch is defined to be p = 250 μm. Modelling the specified micro lens in Zemax, it turns out that $\overline{H_{1s}H_{2s}} = 0.374$ mm. Therefore, the distance ds from the back vertex Vs2 to the image plane is given by
$$d_s=f_s-\left(t_s-\overline{H_{1s}H_{2s}}\right) \tag{36}$$
and amounts to 2.023 mm. The image sensor of the camera has a pixel pitch pp = 0.01 mm which also describes the spacing between uc and its adjacent position uc−1. Referring to Eq. (13), this yields a slope mc−1 ≈ 0.0036, so that arctan(mc−1) ≈ 0.206° is the chief ray angle entered in Zemax. Spherical aberrations are suppressed as much as possible by choosing a Double Gauss objective which can be treated as a thick lens with two principal plane locations. Therefore, a Double Gauss 28 degree field objective, denoted by f100, was taken from the Zemax sample library. Alternatively, to examine the impact of main lens parameters, a 28 – 200 mm f/3.5 – 5.6 zoom objective from Nikon, represented by f200, is chosen, which can be found online as a Zemax file containing the entire lens model [18]. The determination of principal plane positions and exact focal lengths has been accomplished by using Zemax functions. Table 1 summarizes the main lens data essential for the ray tracing approach.
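
The scalar design values quoted above follow directly from Eqs. (35), (36) and (13) and can be reproduced with a few lines of Matlab; only the figures stated in this subsection are used.

```matlab
% Reproducing the MLA design values of Subsection 5.1: a minimal sketch.
fs = 2.749;       % micro lens focal length [mm]
ts = 1.1;         % micro lens thickness [mm]
n  = 1.515;       % refractive index of the substrate at 632.8 nm
H1H2s = 0.374;    % principal plane separation from the Zemax model [mm]
pp = 0.01;        % pixel pitch [mm]

Rs = fs*(n - 1)            % radius of curvature, Eq. (35): approx. 1.416 mm
ds = fs - (ts - H1H2s)     % back vertex to image plane, Eq. (36): 2.023 mm
m  = pp/fs;                % slope of the adjacent chief ray, Eq. (13) with Delta u = 1
chiefRayAngle = atand(m)   % chief ray angle entered in Zemax [deg]
```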

Table 1. Objective lens parameters.

Figures 7(a) and 7(b) depict screenshots of the distance and depth measurements in Zemax based on the objective lens f100 while Fig. 7(c) shows parts of the lens where chief rays reveal baseline results ΔB12 and ΔB24 in object space.

Fig. 7 Zemax screenshots: (a–b) Intersecting light beams at distances da± considering the pixel pitch; (c) Chief rays traveling through Double Gauss objective indicating the baseline ΔBg.

Figure 8 shows a plot of the Matlab ray tracing implementation according to the suggestion made in Section 3. Measurements have shown that calculations in Matlab 7.11.0.584 (R2010b) on an Intel Core i7-3770 CPU @ 3.40 GHz require 0.05 s to 0.10 s for the DOF of a single FIS. In contrast, the more laborious design in Zemax includes modelling lenses and measuring distances which can take up to a few hours for an experienced optical engineer.

Fig. 8 Matlab screenshots: (a) Paraxial ray tracing based on refraction at principal planes; (b) Close up of rays under a micro lens; (c) Close up of the ray intersection in object space.

When implementing the ray tracing equations, problems arise if the micro image resolution m̂ is inconsistent due to the dimensioning of pp and p. One way to prevent this is to specify pp and p appropriately in advance. Alternatively, one can solve this problem afterwards by resampling the entire raw image, aiming to create a homogeneous micro image resolution. Given the example of having a pixel pitch pp = 9 μm and a micro lens pitch p = 250 μm, the raw image may be downsampled by factor 1.1̄ yielding m̂ = 25. In case m̂ is even, micro images can be cropped leaving out a pixel in each micro image.
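
A hedged sketch of this resampling rule for the quoted example (pp = 9 μm, p = 250 μm) is given below; the target resolution is chosen here as m̂ = 25, which reproduces the stated downsampling factor of 1.1̄.

```matlab
% Enforcing a consistent micro image resolution (Subsection 5.1): a minimal sketch.
pp = 0.009;  p = 0.25;          % pixel pitch and micro lens pitch [mm]
mTarget = 25;                   % desired (odd) micro image resolution
factor  = (p/pp)/mTarget        % downsampling factor, here 1.111... = 1.1bar
ppNew   = pp*factor;            % effective pixel pitch after resampling (0.01 mm)
mHat    = round(p/ppNew)        % consistent micro image resolution (25)
```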

5.2. Results

A data comparison of predicted and simulated distances is shown in Table 2. The deviation ERR of the estimated depth of an FIS is obtained by

$$\mathrm{ERR}=\frac{\text{prediction}-\text{simulation}}{\text{prediction}}\times 100. \tag{37}$$

Table 2. Comparison of refocusing distances da and da± with respect to the image sensor.

Note that for the respective experiment the separation between the main lens and the MLA has been set to the focal length of the main lens (bU = fU). Because of that, only inner rays intersect at FIS a = 0, since central rays travel parallel to each other and outer rays even diverge. This means that FIS a = 0 starts at distance d0− and is all-in-focus from that boundary to infinity. A visualization of the measured values is given in Fig. 9, although d0− has been left out for the sake of clarity in the diagram. As seen in the chart, the DOF of an FIS shrinks with increasing a. Interestingly, detailed analysis also indicates that there are uncovered gaps between da±. However, these gaps may be retrieved by expanding the numerical range of FISs from a ∈ ℤ to a ∈ ℝ+, so that the DOF between integer-numbered FISs is acquired. Referring to Eq. (9), it is apparent that this attempt inherently implies an interpolation of micro images upscaling the spatial resolution at the same time.

Fig. 9 1-D plot of (a) predicted and (b) simulated distances d.

With an error of less than 0.5% at da± ≈ 300 mm, the suggested estimation approach is sufficiently accurate to contribute to a broader objective such as depth map generation. Using the results of the proposed distance prediction in combination with a spatial frequency analysis and object segmentation, a novel and competitive depth map method can be developed.

Large errors affect the imaging performance in a way that an object surface, which is supposed to be in focus according to the predictions, may be perceived as slightly blurred. This occurs because the object surface is outside the DOF range, but still very close to its boundary. This observation merely applies to large errors, thus for objects being far away from the capture device. To give an example from Table 2, the paraxial approach predicts an image slice a = 0 to be in focus from distance d0− = 3483.174 mm to infinity; however, the FIS actually has its maximal effective resolution from distance d0− = 3863.248 mm to infinity. Hence, a predicted synthesized DOF boundary at approximately 3.5 m deviates by 40 cm at the maximum.

When undertaking a second experiment, focal length parameters fs, fU and the lens separation distance bU have been changed to investigate their impact on the captured LF. Therefore, another micro lens is introduced with the same specification as the previous one apart from the focal length fs = 1.623 mm. Estimated results are listed in Table 3 and depicted in Fig. 10. Table 3 is revealing in several ways. First, it is obvious that closer focusing (in other words shifting the objective lens so that bU > fU) moves the furthest FIS a = 0 towards the camera device. Secondly, decreasing fs shifts all FISs away from the camera. In addition, it is evident that the larger the value of fU, the further away the FISs are from the camera and the more space is between them. Hence, it is feasible to shrink and expand the depth of the captured LF which can be useful for photography as it keeps the flexibility to adjust the LF boundaries to landscape or portrait photos, respectively. Furthermore, the distance prediction helps to determine optical parameters of a refocusing-capable SPC in advance.

Table 3. Refocusable distances da with different lens settings for fs and bU ≥ fU.

Fig. 10 Impact of varying lens parameters on the refocusing distance da.

A third experiment aims to validate the baseline estimation presented in Section 4. Given a plenoptic setup composed of the specified MLA with fs = 2.749 mm and the f100 objective, virtual lens positions are provided in Table 4. From the data, it is seen that there were no significant differences between the baseline prediction and simulation, which confirms the suggested concept. An interesting observation is that the baseline ΔBg does not exceed 10 mm which is relatively small compared to human stereo vision (ΔB ≈ 65 mm [19]). This finding confirms the association between plenoptic cameras and microscopy in previous research articles [8,9] as microscopes benefit from the small baseline. However, to apply an SPC in the field of stereo vision, ΔB needs to approach 65 mm. For example, a theoretical baseline ΔB180 = 65.161 mm may be accomplished by p = 2 mm, pp = 0.01 mm, fs = 2.749 mm and main lens f100 producing a decrease in spatial viewpoint image resolution by a factor of 8 in case the sensor and MLA format remain the same. In other words, a large aperture A, a large micro lens pitch p and large sizes in MLA and image sensor will provide larger baselines.
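
The quoted baselines can be checked against Eqs. (13), (17) and (34): with bU = fU, adjacent virtual optical centres are spaced by pp·fU/fs, so ΔBg grows linearly with g. Taking the predicted FU depth of Table 4 as fU — an assumption made here for the check — this reproduces the stated ΔB180.

```matlab
% Consistency check of the baseline figures, assuming bU = fU.
fs = 2.749;  pp = 0.01;  fU = 99.514;   % [mm]; fU taken from Table 4
dB1   = pp*fU/fs                        % smallest baseline, approx. 0.362 mm
dB180 = 180*dB1                         % approx. 65.16 mm, cf. Delta B_180 = 65.161 mm
```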

Table 4. Virtual lens positions where fU = f100, bU1 = fU and bU2 = fU + 20 mm. The predicted depth position FU = 99.514 mm is given with respect to the principal plane H1U.

In the ray tracing intersection model, the SPC is viewed as a paraxial optical system. Hence, spherical, coma and chromatic aberrations have not been taken into consideration. However, the real ray tracing in Zemax does take aberrations into account, and thus it is assumed that the errors represent the deviation due to the paraxial approximation disregarding optical aberrations.

6. Conclusions

Apart from distance predictions based on a camera array, to the best of our knowledge the present work is the very first method providing object space distances of FISs and positions of virtual viewpoint cameras. Given the plenoptic ray tracing model, experimental work using objective lenses allows for the prediction of all available LF slices with a relative error of less than 0.5% at distances of 300 mm and less than 11% at distances of 3.5 m, whereas baseline estimations of virtual views show no significant difference. Thereby, the proposed method supports the specification of the optical parameters in advance. Moreover, the major advantage of the proposed estimations over the complex simulation is an instant computation able to offer several results simultaneously. Implementing such an innovation in an SPC would provide a novel feature. Beyond that, the presented research may be a starting point to develop and evaluate an alternative depth map technique based on distances of the FISs rather than the commonly used disparity analysis.

Acknowledgments

We are grateful to Hector Navarro Fructuoso for the inspiring discussions as well as to Amy-Grace Douglas and Lascelle Mauricette for their helpful suggestions. This research was supported in part by the University of Bedfordshire and the EU under the ICT program as Project 3D VIVANT under EU-FP7 ICT-2010-248420.

References and links

1. F. Ives, “US patent 725,567,” (1903).

2. G. Lippmann, “Épreuves réversibles donnant la sensation du relief,” J. Phys. Théor. Appl. 7, 821–825 (1908).

3. E. H. Adelson and J. R. Bergen, “The plenoptic function and the elements of early vision,” in Computational Models of Visual Processing, M. S. Landy and J. A. Movshon, eds. (MIT Press, 1991), pp. 3–20.

4. E. H. Adelson and J. Y. Wang, “Single lens stereo with a plenoptic camera,” IEEE Trans. Pattern Anal. Mach. Intell. 14, 99–106 (1992).

5. M. Levoy and P. Hanrahan, “Light field rendering,” in Proceedings of ACM SIGGRAPH (1996), pp. 31–42.

6. A. Isaksen, L. McMillan, and S. J. Gortler, “Dynamically reparameterized light fields,” in Proceedings of ACM SIGGRAPH (2000), pp. 297–306.

7. R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Stanford Tech. Report CTSR (2005), pp. 1–11.

8. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light field microscopy,” in Proceedings of ACM SIGGRAPH (2006), pp. 924–934.

9. M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, “Wave optics theory and 3-D deconvolution for the light field microscope,” Opt. Express 21, 25418–25439 (2013).

10. A. Lumsdaine and T. Georgiev, “The focused plenoptic camera,” in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2009), pp. 1–8.

11. T. Georgiev, K. C. Zheng, B. Curless, D. Salesin, S. Nayar, and C. Intwala, “Spatio-angular resolution tradeoff in integral photography,” in Proceedings of Eurographics Symposium on Rendering (2006), pp. 263–272.

12. C. Wu, M. McCormick, A. Aggoun, and S. Y. Kung, “Depth Mapping of Integral Images Through Viewpoint Image Extraction With a Hybrid Disparity Analysis Algorithm,” Journal of Display Technology 4, 101–108 (2008).

13. T. E. Bishop, S. Zanetti, and P. Favaro, “Light field superresolution,” in IEEE International Conference on Computational Photography (2009).

14. C. Perwass and L. Wietzke, “Single-lens 3D camera with extended depth-of-field,” in Human Vision and Electronic Imaging XVII, Proc. SPIE 8291, 829108 (2012).

15. T. Georgiev, A. Lumsdaine, and S. Goma, “Plenoptic Principal Planes,” in Imaging and Applied Optics, OSA Technical Digest (CD) (Optical Society of America, 2011), paper JTuD3.

16. C. Hahne and A. Aggoun, “Embedded FIR filter design for real-time refocusing using a standard plenoptic video camera,” in Digital Photography X, Proc. SPIE 9023, 902305 (2014).

17. E. Hecht, Optics, Fourth Edition (Addison Wesley, 2001).

18. N. Konidaris, “Optical Prescriptions in Zemax,” (2014), https://sites.google.com/site/nickkonidaris/prescriptions.

19. Y. Pritch, M. Ben-Ezra, and S. Peleg, “Automatic disparity control in stereo panoramas (OmniStereo),” in Proceedings of IEEE Workshop on Omnidirectional Vision (2000), pp. 54–61.
