Abstract

For unfocused plenoptic imaging systems, metric calibration is generally mandatory to achieve high-quality imaging and metrology. In this paper, we present an explicit derivation of an unfocused plenoptic metric model associating a measured light field in the object space with a recorded light field in the image space to conform physically to the imaging properties of unfocused plenoptic cameras. In addition, the impact of unfocused plenoptic imaging distortion on depth computation was experimentally explored, revealing that radial distortion parameters contain depth-dependent common factors, which were then modeled as depth distortions. Consequently, a complete unfocused plenoptic metric model was established by combining the explicit metric model with the imaging distortion model. A three-step unfocused plenoptic metric calibration strategy, in which the Levenberg-Marquardt algorithm is used for parameter optimization, is correspondingly proposed to determine 12 internal parameters for each microlens unit. Based on the proposed modeling and calibration, the depth measurement precision can be increased to 0.25 mm in a depth range of 300 mm, ensuring the potential applicability of consumer unfocused plenoptic cameras in high-accuracy three-dimensional measurement.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Regular cameras integrate a light beam emitted from an object point on a pixel area of an image sensor and can only detect two-dimensional (2D) optical signals. The 2D optical signals cannot retain the angular information of light rays, although this information is highly coupled with the scene depth and geometric structure. In comparison, light field imaging, as an advanced technology, can provide four-dimensional (4D) spatio-angular information of light rays [1,2] and therefore has attracted increasing attention and research interest recently [3–9].

The plenoptic camera concept involving the addition of a pinhole array or microlens array (MLA) was developed to record light fields more than a century ago by Ives [10] and Lippmann [11]. Nowadays, MLA-based plenoptic cameras are the most extensively used light field imaging devices because of the rapid growth of digital photography and microfabrication. An unfocused plenoptic camera (UPC, i.e., plenoptic 1.0), in which an MLA is placed in the image plane of the main lens to distinguish the directions of light rays emitted from an object point, was proposed by Adelson and Wang [12] and further exploited by Ng et al. [13]. This type of plenoptic camera became the first portable plenoptic camera. However, there is a trade-off between the spatial and angular resolutions: the higher the angular resolution, the lower the spatial resolution. To increase the spatial resolution, Lumsdaine and Georgiev designed a focused plenoptic camera (FPC, i.e., plenoptic 2.0) by using an MLA for secondary imaging [14]. This device has a low angular resolution.

The angular information in a recorded light field enables plenoptic cameras to achieve digital refocusing and viewpoint shifting; therefore, the recorded light field can be utilized to recover the scene depth and geometric structure in the image space. However, the recovered information cannot reflect actual dimensions in the object space. Plenoptic cameras can be used, for instance, in photogrammetry and industrial inspection, provided that metric calibration is performed to set up the metric relationship between the object and image spaces. Although some metric calibration methods exist, most of them have been developed for FPCs. In this paper, we focus on the development of plenoptic metric modeling and calibration for UPCs to achieve plenoptic three-dimensional (3D) measurement.

1.1. Related work

FPCs with microlens units of different focal lengths have been manufactured by Raytrix and targeted at industrial applications and scientific research [15]. An FPC acts like a micro camera array, in which so-called virtual depths are estimated and converted into metric depths. Recently, Johannsen et al. presented the first calibration method for the Raytrix FPC to calibrate a 15-parameter camera model [16]. This model contains depth distortion originating from the Petzval field curvature, aside from the widely used lateral distortion [17]. Heinze et al. extended the model of Johannsen et al. by assuming microlens units with different focal lengths to have different distances between the MLA and image sensor [18]. More recently, Zeller et al. derived a new depth distortion model according to depth estimation theory for the Raytrix FPC [19], which was used for visual odometry [20]. Simultaneously, Sardemann and Maas completed depth determination accuracy analysis for the Raytrix FPC in a range of up to 100 m [21].

Meanwhile, Lytro UPCs are mostly utilized in image processing and visualization applications in the consumer market. A UPC uses depth cues, e.g., blur and disparity, provided by the peculiar structure of light fields, to estimate depth maps without camera calibration. However, such depth maps depict the depth perceived in the image space rather than that measured in the object space. In 2013, Dansereau et al. employed ray transfer matrices to build the first mathematical model that consisted of 15 parameters for Lytro UPCs [22]. Instead of point features, Bok et al. presented a method of extracting line features from raw light field data to accomplish camera calibration [23]. Recently, Li et al. proposed a two-step method of obtaining the metric depth and lateral dimensions, respectively [24]. The depth dimension was estimated from the refocusing property of light fields. More recently, Chen and Pan also presented a two-step method, which differed from the method of Li et al. in that the depth was estimated based on parallax maps of light fields [25]. With this method, displacement measurements were conducted by using digital speckle correlation.

In previous works by the present authors, light rays from a measured light field in the object space were calibrated for multi-view 3D reconstruction [26]. Furthermore, the depth metric relationship between the object and image spaces was implicitly approximated using monotonic polynomials, resulting in light field 3D measurement for UPCs [27].

1.2. Contributions of this work

The work described in this paper unifies and extends the previous works [26,27]. The major contributions are as follows:

Firstly, we derived an explicit unfocused plenoptic metric model from associating a measured light field in the object space with a recorded light field in the image space, which physically conforms to the imaging properties of UPCs. In comparison, UPCs were treated as structure-unknown light field imaging systems in our previous work.

Secondly, we experimentally demonstrated the inconsistency of depth computations in different angular coordinates and introduced a universal radial distortion model to deal with lateral distortion. We further revealed that the radial distortion parameters involve common factors that change with depths; these factors were then modeled as depth distortions. Consequently, a new unfocused plenoptic imaging distortion model related to lateral and depth distortions was built and verified experimentally.

Thirdly, we established a complete unfocused plenoptic metric model by combining the explicit unfocused plenoptic metric model and unfocused plenoptic imaging distortion model, in which each microlens unit includes 12 internal parameters. Correspondingly, a three-step unfocused plenoptic metric calibration strategy is proposed. In this strategy, ray calibration is performed to determine four spatio-angular parameters and the metric depths in the object space, distortion calibration is employed to determine five distortion parameters and the corresponding nonmetric depths in the image space, and mapping calibration is utilized to determine three depth mapping parameters establishing the depth metric relationship between the object and image spaces.

2. Method

By placing a built-in MLA in the image plane of the main lens, a UPC can distinguish light rays emitted from an object point and therefore record a 4D light field in a 2D image sensor using a multiplexing technique. As illustrated in Fig. 1, we adopted two parallel planes to parameterize the light field in the image space. One plane is that of the MLA, in which each microlens unit corresponds to an object point with a specific position, defined as the spatial coordinate plane. The other plane is that of the main lens, which refracts an incident light ray in a specific direction, defined as the angular coordinate plane, with its origin in the center of the main lens. The separation between the angular and spatial coordinate planes is F. The recorded light field can be represented as LI(s,u), where LI denotes the radiant intensity, s=(s,t)T and u=(u,v)T denote the spatial and angular coordinates of the intersection points of a light ray with the two parallel planes, respectively.

Fig. 1 Light field parameterization.

In the object space, we defined a light field coordinate system with the Zf axis parallel to the optical axis of the main lens and parameterized the light field by letting a light ray pass through the XfYf plane to express its spatial position and direction. The separation between the XfYf plane and the plane of the main lens is D. The measured light field can be represented as LO(af,θf), where LO denotes the radiant intensity, af=(af,bf)T are the coordinates of the intersection point and θf=(θf,φf)T are the slopes of the angles associated with the Xf and Yf axes, respectively.

2.1. Explicit unfocused plenoptic metric model

In the image space, the recorded light field can be digitally resampled at diverse image planes, which are defined as resampling spatial coordinate planes. Suppose that an object point is focused on an image plane with a separation of βF from the angular coordinate plane, where β is a scale factor, as shown in Fig. 2. When the recorded light field is resampled at this conjugate plane, the spatial coordinates of the light rays emitted from this object point are the same in the resampling spatial coordinate plane, but different in the spatial coordinate plane. The relationship between the resampling spatial coordinate plane and the spatial coordinate plane can be obtained according to the geometric structure in Fig. 2 such that

$$\frac{s_\beta^{abs} - u^{abs}}{s^{abs} - u^{abs}} = \frac{F}{\beta F}, \tag{1}$$
where s^abs and s_β^abs are the metric spatial coordinates in the resampling spatial coordinate plane and the spatial coordinate plane, respectively, and u^abs are the metric angular coordinates in the angular coordinate plane.

Fig. 2 Light field resampling.

Generally, the recorded light field is processed in dimensions of pixels. By introducing scale factors Fsrel, Fsβrel, and Furel with the same unit of meter/pixel for the resampling spatial coordinate plane, spatial coordinate plane, and angular coordinate plane, respectively, Eq. (1) can be rewritten as

$$s_\beta F_{s_\beta}^{rel} = s F_s^{rel}\,\frac{1}{\beta} + u F_u^{rel}\left(1 - \frac{1}{\beta}\right), \tag{2}$$
where s, sβ, and u are the corresponding pixel coordinates.

A principal light ray that passes through the center of the main lens will pass through the center of a corresponding microlens unit. Thus, the principal light ray has the same pixel spatial coordinates regardless of resampling. Based on this imaging property, the following relationship between Fsrel and Fsβrel can be deduced: Fsrel=βFsβrel. Substituting this relationship into Eq. (2),

$$s_\beta = s + u\left(1 - \frac{1}{\beta}\right)\frac{F_u^{rel}}{F_{s_\beta}^{rel}}. \tag{3}$$

We define a scale ratio λ=Furel/Fsβrel and a new scale factor α satisfying sα=sβ and

$$\left(1 - \frac{1}{\alpha}\right) = \left(1 - \frac{1}{\beta}\right)\lambda \tag{4}$$
to retain a similar structure on the right side of Eq. (3) such that

$$s_\alpha = s + u\left(1 - \frac{1}{\alpha}\right). \tag{5}$$

Equation (5) represents a shear sα associated with the pixel spatial and angular coordinates in terms of the scale factor α. In this situation, α defines a shear value corresponding to depth variation, namely, a nonmetric depth in the image space. Light field resampling at an image plane with depth α is equivalent to processing a 4D shear to obtain a sheared light field such that LIα(s,u)=LI(sα,u). Equation (5) is commonly used to compute α for light field depth estimation. From Eq. (4) it can be seen that if the sampling factors Furel and Fsβrel are equal, i.e., λ=1, the depth α is equal to the scale factor β. In general, Furel and Fsβrel are different, because the actual sizes, as well as the pixel sampling ratios in the angular and spatial coordinate planes, are different. So, α is not equal to β in any image plane except the plane of the MLA satisfying α=β=1. Consequently, without determining λ, object distances cannot be computed directly from α by employing the lens imaging formula with a known focal length.
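
To make these relations concrete, the following minimal sketch (in Python/NumPy; the function names are ours and purely illustrative) evaluates Eqs. (4) and (5): it converts a nonmetric depth α into the physical scale factor β for a given scale ratio λ and computes the sheared spatial coordinate.

```python
import numpy as np

def alpha_to_beta(alpha, lam):
    """Invert Eq. (4): (1 - 1/alpha) = (1 - 1/beta) * lambda."""
    return 1.0 / (1.0 - (1.0 - 1.0 / alpha) / lam)

def shear(s, u, alpha):
    """Eq. (5): spatial coordinate s_alpha sampled in the recorded light field."""
    return s + u * (1.0 - 1.0 / alpha)

# At the MLA plane the two factors coincide (alpha = beta = 1);
# elsewhere they differ whenever lambda != 1.
assert np.isclose(alpha_to_beta(1.0, 1.3), 1.0)
assert not np.isclose(alpha_to_beta(1.2, 1.3), 1.2)
```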

In the object space, the object point with metric depth Zf is focused on the resampling spatial coordinate plane via the main lens with focal length f. According to the geometric structure of Fig. 2 and the thin lens model,

$$\frac{1}{\beta F} + \frac{1}{D + Z_f} = \frac{1}{f}. \tag{6}$$

By combining Eq. (4) with Eq. (6), Zf can be deduced to be

$$Z_f = \frac{m_1\alpha + m_2}{\alpha + m_3}, \qquad m_1 = \frac{DF\lambda + Df - Ff\lambda - Df\lambda}{f\lambda - F\lambda - f}, \quad m_2 = \frac{-Df}{f\lambda - F\lambda - f}, \quad m_3 = \frac{f}{f\lambda - F\lambda - f}, \tag{7}$$
where m = {m1, m2, m3} are depth mapping parameters related to D, F, f, and λ. For a specific UPC, F and λ are fixed, and D remains unchanged once the UPC is calibrated. Therefore, for a fixed focal length f, the depth mapping parameters m are constants. In this paper, each microlens unit has its own specific depth mapping parameters, i.e., ms.

At this point, a novel unfocused plenoptic metric model can be explicitly derived as

$$\begin{cases} s_\alpha = s + u\left(1 - \dfrac{1}{\alpha}\right) \\[2mm] Z_f = \dfrac{m_1\alpha + m_2}{\alpha + m_3}, \end{cases} \tag{8}$$
which establishes the depth metric relationship between the object and image spaces for the UPC.
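
As a numerical sanity check of Eq. (7), the depth mapping parameters can be computed from the optical constants and verified against the thin lens relation of Eq. (6). The sketch below uses hypothetical values of D, F, f, and λ (in millimeters) chosen only for illustration; the function names are ours.

```python
import numpy as np

def depth_mapping_params(D, F, f, lam):
    """m1, m2, m3 of Eq. (7), written over the common denominator (f*lam - F*lam - f)."""
    den = f * lam - F * lam - f
    m1 = (D * F * lam + D * f - F * f * lam - D * f * lam) / den
    m2 = -D * f / den
    m3 = f / den
    return m1, m2, m3

def metric_depth(alpha, m1, m2, m3):
    """Eq. (7): map a nonmetric depth alpha to the metric depth Z_f."""
    return (m1 * alpha + m2) / (alpha + m3)

# Hypothetical optical constants (mm), used purely for the consistency check.
D, F, f, lam = 300.0, 50.0, 45.0, 1.2
m1, m2, m3 = depth_mapping_params(D, F, f, lam)
alpha = 1.05
beta = 1.0 / (1.0 - (1.0 - 1.0 / alpha) / lam)          # Eq. (4)
Zf_thin_lens = 1.0 / (1.0 / f - 1.0 / (beta * F)) - D    # Eq. (6)
assert np.isclose(metric_depth(alpha, m1, m2, m3), Zf_thin_lens)
```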

2.2. Unfocused plenoptic metric calibration

Based on the explicit unfocused plenoptic metric model derived above, the goal of unfocused plenoptic metric calibration is to determine the depth mapping parameters ms. Precise depth pairs (Zf,α) are required to accomplish this objective. We introduced an auxiliary 3D measurement system (3DMS), which consists of a regular camera and projector, for unfocused plenoptic metric calibration, as illustrated in Fig. 3. The 3DMS can perform fringe projection profilometry [28,29] to measure spatial points in the object space and provide phase encoding for depth computation in the image space.

Fig. 3 Unfocused plenoptic metric calibration by using an auxiliary 3DMS.

The unfocused plenoptic metric calibration can be divided into the following three steps:

  • 1. Light field ray calibration.
  • 2. Depth measurement via coordinate transformation.
  • 3. Depth computation via phase encoding.

2.2.1. Light field ray calibration

Before measuring the spatial points, the 3DMS must be calibrated in advance [30,31]. After that, a target can be placed in the measurement volume, onto which vertical and horizontal fringe patterns are projected by the projector. Then, the UPC and the regular camera simultaneously capture the fringe images to compute two absolute phase maps in the vertical and horizontal directions, i.e., orthogonal phase maps. For an object point lying on a recorded light ray of the UPC, its homologous image points in the 3DMS can be located to reconstruct its 3D spatial coordinates by using the orthogonal phase maps. By changing the orientation of the target relative to the UPC, different spatial points lying on this light ray can be reconstructed similarly. These reconstructed collinear spatial points Xp=(Xp,Yp,Zp)T can be used to determine the corresponding light rays with metric spatio-angular parameters (ap,θp) satisfying the parametric equation

$$\begin{bmatrix} X_p \\ Y_p \end{bmatrix} = Z_p\,\theta_p + a_p. \tag{9}$$
Note that these reconstructed coordinates and calibrated parameters are defined in the projector coordinate system, as denoted by the subscript p in Eq. (9).
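
A minimal least-squares realization of Eq. (9) might look as follows (our own sketch; it assumes the reconstructed collinear points are stacked in an N×3 array and fits each lateral coordinate independently against Z_p). The residuals of such a fit correspond to the MAX and RMS fitting errors reported in Section 3.2.

```python
import numpy as np

def fit_light_ray(points_p):
    """Fit Eq. (9), [X_p, Y_p]^T = Z_p * theta_p + a_p, to collinear points
    reconstructed in the projector coordinate system (points_p: N x 3 array).
    Returns the slopes theta_p = (theta, phi) and intercepts a_p = (a, b)."""
    Z = points_p[:, 2]
    A = np.column_stack([Z, np.ones_like(Z)])          # design matrix [Z_p, 1]
    (theta, a), *_ = np.linalg.lstsq(A, points_p[:, 0], rcond=None)
    (phi, b), *_ = np.linalg.lstsq(A, points_p[:, 1], rcond=None)
    return np.array([theta, phi]), np.array([a, b])
```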

2.2.2. Depth measurement via coordinate transformation

As stated above, from the reconstructed spatial point Xp, we can obtain the metric depth defined in the projector coordinate system, but not in the light field coordinate system. A coordinate transformation from the projector coordinate system into the light field coordinate system must be implemented to determine the metric depth Zf.

Generally, a coordinate transformation contains a rotation R and a translation t satisfying Xf=RXp+t. Since the explicit unfocused plenoptic metric model focuses on depth mapping along the Zf axis, it is reasonable to assume that the projector coordinate system and light field coordinate system share the same origin, i.e., t=0. So, the coordinate transformation only contains the rotation associated with three rotation angles ωX, ωY, and ωZ around the Xp, Yp, and Zp axes, respectively, i.e., R=RZ(ωZ)RY(ωY)RX(ωX). Furthermore, because the rotation around the Zp axis does not affect the direction of the Zf axis, the rotation angle ωZ can be further ignored, i.e., ωZ=0, and the rotation can be simplified such that

$$R = R_Y(\omega_Y)\,R_X(\omega_X) = \begin{bmatrix} \cos\omega_Y & -\sin\omega_X\sin\omega_Y & -\cos\omega_X\sin\omega_Y \\ 0 & \cos\omega_X & -\sin\omega_X \\ \sin\omega_Y & \sin\omega_X\cos\omega_Y & \cos\omega_X\cos\omega_Y \end{bmatrix}. \tag{10}$$

The rotation in Eq. (10) has two degrees of freedom, which can be determined with the calibrated spatio-angular parameters. According to the imaging properties of UPCs, in the ideal case, the incident light ray closest to the optical axis of the main lens will strike the central pixel area of the image sensor. In this paper, the light ray with the central pixel spatial and angular coordinates is approximated as the optical axis of the main lens. In practice, however, the optical axis of the main lens will slightly deviate from the central pixel area of the image sensor due to the misalignment and manufacturing errors of the optical components, causing some degree of calibration error. This will be further studied in our future work.

After rotation, the direction of the central light ray defined in the projector coordinate system, denoted as θp0, will be parallel to the Zf axis defined in the light field coordinate system, which can be represented as

$$R\begin{bmatrix} \theta_{p0} \\ 1 \end{bmatrix} = \lambda_f\, n_{Z_f}, \tag{11}$$
where nZf=(0,0,1)T is a unit vector along the Zf axis and λf is an arbitrary scale factor.

With known θp0, Eq. (11) provides two constraints to determine the rotation angles exactly, as follows:

$$\begin{cases} \omega_X = \arcsin\dfrac{\varphi_{p0}}{\sqrt{\varphi_{p0}^2 + 1}} \\[3mm] \omega_Y = \arcsin\dfrac{\theta_{p0}}{\sqrt{\theta_{p0}^2 + \varphi_{p0}^2 + 1}}. \end{cases} \tag{12}$$

Then, the coordinate transformation can be performed to obtain the spatial points Xf, which can be utilized to determine the metric spatio-angular parameters (af,θf) and the metric depth Zf defined in the light field coordinate system.
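
The two-angle alignment of Eqs. (10)–(12) can be reproduced in a few lines (a sketch; the central-ray slopes reported in Section 3.2 are reused here only as an illustration):

```python
import numpy as np

def rotation_from_central_ray(theta_p0, phi_p0):
    """Rotation angles of Eq. (12) and rotation matrix of Eq. (10)."""
    wX = np.arcsin(phi_p0 / np.sqrt(phi_p0**2 + 1.0))
    wY = np.arcsin(theta_p0 / np.sqrt(theta_p0**2 + phi_p0**2 + 1.0))
    cX, sX, cY, sY = np.cos(wX), np.sin(wX), np.cos(wY), np.sin(wY)
    R = np.array([[cY, -sX * sY, -cX * sY],
                  [0.0, cX, -sX],
                  [sY, sX * cY, cX * cY]])
    return wX, wY, R

# Central-ray direction reported in Section 3.2.
wX, wY, R = rotation_from_central_ray(0.1394, 0.0850)
d = R @ np.array([0.1394, 0.0850, 1.0])
# wX ~ 0.0848 rad, wY ~ 0.1381 rad, and d is parallel to (0, 0, 1)^T.
```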

2.2.3. Depth computation via phase encoding

In this step, some specific constraint conditions from the imaging properties of UPCs are employed to compute the depth α corresponding to the reconstructed metric depth Zf. When a recorded light field is resampled at an image plane with a depth α corresponding to an in-focus object point on a Lambertian surface, light rays emitted from this object point are focused on the same spatial location in the resampling spatial coordinate plane and have the same radiance intensities. That is to say, at this spatial location, the sheared light field LIα remains constant for any angular coordinates. We refer to the resampled light rays at this spatial location as focused ones. Besides, it can be seen from Eq. (5) that the shear is independent of the depth for central angular coordinates of u=0. Therefore, the sheared light field provides photo consistency for this object such that LI(sα,u)=LI(s,0).

Instead of photo intensity, we adopt fringe-projection-based phase encoding for depth computation. The phase information, which is insensitive to the scene features (e.g., color, texture, and shading), can be utilized to construct matching features precisely for consistency constraints of the light field imaging. By using the 3DMS, orthogonal fringe projection can be performed to retrieve phase-encoded fields ϕIV,H(s,u), where the superscripts V and H correspond to vertical and horizontal directions of fringe patterns, respectively. The consistency property stated above can also be applied to the phase-encoded fields, which we refer to as phase consistency.

Then, the shear sα can be precisely computed under the dual phase consistency constraints represented as

$$\begin{cases} \phi_I^V(s_\alpha, u) = \phi_I^V(s, 0) \\ \phi_I^H(s_\alpha, u) = \phi_I^H(s, 0). \end{cases} \tag{13}$$
A shear amount of the spatial coordinates associated with specific angular coordinates, defined as
$$\Delta s = s_\alpha - s = u\left(1 - \frac{1}{\alpha}\right), \tag{14}$$
provides two degrees of freedom to compute one depth. Thus, the depth can be optimized from all of the validly recorded light rays. After that, the measured depth pairs (Zfi, αi), i = 1, 2, …, Np, where Np is the number of calibrated positions in the measurement volume, are acquired to determine the depth mapping parameters ms.
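
Ignoring the imaging distortion introduced in Section 2.3, a plain least-squares estimate of the depth α from Eq. (14) over all validly recorded light rays can be sketched as follows (the function name and data layout are our own assumptions; the result corresponds to the distortion-free depth α̃ used later as an initial value):

```python
import numpy as np

def depth_from_shears(u, ds):
    """Estimate alpha from Eq. (14), Delta_s = u (1 - 1/alpha), where u and ds
    stack the angular coordinates and measured shear amounts of all rays (N x 2)."""
    u = np.asarray(u, dtype=float).ravel()
    ds = np.asarray(ds, dtype=float).ravel()
    k = (u @ ds) / (u @ u)      # least-squares estimate of k = 1 - 1/alpha
    return 1.0 / (1.0 - k)
```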

2.3. Unfocused plenoptic imaging distortion

Aside from optimization through the constraints provided by Eq. (14), the depth α can be directly obtained from

$$\|\Delta s\| = r\left|1 - \frac{1}{\alpha}\right|, \tag{15}$$
where r = ‖u‖ denotes the angular coordinate distance from the intersection point of a light ray to the origin of the angular coordinate plane. Since the shear amounts can be accurately obtained from the phase-encoded fields, the computed depths related to the focused light rays are theoretically consistent.

However, the experimental results indicated that the depth computation is inconsistent for the angular coordinates (see Section 3.1), even though the shear amount can be precisely obtained via the phase encoding. By analyzing Eq. (5), one can see that there must be errors in the angular coordinates parameterized by the plane of the main lens. Indeed, it is the lens distortion that introduces angular coordinate errors that significantly impact the depth computation, which we refer to as unfocused plenoptic imaging distortion.

We thus define the angular coordinate error as δu and rewrite Eq. (5) as

$$s_\alpha = s + (u + \delta u)\left(1 - \frac{1}{\alpha}\right). \tag{16}$$
The angular coordinate error changes the direction of an incident light ray after passing through the main lens. As illustrated in Fig. 4, the shear value α associated with a direction-changed light ray (solid red line) differs from that associated with a distortion-free light ray (black dashed line). Thus, the angular coordinate error must be modeled and determined to compute the shear value accurately.

Fig. 4 Unfocused plenoptic imaging distortion.

2.3.1. Lateral distortion

Generally, the lens distortion is primarily radially symmetric because of the symmetric lens design. We employed the classical radial distortion model

$$\delta r = u\left(k_1 r^2 + k_2 r^4 + k_3 r^6\right), \tag{17}$$
where k={k1,k2,k3} are radial distortion parameters, to represent the lateral component of the unfocused plenoptic imaging distortion, i.e., δu=δr. In this case, the unfocused plenoptic imaging distortion is only related to the angular coordinates. For light rays with the same angular coordinates but propagating to different microlens units, the main lens may introduce discrepant angular coordinate errors. Thus, each microlens unit has specific radial distortion parameters, i.e., ks. Regarding this point, a remaining question is whether or not the lateral distortion can fully express the unfocused plenoptic imaging distortion. Specifically, the consistency of the radial distortion parameters at diverse depths must be checked.

If the depths are computed without considering the imaging distortion and then substituted into Eq. (16) to determine the radial distortion parameters, theoretically, the result will be ks=0, because the residual term δu(1 − 1/α) tends to zero regardless of the computed depth. We thus employed the shear amounts Δsi obtained from the phase encoding and the depths αi computed without considering the imaging distortion at different calibration positions to perform the parameter optimization

$$\arg\min_{\tau} \sum_{u} \left\| \Delta s^i - (u + \delta u)\left(1 - \frac{1}{\alpha^i}\right) \right\|^2, \tag{18}$$
where δu=δr and τ={ksi}, to check the consistency of the radial distortion parameters. In this paper, the Levenberg-Marquardt algorithm was used for parameter optimization.
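
The consistency check of Eq. (18) can be sketched with a standard Levenberg-Marquardt solver, here scipy.optimize.least_squares with method='lm'; the data layout and function name are our own assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_radial_distortion(u, ds, alpha):
    """Optimize Eq. (18) for k = (k1, k2, k3) at one calibration position,
    with the depth alpha held fixed and delta_u = delta_r of Eq. (17)."""
    u = np.asarray(u, dtype=float)        # (N, 2) angular coordinates
    ds = np.asarray(ds, dtype=float)      # (N, 2) measured shear amounts
    r2 = np.sum(u ** 2, axis=1, keepdims=True)

    def residuals(k):
        delta_u = u * (k[0] * r2 + k[1] * r2 ** 2 + k[2] * r2 ** 3)   # Eq. (17)
        return (ds - (u + delta_u) * (1.0 - 1.0 / alpha)).ravel()

    return least_squares(residuals, x0=np.zeros(3), method='lm').x
```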

The characteristic curves of the optimal radial distortion parameters ksi as functions of the computed depths αi at the specific spatial coordinates s=(74,359)T are shown in Fig. 5. It can be observed in Fig. 5 that ksi are inconsistent at different depths. Consequently, the lateral distortion cannot fully express the unfocused plenoptic imaging distortion. Furthermore, the characteristic curves have similar trends; in particular, they all exhibit breakpoints at α=1. Thus, ksi contain a common factor associated with the computed depths.

Fig. 5 Characteristic curves of the optimal radial distortion parameters concerning the computed depths in the specific spatial coordinates.

2.3.2. Depth distortion

According to the variations of the characteristic curves in Fig. 5, we constructed the following mathematical model for the common factor of the radial distortion parameters:

$$\delta\alpha = \frac{1}{q_1(\alpha - 1) + q_2(\alpha - 1)^2}. \tag{19}$$
Because the common factor is dependent on depths, we defined the common factor as a depth distortion, where q={q1,q2} are the depth distortion parameters. Similarly, each microlens unit has specific depth distortion parameters, i.e., qs.

The aberration of an unfocused plenoptic imaging system, especially of the main lens, causes light rays emitted from an object point to deviate from an ideal image point after propagating through the imaging system. The deviation depends on not only the incidence height on the pupil of the imaging system, i.e., the angular coordinate distance r, but also the image distance related to the depth α. In this case, the unfocused plenoptic imaging distortion can be modeled as δu=δrδα, simultaneously containing the lateral and depth distortions.

Similarly, we performed the parameter optimization of Eq. (18) with δu=δrδα and τ={ksi,qsi} to check the consistency of the radial and depth distortion parameters at different depths. Figure 6 shows the characteristic curves of the optimal radial and depth distortion parameters concerning the computed depths. Note that the orders of magnitude of the radial distortion parameters in Figs. 5 and 6 are different. When simultaneously considering the lateral and depth components of the unfocused plenoptic imaging distortion, the relevant distortion parameters remain consistent overall within the calibrated depth range. Therefore, the unfocused plenoptic imaging distortion model can fully express the unfocused plenoptic imaging distortion.

Fig. 6 Characteristic curves of the optimal radial and depth distortion parameters concerning the computed depths in the specific spatial coordinates.

2.4. Complete unfocused plenoptic metric model

Now, the complete unfocused plenoptic metric model can be established by combining the explicit unfocused plenoptic metric model and unfocused plenoptic imaging distortion model such that

$$\begin{cases} s_\alpha = s + (u + \delta u)\left(1 - \dfrac{1}{\alpha}\right) \\[2mm] \delta u = u\,\dfrac{k_1 r^2 + k_2 r^4 + k_3 r^6}{q_1(\alpha - 1) + q_2(\alpha - 1)^2} \\[2mm] Z_f = \dfrac{m_1\alpha + m_2}{\alpha + m_3} \\[2mm] \begin{bmatrix} X_f \\ Y_f \end{bmatrix} = Z_f\,\theta_f + a_f \end{cases} \tag{20}$$
The complete unfocused plenoptic metric model can sufficiently represent the imaging properties of UPCs, including light field shear, imaging distortion, depth mapping, and 3D reconstruction. Each microlens unit in specific spatial coordinates contains 12 internal parameters: four spatio-angular parameters, five distortion parameters, and three depth mapping parameters. Once the internal parameters are determined, the metric coordinates Xf of an object point can be mapped from the shear sα, enabling unfocused plenoptic 3D measurement.
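
For bookkeeping, the 12 internal parameters attached to each microlens unit can be grouped as in the following sketch (a hypothetical container, not part of any released toolbox):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class MicrolensParams:
    """The 12 internal parameters of one microlens unit in Eq. (20)."""
    a_f: np.ndarray      # spatio-angular intercepts (a_f, b_f)   - 2 values
    theta_f: np.ndarray  # spatio-angular slopes (theta_f, phi_f) - 2 values
    k: np.ndarray        # radial distortion (k1, k2, k3)         - 3 values
    q: np.ndarray        # depth distortion (q1, q2)              - 2 values
    m: np.ndarray        # depth mapping (m1, m2, m3)             - 3 values
```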

We correspondingly designed a three-step unfocused plenoptic metric calibration strategy to determine the internal parameters. The overall flow chart of unfocused plenoptic metric calibration and 3D measurement is shown in Fig. 7.

Fig. 7 Overall flow chart of unfocused plenoptic metric calibration and 3D measurement.

The proposed unfocused plenoptic metric calibration method can be summarized as follows:

Step 1. Ray calibration and depth measurement

First, light field ray calibration can be performed to determine the metric spatio-angular parameters (ap,θp) defined in the projector coordinate system. Then, the metric angular parameters θp0 of the central light ray can be utilized to determine the coordinate transformation associated with the two rotation angles (ωX,ωY). The calibrated rotation angles can be used to transform the reconstructed collinear spatial points from the projector coordinate system into the light field coordinate system. Finally, the metric spatio-angular parameters and metric depths defined in the light field coordinate system, i.e., {afs,θfs;Zfsi}, can be determined.

Step 2. Distortion calibration and depth optimization

The nonmetric depths αsi corresponding to the metric depths Zfsi cannot be computed directly from the shear through Eq. (16), which contains the nonlinear lens distortion. Since the unfocused plenoptic imaging distortion model is consistent within the calibrated depth range, the radial and depth distortion parameters and the nonmetric depths can be optimized simultaneously. In this situation, the parameter optimization of Eq. (18) can be rewritten as

$$\arg\min_{\tau} \sum_{i}\sum_{u} \left\| \Delta s^i - (u + \delta u)\left(1 - \frac{1}{\alpha^i}\right) \right\|^2, \tag{21}$$
where τ={ks,qs;αsi}. Besides, the depths α̃si, which are computed without considering the imaging distortion, can be used as initial values for the parameter optimization.
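
A sketch of this joint optimization is given below (our own data layout: ds_list holds the measured shear amounts of the Np calibration positions, and the initial values k_init, q_init, and alpha_init would come from a radial-only fit and the distortion-free depths α̃). Note that in Eq. (20) the parameters k and q enter only through their ratio, so in practice one of them can be held fixed to remove this scale ambiguity.

```python
import numpy as np
from scipy.optimize import least_squares

def joint_distortion_calibration(u, ds_list, k_init, q_init, alpha_init):
    """Optimize Eq. (21) for tau = {k1..k3, q1, q2; alpha_1..alpha_Np}."""
    u = np.asarray(u, dtype=float)                 # (N, 2) angular coordinates
    r2 = np.sum(u ** 2, axis=1, keepdims=True)

    def residuals(p):
        k, q, alphas = p[:3], p[3:5], p[5:]
        res = []
        for ds, a in zip(ds_list, alphas):
            # Distortion model of Eq. (20): delta_u = delta_r * delta_alpha.
            delta_u = u * (k[0] * r2 + k[1] * r2 ** 2 + k[2] * r2 ** 3) \
                        / (q[0] * (a - 1.0) + q[1] * (a - 1.0) ** 2)
            res.append((ds - (u + delta_u) * (1.0 - 1.0 / a)).ravel())
        return np.concatenate(res)

    p0 = np.concatenate([k_init, q_init, alpha_init])
    sol = least_squares(residuals, p0, method='lm')
    return sol.x[:3], sol.x[3:5], sol.x[5:]
```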

Step 3. Depth mapping calibration

By using the precise pairs of the metric depths and optimal depths from the first two steps, the depth mapping parameters {ms} can be fitted by Eq. (7).

After that, the overall internal parameters {afs,θfs;ks,qs;ms} are determined for unfocused plenoptic 3D measurement. First, the shear sα can be obtained by using either passive or active light field depth estimation and employed to optimize the depths of the measured scene through parameter optimization of Eq. (18) with τ={αs}. Then, the corresponding depth and transverse dimensions can be computed by using Eq. (20) together with the calibrated internal parameters, enabling unfocused plenoptic 3D measurement.
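
Once the internal parameters of a microlens unit are available, the metric coordinates follow directly from the last two lines of Eq. (20). A minimal sketch, reusing the hypothetical MicrolensParams container introduced above, is:

```python
import numpy as np

def reconstruct_point(alpha, p):
    """Map an optimized nonmetric depth alpha to the metric coordinates
    (X_f, Y_f, Z_f) of one microlens unit using Eq. (20)."""
    Zf = (p.m[0] * alpha + p.m[1]) / (alpha + p.m[2])   # depth mapping
    XY = Zf * p.theta_f + p.a_f                         # [X_f, Y_f]^T = Z_f * theta_f + a_f
    return np.array([XY[0], XY[1], Zf])
```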

It should be noted that the system structure represented diagrammatically in Figs. 1-4 is not limited to UPCs but is shared by any other unfocused plenoptic imaging system that can be simplified as a main lens and an MLA located at the image plane of the main lens. This means that the proposed method of unfocused plenoptic metric modeling and calibration is, in theory, suitable for all unfocused plenoptic imaging systems. Besides, any device that can provide accurate matching features, not only a digital projector, may be used for system calibration with the proposed method.

3. Results and discussion

We employed a Lytro UPC (Lytro Illum) for the experiments. An auxiliary 3DMS consisting of a projector (Casio XJ-VC100) and a regular camera (Daheng MER-130-30UM) was set up. The overall system architecture is shown in Fig. 8. The working distance was about 300 mm. Light field images were decoded with spatial and angular resolutions of 434 × 625 pixels and 15 × 15 pixels, respectively. A white plane taken as a target was placed at 21 positions with different orientations relative to the UPC. At each target calibration position, orthogonal fringe projection was performed by the 3DMS to provide phase encoding as accurate matching features to locate the spatial points and shears.

Fig. 8 Experimental system architecture.

3.1. Depth computation consistency check

One of the target calibration positions was chosen for the consistency check. Figure 9(a) presents a visualization of the phase-encoded field ϕIV(s,u) retrieved via fringe analysis [32,33]. The detailed structure of the phase-encoded field is demonstrated by two enlarged heat-maps. It can be seen from Fig. 9(a) that the phase-encoded field changes with not only the spatial coordinates of the microlens units but also the angular coordinates inside each microlens unit. Figure 9(b) plots the cross-sectional segment indicated by the red line segment in Fig. 9(a) to further analyze this variation of the phase-encoded field. The curves with the same spatial coordinates monotonically decrease, while those with the same angular coordinates, marked by the blue and green lines corresponding to u=(0,0)T and u=(4,0)T, respectively, monotonically increase. In other words, the phase-encoded field associated with the target at this calibration position monotonically increases globally but monotonically decreases locally.

Fig. 9 Phase-encoded field: (a) visualization with two enlarged heat-maps; (b) curves of a cross-sectional segment indicated by the red line segment in (a); (c) and (d) phase distributions corresponding to the two angular coordinates, respectively; (e) curves of two cross-sectional segments indicated by the blue and green line segments in (c) and (d), respectively; (f) enlarged version of the curve segments within the black box in (e).

The phase distributions ϕIV(s) corresponding to the two angular coordinates are shown in Figs. 9(c) and 9(d), respectively. In Fig. 9(e), the curves of two cross-sectional segments indicated by the blue and green line segments in Figs. 9(c) and 9(d), respectively, are plotted. The two curves have the same monotonic distribution trends and consistent separation. This separation is related to the shear amount, as demonstrated in Fig. 9(f), which shows an enlarged version of the curve segments within the black box in Fig. 9(e).

Then, the shear amounts Δs were determined to compute the depth α in different angular coordinates through Eq. (13) and Eq. (15), respectively. Table 1 lists the computation results in the central local angular coordinates. It can be observed that the computed depths vary distinctly depending on the angular coordinates, as visualized in Fig. 10. The maximum depth is 1.5878 at u=(1,4)T, while the minimum depth is 1.1183 at u=(1,1)T, with a significant difference of 0.4695. The computed depth tends to increase gradually as the angular coordinates move farther away from the center. Consequently, the depth computation is inconsistent for the angular coordinates.

Table 1. Computed depths in central local angular coordinates.

Fig. 10 Visualization of the computation results listed in Table 1.

3.2. Unfocused plenoptic metric calibration

According to the flow in Fig. 7, light field ray calibration was performed first. With the orthogonal absolute phase maps retrieved from the orthogonal fringe projection, 21 pairs of homologous image points in the 3DMS were identified to reconstruct the corresponding spatial points Xp lying on the same light ray. Their spatial coordinates were defined in the projector coordinate system. One set of reconstructed collinear spatial points related to a specific light ray with spatial coordinates s=(74,359)T and angular coordinates u=(0,0)T was taken as an example. The light ray was fitted through the reconstructed spatial points, with a maximum (MAX) fitting error of 0.0355 mm and a root-mean-square (RMS) fitting error of 0.0098 mm. Figure 11(a) shows the reconstructed collinear spatial points along with the corresponding fitted light ray.

Fig. 11 Light field ray calibration: (a) measured collinear spatial points along with the corresponding fitted light ray; (b) central light ray relative to the XY planes in the light field coordinate system and projector coordinate system.

The central light ray with central spatial coordinates s=(218,313)T and central angular coordinates u=(0,0)T was selected to be the optical axis of the main lens. Its corresponding metric angular parameters were θp0=(0.1394,0.0850)T, which were employed to compute the rotation angles ωX and ωY to be 0.0848 rad and 0.1381 rad, respectively. Then, the rotation R could transform the reconstructed spatial points and fitted light rays from the projector coordinate system into the light field coordinate system. Figure 11(b) demonstrates that after transformation, the central light ray in the light field coordinate system is perpendicular to the XfYf plane at a point with metric spatial coordinates of (77.2663; −222.1767; 0). In other words, the central light ray is parallel to the Zf axis. In comparison, the XpYp plane is transformed and simultaneously shown in Fig. 11(b). It can be observed that the central light ray merely intersects the XpYp plane at a point with metric spatial coordinates of (77.2663; −222.1767; −29.7926), but is not perpendicular to the XpYp plane.

The next step was the optimization of the distortion parameters and depths in the image space. Before iterative optimization, the depths were computed without considering the imaging distortion and used as the initial depths. For instance, the initial depth related to the calibration position taken for the depth computation consistency check was computed to be α˜si=1.2529 by using Eq. (14). However, the final result of the parameter optimization via Eq. (21) was αsi=1.1323.

The distortion parameters ks and qs were also optimized within the calibrated depth range. To analyze the effects of unfocused plenoptic imaging distortion, we used the calibrated distortion parameters to visualize vector field graphs of the angular coordinate errors corresponding to different depths, as shown in Fig. 12. On the one hand, the effect of the radial distortion on the angular coordinate error gradually increases from the center to the border of the angular coordinate plane. On the other hand, the depth distortion at different depths leads to divergence among the vector field graphs. Specifically, when α<1, the angular coordinate error reduces the absolute value of the angular coordinate, as shown in Figs. 12(a) and 12(b); when α>1, the opposite holds, as shown in Figs. 12(c) and 12(d). Moreover, the effect of the depth distortion on the angular coordinate error increases as the depth approaches 1.

Fig. 12 Visualization of the effects of unfocused plenoptic imaging distortion on the angular coordinate error with depths of (a) 0.4; (b) 0.8; (c) 1.2; (d) 1.6.

Figure 13 shows the precisely measured depth pairs (Zfsi,αsi) (blue box), demonstrating that the intrinsic relationship between the depths in the object and image spaces is monotonic. Utilizing the measured depth pairs, we fitted the depth mapping curve, with a MAX fitting error of 0.4545 mm and an RMS fitting error of 0.2554 mm. Figure 13 also presents the fitted curve (red curve) and the corresponding fitting errors (green crosses). At this point, the unfocused plenoptic metric calibration was accomplished. The relevant calibrated internal parameters are listed in Table 2.

Fig. 13 Depth mapping calibration: measured depth pairs and fitted results.

Table 2. Calibrated internal parameters in specific spatial coordinates s = (74, 359)T.

We selected one of the measured depth pairs at a specific calibration position to perform the accuracy analysis of the proposed unfocused plenoptic metric modeling and calibration. The remaining 20 pairs were utilized to recalibrate the UPC, similar to the calibration procedure described above. The difference was that the calibration procedure was implemented twice this time, with and without considering the unfocused plenoptic imaging distortion. Then, two sets of internal parameters were determined to reconstruct the target with the depths of the selected measured depth pair. Correspondingly, two sets of reconstructed 3D data were obtained.

Since the measurement precision of the 3DMS (less than 0.05 mm) is higher than that of the UPC, the reconstruction accuracy of the UPC can be tested by referring to the 3DMS. Figures 14(a) and 14(b) present the distributions of the depth differences between the two sets of reconstructed depths and the metric depths of the selected measured depth pair, respectively. Table 3 lists the relevant depth difference data. It can be observed that the depth measurement precision increases when considering the imaging distortion. Histograms corresponding to the two depth difference distributions are shown in Figs. 14(c) and 14(d) to quantify the precision. The depth measurement precision without considering the imaging distortion was 0.4 mm in a depth range of 300 mm. When considering the imaging distortion, the depth measurement precision increased to 0.25 mm. In comparison, in our previous work [27], the depth measurement precision of the implicit polynomial-approximated model was 0.5 mm. Thus, these results confirm the validity of the proposed unfocused plenoptic metric modeling and calibration.

Fig. 14 Accuracy analysis: depth difference distributions (a) with and (b) without considering imaging distortion; depth difference histograms (c) with and (d) without considering imaging distortion.

Table 3. Relevant MAX, MEAN, and RMS of depth differences (mm).

3.3. Unfocused plenoptic 3D measurement

Finally, we adopted the calibrated internal parameters to implement unfocused plenoptic 3D measurement of an experimental scene involving two separate objects, whose far and near refocused views are presented in Figs. 15(a) and 15(b), respectively. The shear of the measured scene was accurately obtained through the phase consistency constraint, and the corresponding depths were optimized through Eq. (18) with τ={αs} and the calibrated distortion parameters, as shown in Fig. 15(c). Then, by using the depth mapping parameters, the nonmetric depths could be mapped to the metric depths, as shown in Fig. 15(d). The scene was consequently reconstructed with the metric depths and calibrated spatio-angular parameters, with the 3D point cloud and 3D model results presented in Figs. 15(e) and 15(f), respectively.

Fig. 15 Unfocused plenoptic 3D measurement: (a) far and (b) near refocused views; (c) nonmetric depth map; (d) metric depth map; (e) 3D point cloud; (f) 3D model.

4. Summary and conclusion

In this paper, we presented an explicit derivation of an unfocused plenoptic metric model associating a measured light field in the object space with a recorded light field in the image space following the imaging properties of UPCs. The depth measurement precision was improved to 0.4 mm, in comparison with the precision of 0.5 mm obtained using an implicit polynomial-approximated model in our previous work.

We further found that the depth computation is inconsistent in different angular coordinates owing to the lens distortion that results in angular coordinate errors. Thus, we introduced a universal radial distortion model to deal with the lateral distortion. The experimental results demonstrated that the radial distortion parameters contained depth-dependent common factors, which were then modeled as the depth distortions. By extending the unfocused plenoptic imaging distortion to include the lateral and depth distortions, we established a complete unfocused plenoptic metric model.

Furthermore, we proposed a three-step unfocused plenoptic metric calibration strategy to determine 12 internal parameters for each microlens unit: ray calibration for four spatio-angular parameters, distortion calibration for five distortion parameters, and mapping calibration for three depth mapping parameters. By considering the unfocused plenoptic imaging distortion, the depth measurement precision can be increased to 0.25 mm.

In conclusion, we accomplished unfocused plenoptic metric modeling and developed a calibration method to determine the system parameters and achieve high-accuracy unfocused plenoptic 3D measurement.

Funding

National Key R&D Program of China (2017YFF0106401); Sino-German Cooperation Group (GZ 1391); National Natural Science Foundation of China (NSFC) (61875137, 11804231); Natural Science Foundation of Guangdong Province (2018A030313831).

References

1. M. Levoy and P. Hanrahan, “Light field rendering,” in Proceeding of ACM SIGGRAPH (ACM, 1996), 31–42.

2. S. J. Gortler, R. Grzeszczuk, R. Szeliski, and M. F. Cohen, “The lumigraph,” in Proceeding of ACM SIGGRAPH (ACM, 1996), 43–54.

3. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light field microscopy,” ACM Trans. Graph. 25(3), 924–934 (2006). [CrossRef]  

4. M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, “Wave optics theory and 3-D deconvolution for the light field microscope,” Opt. Express 21(21), 25418–25439 (2013). [CrossRef]   [PubMed]  

5. R. Prevedel, Y. G. Yoon, M. Hoffmann, N. Pak, G. Wetzstein, S. Kato, T. Schrödel, R. Raskar, M. Zimmer, E. S. Boyden, and A. Vaziri, “Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy,” Nat. Methods 11(7), 727–730 (2014). [CrossRef]   [PubMed]  

6. X. Lin, J. Wu, G. Zheng, and Q. Dai, “Camera array based light field microscopy,” Biomed. Opt. Express 6(9), 3179–3189 (2015). [CrossRef]   [PubMed]  

7. L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica 2(2), 104–111 (2015). [CrossRef]  

8. N. C. Pegard, H.-Y. Liu, N. Antipa, M. Gerlock, H. Adesnik, and L. Waller, “Compressive light-field microscopy for 3D neural activity recording,” Optica 3(5), 517–524 (2016). [CrossRef]  

9. M. A. Taylor, T. Nobauer, A. Pernia-Andrade, F. Schlumm, and A. Vaziri, “Brain-wide 3D light-field imaging of neuronal activity with speckle-enhanced resolution,” Optica 5(4), 345–353 (2018). [CrossRef]  

10. F. E. Ives, “Parallax stereogram and process of making same,” United States Patent Application 725,567, (1903).

11. G. Lippmann, “Epreuves reversible. Photographies integrals,” Comptes Rendus Academie des Sciences 146(3), 446–451 (1908).

12. E. H. Adelson and J. Y. A. Wang, “Single lens stereo with a plenoptic camera,” IEEE Trans. Pattern Anal. Mach. Intell. 14(2), 99–106 (1992). [CrossRef]  

13. R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Stanford Technical Report CSTR, 1–11 (2005).

14. L. Andrew and G. Todor, “Full resolution lightfield rendering,” Adobe Technical Report (2008).

15. C. Perwass and L. Wietzke, “Single lens 3D-camera with extended depth-of-field,” in Conference on Human Vision and Electronic Imaging XVII (SPIE, 2012), 829108. [CrossRef]  

16. O. Johannsen, C. Heinze, B. Goldluecke, and C. Perwass, “On the Calibration of Focused Plenoptic Cameras,” in Time-of-Flight and Depth Imaging: Sensors, Algorithms and Applications (Springer, 2013), 302–317.

17. D. C. Brown, “Decentering distortion of lenses,” Photogramm. Eng. 32, 444–462 (1966).

18. C. Heinze, S. Spyropoulos, S. Hussmann, and C. Perwass, “Automated robust metric calibration of multi-focus plenoptic cameras,” in International Instrumentation and Measurement Technology Conference (IEEE, 2015), 2038–2043. [CrossRef]  

19. N. Zeller, C. A. Noury, F. Quint, C. Teuliere, U. Stilla, and M. Dhome, “Metric calibration of a focused plenoptic camera based on a 3D calibration target,” in International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences (ISPRS, 2016), 449–456.

20. N. Zeller, F. Quint, and U. Stilla, “Depth estimation and camera calibration of a focused plenoptic camera for visual odometry,” ISPRS J. Photogramm. Remote Sens. 118, 83–100 (2016). [CrossRef]  

21. H. Sardemann and H.-G. Maas, “On the accuracy potential of focused plenoptic camera range determination in long distance operation,” ISPRS J. Photogramm. Remote Sens. 114, 1–9 (2016). [CrossRef]  

22. D. G. Dansereau, O. Pizarro, and S. B. Williams, “Decoding, calibration and rectification for lenselet-based plenoptic cameras,” in Conference on Computer Vision and Pattern Recognition (IEEE, 2013), 1027–1034. [CrossRef]  

23. Y. Bok, H.-G. Jeon, and I. S. Kweon, “Geometric calibration of micro-lens-based light field cameras using line features,” IEEE Trans. Pattern Anal. Mach. Intell. 39(2), 287–300 (2017). [CrossRef]   [PubMed]  

24. C. Li, X. Zhang, and D. Tu, “Metric three-dimensional reconstruction model from a light field and its calibration,” Opt. Eng. 56(1), 013105 (2017). [CrossRef]  

25. B. Chen and B. Pan, “Full-field surface 3D shape and displacement measurements using an unfocused plenoptic camera,” Exp. Mech. 58(5), 831–845 (2018). [CrossRef]  

26. Z. Cai, X. Liu, X. Peng, and B. Z. Gao, “Ray calibration and phase mapping for structured-light-field 3D reconstruction,” Opt. Express 26(6), 7598–7613 (2018). [CrossRef]   [PubMed]  

27. Z. Cai, X. Liu, Q. Tang, X. Peng, and B. Z. Gao, “Light field 3D measurement using unfocused plenoptic cameras,” Opt. Lett. 43(15), 3746–3749 (2018). [CrossRef]   [PubMed]  

28. Z. Zhang, “Review of single-shot 3D shape measurement by phase calculation-based fringe projection techniques,” Opt. Lasers Eng. 50(8), 1097–1106 (2012). [CrossRef]  

29. Z. Cai, X. Liu, A. Li, Q. Tang, X. Peng, and B. Z. Gao, “Phase-3D mapping method developed from back-projection stereovision model for fringe projection profilometry,” Opt. Express 25(2), 1262–1277 (2017). [CrossRef]   [PubMed]  

30. Y. Yin, X. Peng, A. Li, X. Liu, and B. Z. Gao, “Calibration of fringe projection profilometry with bundle adjustment strategy,” Opt. Lett. 37(4), 542–544 (2012). [CrossRef]   [PubMed]  

31. X. Liu, Z. Cai, Y. Yin, J. Hao, D. He, W. He, Z. Zhang, and X. Peng, “Calibration of fringe projection profilometry using an inaccurate 2D reference target,” Opt. Lasers Eng. 89, 131–137 (2017). [CrossRef]  

32. C. Quan, W. Chen, and C. J. Tay, “Phase-retrieval techniques in fringe-projection profilometry,” Opt. Lasers Eng. 48(2), 235–243 (2010). [CrossRef]  

33. C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Lasers Eng. 85, 84–103 (2016). [CrossRef]  

References

  • View by:

  1. M. Levoy and P. Hanrahan, “Light field rendering,” in Proceeding of ACM SIGGRAPH (ACM, 1996), 31–42.
  2. S. J. Gortler, R. Grzeszczuk, R. Szeliski, and M. F. Cohen, “The lumigraph,” in Proceeding of ACM SIGGRAPH (ACM, 1996), 43–54.
  3. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light field microscopy,” ACM Trans. Graph. 25(3), 924–934 (2006).
    [Crossref]
  4. M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, “Wave optics theory and 3-D deconvolution for the light field microscope,” Opt. Express 21(21), 25418–25439 (2013).
    [Crossref] [PubMed]
  5. R. Prevedel, Y. G. Yoon, M. Hoffmann, N. Pak, G. Wetzstein, S. Kato, T. Schrödel, R. Raskar, M. Zimmer, E. S. Boyden, and A. Vaziri, “Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy,” Nat. Methods 11(7), 727–730 (2014).
    [Crossref] [PubMed]
  6. X. Lin, J. Wu, G. Zheng, and Q. Dai, “Camera array based light field microscopy,” Biomed. Opt. Express 6(9), 3179–3189 (2015).
    [Crossref] [PubMed]
  7. L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica 2(2), 104–111 (2015).
    [Crossref]
  8. N. C. Pegard, H.-Y. Liu, N. Antipa, M. Gerlock, H. Adesnik, and L. Waller, “Compressive light-field microscopy for 3D neural activity recording,” Optica 3(5), 517–524 (2016).
    [Crossref]
  9. M. A. Taylor, T. Nobauer, A. Pernia-Andrade, F. Schlumm, and A. Vaziri, “Brain-wide 3D light-field imaging of neuronal activity with speckle-enhanced resolution,” Optica 5(4), 345–353 (2018).
    [Crossref]
  10. F. E. Ives, “Parallax stereogram and process of making same,” United States Patent Application 725,567, (1903).
  11. G. Lippmann, “Epreuves reversible. Photographies integrals,” Comptes Rendus Academie des Sciences 146(3), 446–451 (1908).
  12. E. H. Adelson and J. Y. A. Wang, “Single lens stereo with a plenoptic camera,” IEEE Trans. Pattern Anal. Mach. Intell. 14(2), 99–106 (1992).
    [Crossref]
  13. R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Stanford Technical Report CSTR, 1–11 (2005).
  14. L. Andrew and G. Todor, “Full resolution lightfield rendering,” Adobe Technical Report (2008).
  15. C. Perwass and L. Wietzke, “Single lens 3D-camera with extended depth-of-field,” in Conference on Human Vision and Electronic Imaging XVII (SPIE, 2012), 829108.
    [Crossref]
  16. O. Johannsen, C. Heinze, B. Goldluecke, and C. Perwass, “On the Calibration of Focused Plenoptic Cameras,” in Time-of-Flight and Depth Imaging: Sensors, Algorithms and Applications (Springer, 2013), 302–317.
  17. D. C. Brown, “Decentering distortion of lenses,” Photogramm. Eng. 32, 444–462 (1966).
  18. C. Heinze, S. Spyropoulos, S. Hussmann, and C. Perwass, “Automated robust metric calibration of multi-focus plenoptic cameras,” in International Instrumentation and Measurement Technology Conference (IEEE, 2015), 2038–2043.
    [Crossref]
  19. N. Zeller, C. A. Noury, F. Quint, C. Teuliere, U. Stilla, and M. Dhome, “Metric calibration of a focused plenoptic camera based on a 3D calibration target,” in International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences (ISPRS, 2016), 449–456.
  20. N. Zeller, F. Quint, and U. Stilla, “Depth estimation and camera calibration of a focused plenoptic camera for visual odometry,” ISPRS J. Photogramm. Remote Sens. 118, 83–100 (2016).
    [Crossref]
  21. H. Sardemann and H.-G. Maas, “On the accuracy potential of focused plenoptic camera range determination in long distance operation,” ISPRS J. Photogramm. Remote Sens. 114, 1–9 (2016).
    [Crossref]
  22. D. G. Dansereau, O. Pizarro, and S. B. Williams, “Decoding, calibration and rectification for lenselet-based plenoptic cameras,” in Conference on Computer Vision and Pattern Recognition (IEEE, 2013), 1027–1034.
    [Crossref]
  23. Y. Bok, H.-G. Jeon, and I. S. Kweon, “Geometric calibration of micro-lens-based light field cameras using line features,” IEEE Trans. Pattern Anal. Mach. Intell. 39(2), 287–300 (2017).
    [Crossref] [PubMed]
  24. C. Li, X. Zhang, and D. Tu, “Metric three-dimensional reconstruction model from a light field and its calibration,” Opt. Eng. 56(1), 013105 (2017).
    [Crossref]
  25. B. Chen and B. Pan, “Full-field surface 3D shape and displacement measurements using an unfocused plenoptic camera,” Exp. Mech. 58(5), 831–845 (2018).
    [Crossref]
  26. Z. Cai, X. Liu, X. Peng, and B. Z. Gao, “Ray calibration and phase mapping for structured-light-field 3D reconstruction,” Opt. Express 26(6), 7598–7613 (2018).
    [Crossref] [PubMed]
  27. Z. Cai, X. Liu, Q. Tang, X. Peng, and B. Z. Gao, “Light field 3D measurement using unfocused plenoptic cameras,” Opt. Lett. 43(15), 3746–3749 (2018).
    [Crossref] [PubMed]
  28. Z. Zhang, “Review of single-shot 3D shape measurement by phase calculation-based fringe projection techniques,” Opt. Lasers Eng. 50(8), 1097–1106 (2012).
    [Crossref]
  29. Z. Cai, X. Liu, A. Li, Q. Tang, X. Peng, and B. Z. Gao, “Phase-3D mapping method developed from back-projection stereovision model for fringe projection profilometry,” Opt. Express 25(2), 1262–1277 (2017).
    [Crossref] [PubMed]
  30. Y. Yin, X. Peng, A. Li, X. Liu, and B. Z. Gao, “Calibration of fringe projection profilometry with bundle adjustment strategy,” Opt. Lett. 37(4), 542–544 (2012).
    [Crossref] [PubMed]
  31. X. Liu, Z. Cai, Y. Yin, J. Hao, D. He, W. He, Z. Zhang, and X. Peng, “Calibration of fringe projection profilometry using an inaccurate 2D reference target,” Opt. Lasers Eng. 89, 131–137 (2017).
    [Crossref]
  32. C. Quan, W. Chen, and C. J. Tay, “Phase-retrieval techniques in fringe-projection profilometry,” Opt. Lasers Eng. 48(2), 235–243 (2010).
    [Crossref]
  33. C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Lasers Eng. 85, 84–103 (2016).
    [Crossref]


Figures (15)

Fig. 1 Light field parameterization.
Fig. 2 Light field resampling.
Fig. 3 Unfocused plenoptic metric calibration by using an auxiliary 3DMS.
Fig. 4 Unfocused plenoptic imaging distortion.
Fig. 5 Characteristic curves of the optimal radial distortion parameters concerning the computed depths in the specific spatial coordinates.
Fig. 6 Characteristic curves of the optimal radial and depth distortion parameters concerning the computed depths in the specific spatial coordinates.
Fig. 7 Overall flow chart of unfocused plenoptic metric calibration and 3D measurement.
Fig. 8 Experimental system architecture.
Fig. 9 Phase-encoded field: (a) visualization with two enlarged heat-maps; (b) curves of a cross-sectional segment indicated by the red line segment in (a); (c) and (d) phase distributions corresponding to the two angular coordinates, respectively; (e) curves of two cross-sectional segments indicated by the blue and green line segments in (c) and (d), respectively; (f) enlarged version of the curve segments within the black box in (e).
Fig. 10 Visualization of the computation results listed in Table 1.
Fig. 11 Light field ray calibration: (a) measured collinear spatial points along with the corresponding fitted light ray; (b) central light ray relative to the XY planes in the light field coordinate system and projector coordinate system.
Fig. 12 Visualization of the effects of unfocused plenoptic imaging distortion on the angular coordinate error with depths of (a) 0.4; (b) 0.8; (c) 1.2; (d) 1.6.
Fig. 13 Depth mapping calibration: measured depth pairs and fitted results.
Fig. 14 Accuracy analysis: depth difference distributions (a) with and (b) without considering imaging distortion; depth difference histograms (c) with and (d) without considering imaging distortion.
Fig. 15 Unfocused plenoptic 3D measurement: (a) far and (b) near refocused views; (c) nonmetric depth map; (d) metric depth map; (e) 3D point cloud; (f) 3D model.

Tables (3)

Table 1 Computed depths in central local angular coordinates.
Table 2 Calibrated internal parameters in specific spatial coordinates s = (74, 359)ᵀ.
Table 3 Relevant MAX, MEAN, and RMS of depth differences (mm).
