## Abstract

The polarization properties of reflected light capture important information about an object's inherent properties: its material composition, i.e. index of refraction and scattering properties, and its shape, i.e. surface normal. Polarization information has therefore been used for surface reconstruction using a single-view camera with unpolarized incident light. However, this surface normal reconstruction technique suffers from a zenith angle ambiguity. In this paper, we utilize circularly polarized light to resolve the zenith ambiguity by developing a detailed model based on the Mueller matrix formalism and division-of-focal-plane polarization imaging technology. Experimental results validate our model for accurate surface reconstruction.

© 2015 Optical Society of America

## 1. Introduction

The topic of surface reconstruction from polarization has seen much interest from the computer vision community [1–11]. Light reflected from a surface changes its polarization state based on the angle of reflection, so observing the polarization state allows for the recovery of the surface normal. The difficulty in the reconstruction problem lies in the fact that it is underconstrained; the information acquired using unpolarized light illumination from a single view is not sufficient to resolve the zenith and azimuth ambiguities.

Rahmann and Canterakis [1] constrained the problem by taking images from multiple viewpoints and applied their method to specular surfaces using an iterative optimization routine. Miyazaki et al. [2] took two images, tilting the target object at a small angle in the second image to constrain the zenith angle, and propagated information from the boundary of the object to the entire image to resolve the azimuth angle. Atkinson and Hancock [3] combined polarization and shading information from stereo images to recover the surface normals from diffuse reflections. Morel et al. [4] applied polarization images to metallic surfaces. They exploited the large index of refraction of these materials to constrain the zenith angle and developed an active lighting system to disambiguate the azimuth angle. Stolz et al. [5] used a multispectral system, relying on two different degrees of polarization at two different wavelengths, to solve the zenith ambiguity. Other methods [6,7] simultaneously recover the index of refraction and surface normal. However, both of these methods require numerous images to resolve the various ambiguities and are computationally expensive.

All of the existing methods for shape reconstruction from polarization analyze only the linear component of polarization. The contribution of this paper is to exploit the phase information of light, i.e. the circular polarization properties of light, to determine shape. The phase difference between the two orthogonal components of the light wave determines the elliptical component of the polarization state. Using the elliptical component of polarization, the ambiguity of the zenith angle in Fresnel reflections can be resolved from a single viewpoint using a single imaging sensor. In addition, this paper applies the powerful concepts of Stokes vectors and Mueller matrices to analyze more complex input light sources and derive a closed-form solution for both the zenith and azimuth angles. To solve for the azimuth ambiguity, this paper uses a reconstruction method based on the one presented in [2]. Furthermore, this paper employs a division-of-focal-plane polarimeter capable of acquiring polarization information at every imaged frame with high spatial resolution [12]. The imaging sensor is realized by monolithically integrating pixelated nanowire polarization filters with an array of pixels that use CCD technology.

## 2. Polarization properties of light and Stokes parameters

Light is a transverse wave that is fully characterized by its intensity, wavelength, and polarization [10]. Polarization of light, which defines the orientation of light waves as they propagate through space and time, is undetectable with the unaided eye or with typical color or monochromatic imaging sensors. Nevertheless, polarization imaging is of great interest to various research communities, ranging from remote sensing [11,13,14] to marine biology [15–19] and medical imaging [20–23]. The wide research interest in polarization imaging is in part because the polarization state of light changes when it interacts with a medium, whether through reflection, transmission, or scattering [16].

Sir George Stokes was the first to propose a mathematical model representing both the polarized and unpolarized states of light in terms of observable quantities [24]. He found that the properties of light can be described in terms of four quantities, termed the Stokes parameters, and are modeled via Eq. (1):

This equation describes the intensity of a light beam observed through a linear polarizer rotated by an angle α with respect to a reference x-axis and a retarder with a phase shift ϕ and its fast axis aligned with the reference x-axis, as is shown in Fig. 1.

In Eq. (1), S_{0} describes the total intensity of the light; S_{1} describes the light that is horizontally or vertically polarized; S_{2} describes the light that is linearly polarized at 45° or 135°; and S_{3} describes the light that is right- or left-handed circularly polarized.

Together these four parameters form the Stokes vector S = [S_{0} S_{1} S_{2} S_{3}]^{T}, which can describe the polarized, partially polarized, or unpolarized state of light. From Eq. (1), we can conclude that a minimum of four measurements must be conducted with linear polarization filters and a retarder to determine the Stokes vector. If the retardance, ϕ, is set to either 0° (i.e. there is no retarder in the optical path) or 90° and the angle of linear polarization, α, is set to 0°, 45°, 90°, and 135°, the Stokes vector can be uniquely computed as presented by Eqs. (2) through (5).
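The reconstruction of Eqs. (2) through (5) can be sketched in code. This is a hedged illustration with hypothetical function and parameter names, assuming ideal filters and the intensity model of Eq. (1), I(α, ϕ) = ½(S₀ + S₁cos2α + S₂sin2α·cosϕ + S₃sin2α·sinϕ):

```python
import math

def stokes_from_measurements(i0, i45, i90, i135, i45_qwr):
    """Recover the Stokes vector from four linear-polarizer readings
    (retardance phi = 0 at alpha = 0, 45, 90, 135 degrees) plus one
    reading through a quarter-wave retarder (phi = 90 deg, alpha = 45 deg).

    From Eq. (1): I(alpha, phi) = 0.5*(S0 + S1*cos(2a)
                                       + S2*sin(2a)*cos(phi)
                                       + S3*sin(2a)*sin(phi)).
    """
    s0 = i0 + i90            # total intensity
    s1 = i0 - i90            # horizontal vs. vertical linear component
    s2 = i45 - i135          # 45 deg vs. 135 deg linear component
    s3 = 2.0 * i45_qwr - s0  # right vs. left circular component
    return [s0, s1, s2, s3]
```

Feeding synthetic intensities generated from a known Stokes vector back through this function recovers that vector exactly, which is a quick sanity check of the measurement model.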

Three additional quantities are computed from the Stokes vector: the Degree of Linear Polarization (DoLP), the Degree of Circular Polarization (DoCP), and the Angle of Polarization (AoP), which are defined by Eqs. (6) through (8).

The DoLP describes the fraction of the light which is linearly polarized, where 0 indicates unpolarized light, 1 represents fully linearly polarized light, and any number between 0 and 1 represents partially linearly polarized light. The DoCP describes the fraction of the light that is circularly polarized: left-handed circularly polarized light has negative values, while right-handed circularly polarized light has positive values. For values between −1 and 1, light is elliptically polarized, with the exception of DoCP = 0, which indicates no circular or elliptical component in the light. The AoP describes the orientation of the major axis of the polarization ellipse, formed by linearly and circularly polarized light, relative to the reference axis.
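These three metrics follow the standard definitions DoLP = √(S₁² + S₂²)/S₀, DoCP = S₃/S₀, and AoP = ½·atan2(S₂, S₁); a minimal sketch, with illustrative names:

```python
import math

def polarization_metrics(s0, s1, s2, s3):
    """DoLP, DoCP, and AoP (in degrees) from a Stokes vector."""
    dolp = math.sqrt(s1 * s1 + s2 * s2) / s0      # linear fraction, 0..1
    docp = s3 / s0                                # signed circular fraction
    aop = 0.5 * math.degrees(math.atan2(s2, s1))  # major-axis orientation
    return dolp, docp, aop
```

For example, horizontally polarized light [1, 1, 0, 0] gives DoLP = 1 and AoP = 0°, while left circularly polarized light [1, 0, 0, −1] gives DoLP = 0 and DoCP = −1.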

## 3. Shape from polarization

#### 3.1 Geometrical model

Light reflected from the surface of an object changes its polarization state based on the angle of the reflection and the index of refraction of the object [10]. We are interested in uniquely determining the surface normal orientation with respect to the camera position by analyzing the polarization state of the reflected light from an object. The surface normal orientation is given by the zenith angle, θ_{i}, which is equivalent to the angle of incidence, and the azimuth angle, φ, as shown in Fig. 2.

The Snell–Descartes law, presented by Eq. (9), relates the ratio of the sines of the angles of incidence θ_{i} and refraction θ_{r} to the ratio of the indices of refraction of the object, n_{2}, and the incident medium, n_{1}. If medium one is air, then n_{1} is approximately 1.
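As a small worked example of this relation (function name illustrative), the refraction angle follows from n₁ sin θᵢ = n₂ sin θᵣ:

```python
import math

def refraction_angle(theta_i_deg, n1=1.0, n2=1.5):
    """Solve Snell's law n1*sin(theta_i) = n2*sin(theta_r) for theta_r,
    in degrees, assuming external reflection (n2 >= n1)."""
    return math.degrees(math.asin(n1 * math.sin(math.radians(theta_i_deg)) / n2))
```

At normal incidence the refraction angle is zero, and for n₂ > n₁ the refracted ray always bends toward the normal.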

#### 3.2 Mueller matrices

When a light beam interacts with an object or medium, the polarization state of the output light (reflected or refracted) is always changed. The polarization signature can change in the amplitude of the orthogonal components of the electromagnetic wave, as well as their relative phases and directions. This interaction can also change the total amount of light that is polarized, either linearly or circularly, by transferring energy from a polarized state to an unpolarized state and the other way around. This interaction can be modeled by a set of linear equations represented by a 4-by-4 matrix called the Mueller matrix [10]. This means that Mueller matrices are designed to represent any changes in the polarization signature caused by a physical interaction or a geometrical transformation, such as coordinate system rotations. Equation (10) shows this interaction, where M is the Mueller matrix, S_{in} is the Stokes vector of the incident light, and S_{out} is the Stokes vector of the resulting light.
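The interaction of Eq. (10) and the coordinate rotation used later can be sketched directly. This is an illustrative implementation of the standard 4-by-4 rotation Mueller matrix (names hypothetical); note that only the linear components S₁ and S₂ mix under rotation, while S₀ and S₃ are invariant:

```python
import math

def rotation_mueller(theta_deg):
    """Mueller matrix rotating the polarization reference frame by theta."""
    c = math.cos(2.0 * math.radians(theta_deg))
    s = math.sin(2.0 * math.radians(theta_deg))
    return [[1.0, 0.0, 0.0, 0.0],
            [0.0,   c,   s, 0.0],
            [0.0,  -s,   c, 0.0],
            [0.0, 0.0, 0.0, 1.0]]

def apply_mueller(m, s_in):
    """S_out = M * S_in, i.e. the linear relation of Eq. (10)."""
    return [sum(m[i][j] * s_in[j] for j in range(4)) for i in range(4)]
```

For instance, rotating the frame by 45° moves horizontally polarized light [1, 1, 0, 0] entirely into the S₂ component, while a circularly polarized beam is unchanged.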

The Mueller matrix for reflection at an air-object interface is given by Eq. (11). A detailed discussion of Mueller matrices for light reflection and refraction can be found elsewhere [10,12].

In order to compute the Stokes vector of the light reflected from an object at any orientation, it is necessary to first rotate the input Stokes vector into the coordinate system of the object tangent plane by an angle θ. This rotation is achieved via the Mueller matrix presented by Eq. (12) [10]. When light is reflected, this rotation angle corresponds to the azimuth angle, φ, of the surface.

Using Eqs. (10) through (12), we can relate a light beam with Stokes vector S_{in}, reflected from a tangent surface rotated by an angle θ with incident angle θ_{i} and refraction angle θ_{r}, to the output Stokes vector S_{out} captured by the camera, as shown in Eq. (13).
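The reflection Mueller matrix itself can be sketched from the Fresnel amplitude coefficients. This is a hedged illustration of one common convention (after Goldstein [10]); the signs of the off-diagonal terms vary between texts, and the function names are hypothetical:

```python
import math

def fresnel_reflection_mueller(theta_i_deg, n=1.5):
    """Mueller matrix for reflection at an air-dielectric interface,
    built from the Fresnel amplitude coefficients r_s and r_p.
    Sign conventions for the off-diagonal terms vary between texts."""
    ti = math.radians(theta_i_deg)
    tr = math.asin(math.sin(ti) / n)              # Snell's law, n1 = 1
    rs = -math.sin(ti - tr) / math.sin(ti + tr)   # s-polarized amplitude
    rp = math.tan(ti - tr) / math.tan(ti + tr)    # p-polarized amplitude
    a = 0.5 * (rs * rs + rp * rp)
    b = 0.5 * (rs * rs - rp * rp)
    c = rs * rp
    return [[a, b, 0.0, 0.0],
            [b, a, 0.0, 0.0],
            [0.0, 0.0, c, 0.0],
            [0.0, 0.0, 0.0, c]]
```

A useful check: at the Brewster angle (θ_B = atan n) the p coefficient vanishes, so reflected unpolarized light has DoLP = |m₁₀/m₀₀| = 1 and the circular-preserving term m₂₂ goes to zero, exactly the behavior exploited in the next subsection.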

Using Mueller matrices and Stokes vectors to represent the change in polarization of a light beam that leaves the light source, reflects off an object, and is then captured by the polarimeter gives a simple closed-form representation of how the geometrical object model interacts with the polarization metrics of light. Furthermore, this approach allows the analysis of more complex light sources, rather than only completely unpolarized light as has been the case in previous works [1–7].

#### 3.3 Zenith ambiguity

If the input Stokes vector, S_{in}, is unpolarized, S_{in} = [1, 0, 0, 0]^{T}, then using Eqs. (6) through (8) we can find the DoLP and AoP of the output Stokes vector, S_{out}, as a function of θ, θ_{i}, and the object's index of refraction, as shown in Eqs. (14) and (15). However, the DoCP metric for S_{out} will always be zero, regardless of the object properties.

The first problem with using an unpolarized light source is that the zenith angle cannot be uniquely recovered from the DoLP, since there is a double ambiguity in θ_{i} everywhere in the zenith domain except at the Brewster angle (BA), defined as the point where the reflected light beam has a DoLP of 1, as shown in Fig. 3. For this reason, previous works have been restricted to objects with a high index of refraction, which moves the Brewster angle closer to 90° [25]; to reconstructing convex objects with small zenith angles in order to stay on the left side of the ambiguity [26]; or to using more than one scene view [2,3,5].
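This double ambiguity can be demonstrated numerically: for almost any DoLP value there are two zenith angles, one on each side of the Brewster angle, that produce it. A minimal sketch assuming the Fresnel reflection model with unpolarized incident light and n = 1.6 (which places the Brewster angle near the 57.99° quoted below); names are illustrative:

```python
import math

def dolp_unpolarized(theta_i_deg, n=1.6):
    """DoLP of initially unpolarized light after Fresnel reflection."""
    ti = math.radians(theta_i_deg)
    tr = math.asin(math.sin(ti) / n)
    rs2 = (math.sin(ti - tr) / math.sin(ti + tr)) ** 2
    rp2 = (math.tan(ti - tr) / math.tan(ti + tr)) ** 2
    return (rs2 - rp2) / (rs2 + rp2)

def second_solution(theta_lo_deg, n=1.6, tol=1e-10):
    """Given an angle below the Brewster angle, bisect for the distinct
    angle above it with the same DoLP, exhibiting the zenith ambiguity.
    (DoLP falls monotonically from 1 to 0 past the Brewster angle.)"""
    target = dolp_unpolarized(theta_lo_deg, n)
    lo, hi = math.degrees(math.atan(n)), 89.999
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if dolp_unpolarized(mid, n) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, the DoLP measured at a zenith angle of 30° is reproduced by a second, much larger angle past the Brewster angle, so the DoLP alone cannot decide between the two.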

In Fig. 3, there is an ambiguity in solving for the zenith angle, θ_{i}, using the DoLP (dashed red line) but none with the DoCP (solid blue line). The Brewster angle is located at 57.99°, where the DoCP equals zero.

If the input Stokes vector, S_{in}, is partially circularly polarized, S_{in} = [1, 0, 0, p]^{T}, where p is the normalized S_{3} component of the light source, the equations for the AoP and DoLP metrics remain the same as Eqs. (14) and (15) for completely unpolarized light. However, the DoCP now has a well-behaved curve over the zenith angle with no ambiguity, as shown in Fig. 3. The DoCP expression as a function of the zenith angle and a particular index of refraction is shown in Eq. (16). For Eq. (16), the DoCP ranges from −p to p, with p ∈ [−1, 1], and it is zero at the Brewster angle.

Equation (16) gives a closed-form solution for the zenith angle without any ambiguity as long as the index of refraction of the object material is known and the light source has a known polarization signature.
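The inversion of Eq. (16) can be sketched numerically. This is a hedged illustration assuming the Fresnel reflection model, a known index of refraction n, and a known source circular component p (all names hypothetical); since the reflected DoCP is monotone in the zenith angle, simple bisection suffices:

```python
import math

def docp_reflected(theta_i_deg, n=1.5, p=-0.86):
    """DoCP of the reflected beam for incident Stokes vector [1, 0, 0, p],
    using the Fresnel amplitude coefficients r_s and r_p."""
    ti = math.radians(theta_i_deg)
    tr = math.asin(math.sin(ti) / n)
    rs = -math.sin(ti - tr) / math.sin(ti + tr)
    rp = math.tan(ti - tr) / math.tan(ti + tr)
    return p * 2.0 * rs * rp / (rs * rs + rp * rp)

def zenith_from_docp(docp, n=1.5, p=-0.86):
    """Invert docp_reflected over (0, 90) degrees by bisection,
    keeping the half-interval that brackets the sign change."""
    lo, hi = 1e-6, 89.999999
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        f_lo = docp_reflected(lo, n, p) - docp
        f_mid = docp_reflected(mid, n, p) - docp
        if f_lo * f_mid <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

A round trip (compute the DoCP at a known zenith angle, then invert it) recovers the angle, and the DoCP passes through zero at the Brewster angle, matching the behavior described for Eq. (16).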

#### 3.4 Azimuth ambiguity

The azimuth angle, φ, represents the slant of the surface normal, or equivalently the orientation of the plane of incidence. That orientation has a direct relation to the angle by which the tangent plane has been rotated, as expressed by Eq. (17).

If the light is partially circularly polarized or completely unpolarized, then using Eq. (14) we can express the azimuth angle in terms of the AoP, as shown in Eq. (18).

Note that the ambiguity is symmetric by 180° and that, for a continuous surface, the flip should occur where the AoP is 90°, i.e. where the azimuth angle is 0° or 180°. To resolve this ambiguity, an approach similar to the one in [2] is taken: one pixel in each column of the image is initialized to a known solution of the ambiguity. This pixel serves as a seed, and the algorithm propagates through the rest of the pixels in the same column as follows. The azimuth angle, φ, is calculated for each vertical neighbor under both hypotheses, adding 90° or subtracting 90°, and compared to the previously accepted φ value, starting at the seed point. Since the surface of interest is assumed smooth and continuous, the hypothesis with the smaller squared difference is selected, and the algorithm continues to the next neighbor. The reason for initializing one pixel in each column, rather than one in each row, is that the camera x-axis is parallel to the imager rows, so the ambiguity flips along the vertical direction. Image noise could result in a wrong estimation of φ, but this can be compensated for by initializing more than one contiguous vertical pixel in each column, computing the difference over a neighborhood rather than a single pixel, and accurately specifying the borders or discontinuities of the object, which can be done with a region-growing algorithm in the intensity domain.
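The per-column propagation can be sketched as follows. This is a minimal illustration, assuming the two azimuth hypotheses are AoP + 90° and AoP − 90° (which differ by the 180° ambiguity) and a single seed pixel per column; all names are hypothetical:

```python
def disambiguate_column(aop_column, seed_row, seed_phi):
    """Propagate the azimuth disambiguation down (and up) one image column.
    For each pixel, the hypotheses phi = AoP + 90 and phi = AoP - 90 are
    tested against the previously accepted neighbor; the hypothesis with
    the smaller squared difference wins, assuming a smooth surface."""
    n = len(aop_column)
    phi = [0.0] * n
    phi[seed_row] = seed_phi
    # Visit pixels below the seed first, then the pixels above it.
    order = list(range(seed_row + 1, n)) + list(range(seed_row - 1, -1, -1))
    for r in order:
        prev = phi[r - 1] if r > seed_row else phi[r + 1]
        h1 = (aop_column[r] + 90.0) % 360.0
        h2 = (aop_column[r] - 90.0) % 360.0
        phi[r] = h1 if (h1 - prev) ** 2 <= (h2 - prev) ** 2 else h2
    return phi
```

With a constant AoP column, seeding either branch propagates that branch down the whole column, which is the smoothness behavior the algorithm relies on. A production version would also handle angle wraparound at 0°/360° and object borders.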

#### 3.5 From azimuth and zenith angles to surface reconstruction

In order to reconstruct the object surface we have to make the assumption that the surface is a Cartesian one, defined by Eq. (19).

If a transformation from spherical to Cartesian coordinates is performed, using the azimuth and zenith angles, the surface normal can be expressed via Eq. (20).

The goal is to reconstruct the objective function defined by Eq. (19) by integrating the surface normals at each pixel location. To perform this step, the surface is assumed to be continuous and differentiable over the x and y domain. We also have to consider the noise induced by the polarimeter measurements; if we assume the noise is Gaussian, we need to minimize the least-square cost function shown in Eq. (21) in its continuous form:

Equation (21) represents the integral of the squared differences of the functions; its discrete form over a rectangular grid, such as the imager grid, is shown in Eq. (22):

where the subscript "F" denotes the Frobenius norm. The algorithm to minimize the least-square cost function is explained in [27]. There are several other algorithms that can integrate surface normals into a reconstructed surface, the most common being the Frankot-Chellappa algorithm [28]. The Frankot-Chellappa algorithm is computationally fast since it relies on computationally efficient Fast Fourier transforms; however, it is restricted to surfaces with a zero-mean slant [29], while the solution proposed in [27] does not make this assumption. In addition, the algorithm in [27] minimizes the error induced by polarimeter noise, which would otherwise lead to aberrations that propagate over the whole surface.
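As a point of comparison, the Frankot-Chellappa projection can be sketched in a few lines of numpy. The gradient fields would come from the surface normals of Eq. (20), e.g. p = tan θ cos φ and q = tan θ sin φ up to sign conventions; this is a minimal illustration, not the solver of [27]:

```python
import numpy as np

def frankot_chellappa(p, q):
    """Integrate a gradient field (p = dz/dx, q = dz/dy) into a surface
    by projecting onto the integrable subspace in the Fourier domain."""
    rows, cols = p.shape
    u = 2.0 * np.pi * np.fft.fftfreq(cols)   # angular frequency per sample
    v = 2.0 * np.pi * np.fft.fftfreq(rows)
    U, V = np.meshgrid(u, v)
    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    denom = U**2 + V**2
    denom[0, 0] = 1.0                        # avoid division by zero at DC
    Z = (-1j * U * P - 1j * V * Q) / denom
    Z[0, 0] = 0.0                            # absolute height is unrecoverable
    return np.real(np.fft.ifft2(Z))
```

For a band-limited periodic surface with analytic gradients, the projection recovers the surface exactly up to its (unrecoverable) mean height, which makes for a convenient self-test.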

## 4. Polarimeter technology and experimental setup

A typical polarization imaging setup is composed of rotating linear polarization filters and/or rotating quarter-wave retarders. This imaging apparatus, known as a division of time polarimeter [30], provides very accurate polarization information at the cost of a reduced frame rate. Furthermore, if there is motion in the scene, the object of interest will move across the several sample images acquired with different polarization filters. Hence, the polarization information will be inaccurate due to pixel misregistration between frames.

Division of focal plane (DoFP) imaging sensors combine pixelated polarization filters monolithically with an array of imaging elements [8]. This imaging sensor has the benefit of capturing all polarization information in a single frame and does not suffer from motion artifacts as do division of time polarimeters. The DoFP image sensor works by grouping its pixels into 2x2 super-pixels and covering the four pixels with linear polarizers at orientations of 0° (upper left), 45° (upper right), 90° (lower right), and 135° (lower left) as shown in Fig. 4. Loss in spatial resolution caused by the DoFP filter pattern can be recovered through interpolation and denoising algorithms, where the inherent noise of the sensor is included in the model for improved reconstruction results [31,32]. Another DoFP approach is to include a pixelated quarter wave retarder (QWR) within the super-pixel configuration in order to extract all four Stokes parameters in a single snapshot [33].

For our surface reconstruction work in this paper, a partially complete Stokes vector DoFP imaging sensor was utilized. The sensor is achieved by monolithic integration of pixelated aluminum nanowire linear polarization filters with an array of CCD imaging elements. The pixel pitch of the aluminum nanowire filters matches the pixel pitch of the CCD camera, which is 7.4 microns. The aluminum nanowires are 70 nm wide, with a 70 nm air gap between adjacent wires, and 140 nm tall. The nanowires are ~6.4 microns long, leaving 1 micron of "dead" space between adjacent pixels. The CCD polarization imager has 1900 by 1600 pixels and operates at 40 frames per second. Detailed characterization of the opto-electronic performance of the imager is presented in [34]. The sensor is calibrated using a Mueller matrix approach as described in [35].

In order to capture the circular polarization properties of light, i.e. the fourth Stokes parameter S3, an achromatic QWR is mounted in front of the imaging sensor with its fast axis aligned to the camera’s x-axis as is shown in Fig. 5. To avoid problems in intensity normalization between measurements caused by the non-perfect transmission ratio of the QWR, S3 is normalized using the computed total intensity S0 when ϕ is 90°, as presented by Eq. (2).

The object of interest is placed inside of a cylindrical structure made of high quality achromatic circular polarizer sheets (API APNCP37-035-60G). The cylinder has a small aperture for the camera to look inside the imaging area. The cylinder is surrounded by a diffusing cloth, which also has an opening on one side for the imaging polarimeter. The light first passes through the diffusing cloth, to avoid light-saturated spots on the surface, and then through a left circular polarizer sheet before hitting the object and being reflected back to the camera. These circular polarizer sheets have a similar DoCP response regardless of the angle of incidence, as measured with our imaging polarimeter. The DoCP and DoLP components of light going through a circular polarizer sheet were measured to be −0.86 and 0, respectively, creating an incident Stokes vector of S_{in} = [1 0 0 −0.86]^{T}. This Stokes vector has no linearly polarized component; the light is a mixture of circularly polarized and unpolarized light. To avoid artifacts caused by shadows or reflections, the imaged object is suspended in air. The DoFP polarimeter uses Canon EF lenses, with focal lengths ranging from 18 mm to 100 mm.

## 5. Experimental results

Figure 6 presents a monochromatic image, angle of polarization (AoP), degree of linear polarization (DoLP), and degree of circular polarization (DoCP) obtained from the polarization sensor. The polarization information is calibrated using a Mueller matrix calibration procedure described in [35]. The scene shows a PET plastic bottle, with an index of refraction of 1.64.

The angle of polarization image is presented using a false color scheme, where red represents horizontally polarized light, i.e. a 0° angle of the light's oscillation, and blue represents vertically polarized light, i.e. a 90° angle of oscillation. The angle of polarization maps directly to the azimuth angle after the seed pixels in the image are provided.

The degree of linear polarization image is presented using a false color scheme, where blue depicts a low degree of linear polarization and red depicts a high degree of linear polarization. The degree of linear polarization has the highest value at and around the Brewster angle of the object. Figure 7 depicts a profile plot of the DoLP across a single line traversing the bottle, as shown in Fig. 6(c). The peak of the DoLP can be observed, as well as the ambiguity in uniquely solving for the zenith angle from the DoLP.

The degree of circular polarization image is depicted in false color as well, where blue represents left circularly polarized light and red represents right circularly polarized light. The Brewster angle of the object is depicted by a black contour around the bottle, obtained by examining where the DoCP is zero. Figure 7 shows a profile plot of the DoCP along the same line traversing the bottle, shown in Fig. 6(d), demonstrating the unique mapping from the DoCP measurement to the zenith angle.

Two regions, A and B, shown in Figs. 6(b) and 6(d), are selected for surface reconstruction. Figures 8(a) and 9(a) show the computed azimuth angle in regions B and A, respectively, where region B crosses the ambiguity region. Figures 8(b) and 9(b) show the computed zenith angle in regions B and A, respectively, with a black contour showing the location of the Brewster angle. Figures 8(c) and 9(c) show the 3-D reconstruction of the plastic bottle in regions B and A, respectively. The smooth variations of the reconstructed surface indicate that the surface normals computed from the polarization information are accurate.

Figure 10 presents the surface reconstruction of a spherical HDPE object with an index of refraction of 1.54 using our method. Spherical objects have continuous variations in AoP, DoLP, and DoCP, covering most of the function domain described by Eq. (19). The region C selected for reconstruction is shown in Figs. 10(b) and 10(d). Figure 11(a) shows the computed azimuth angle, where the algorithm previously described successfully resolves the ambiguity. Figure 11(b) shows the computed zenith angle, where again the slant angle is correctly characterized. Figure 11(c) shows the reconstructed sphere in the selected region C.

The algorithm used to disambiguate the azimuth angle depends on the surface being continuous (with the same index of refraction) and on having a seed pixel in each column where the ambiguity is known. A figure showing all disambiguated azimuth angles is not possible unless the surface to be reconstructed is larger than the camera's field of view. For Fig. 6, the PET plastic bottle, the scene contains a bottle cap (with a different index of refraction) in the middle, which makes the surface discontinuous.

## 6. Error calculation

To evaluate the accuracy of our method combined with the polarimeter technology, we designed an experiment to measure the root-mean-square errors (RMSE) of the DoCP and zenith angle estimates. The experiment consisted of mounting a flat black plastic surface, with an index of refraction of approximately 1.4, on a rotation stage (Thorlabs NR360S) inside our experimental setup, as shown in Fig. 12. The stage's rotational axis was parallel to the camera's x-axis, and the plastic plane was orthogonal to the stage's plane. The stage and plastic plane were aligned such that the plastic plane was parallel to the camera's sensor plane at the initial stage position (zero degrees).

The experiment consisted of rotating the stage, under computer control, to change the incident angle on the plastic plane viewed by the camera, and estimating the DoCP and zenith angle at each rotation using our method. The stage rotated from 0° to 85° in steps of 5°. Figure 13(a) shows the DoCP vs. the true incident angle with an RMSE of 0.0005, while Fig. 13(b) shows the measured zenith angle error with an RMSE of 0.1150°.
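The RMSE figures above follow the usual definition; a minimal sketch, with illustrative names:

```python
import math

def rmse(estimates, truths):
    """Root-mean-square error between estimated and true values."""
    n = len(truths)
    return math.sqrt(sum((e - t) ** 2 for e, t in zip(estimates, truths)) / n)
```

For the stage sweep described above, `estimates` would hold the zenith angles recovered at each 5° step and `truths` the commanded stage angles.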

## 7. Conclusion

We have introduced a method for estimating surface normals from a single camera view using circular polarization information, provided the index of refraction of the imaged object is known. Conversely, our imaging method can estimate the index of refraction of an object if the shape is known a priori. Future work will address the limitation that the index of refraction must be known, extending the method to objects with multiple or unknown indices of refraction. In addition, detailed modeling of the scattering properties of the object can further expand this method to a broader class of objects.

The main contributions of this paper are twofold: a mathematical framework for polarization based surface normal reconstruction and utilizing circular polarization to uniquely solve the zenith angle of the surface normal. The mathematical framework presented in this paper utilizes Mueller matrices and Stokes vectors to model polarization properties of the incident light source, the object’s inherent properties (surface and index of refraction), and the polarization state of the reflected light.

Utilizing circularly polarized incident light and capturing the circular polarization properties of the reflected light, a unique solution for the zenith angle of the surface normals is computed. Experimental images are obtained using a unique imaging sensor composed of a CCD pixel array with aluminum nanowires for polarization sensitivity.

## Acknowledgment

This work was supported by National Science Foundation grant number OCE-1130897, and Air Force Office of Scientific Research grant numbers FA9550-10-1-0121 and FA9550-12-1-0321.

## References and links

**1. **S. Rahmann and N. Canterakis, “Reconstruction of specular surfaces using polarization imaging,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2001), pp. I-149. [CrossRef]

**2. **D. Miyazaki, M. Kagesawa, and K. Ikeuchi, “Transparent surface modeling from a pair of polarization images,” IEEE Trans. Pattern Anal. Mach. Intell. **26**(1), 73–82 (2004). [CrossRef] [PubMed]

**3. **G. A. Atkinson and E. R. Hancock, “Shape estimation using polarization and shading from two views,” IEEE Trans. Pattern Anal. Mach. Intell. **29**(11), 2001–2017 (2007). [CrossRef] [PubMed]

**4. **O. Morel, C. Stolz, F. Meriaudeau, and P. Gorria, “Active lighting applied to three-dimensional reconstruction of specular metallic surfaces by polarization imaging,” Appl. Opt. **45**(17), 4062–4068 (2006). [CrossRef] [PubMed]

**5. **C. Stolz, M. Ferraton, and F. Meriaudeau, “Shape from polarization: a method for solving zenithal angle ambiguity,” Opt. Lett. **37**(20), 4218–4220 (2012). [CrossRef] [PubMed]

**6. **V. Thilak, D. G. Voelz, and C. D. Creusere, “Polarization-based index of refraction and reflection angle estimation for remote sensing applications,” Appl. Opt. **46**(30), 7527–7536 (2007). [CrossRef] [PubMed]

**7. **C. P. Huynh, A. Robles-Kelly, and E. Hancock, “Shape and refractive index recovery from single-view polarization images,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2010), pp. 1229–1236.

**8. **V. Gruev, R. Perkins, and T. York, “CCD polarization imaging sensor with aluminum nanowire optical filters,” Opt. Express **18**(18), 19087–19094 (2010). [CrossRef] [PubMed]

**9. **R. Perkins and V. Gruev, “Signal-to-noise analysis of Stokes parameters in division of focal plane polarimeters,” Opt. Express **18**(25), 25815–25824 (2010). [CrossRef] [PubMed]

**10. **D. Goldstein, *Polarized Light* (Marcel Dekker, 2003).

**11. **L. J. Deuzé, F. M. Bréon, C. Devaux, P. Goloub, M. Herman, B. Lafrance, F. Maignan, A. Marchand, F. Nadal, G. Perry, and D. Tanré, “Remote sensing of aerosols over land surfaces from POLDER‐ADEOS‐1 polarized measurements,” J. Geophys. Res., D, Atmospheres **106**(D5), 4913 (2001). [CrossRef]

**12. **T. York, S. B. Powell, S. Gao, L. Kahan, T. Charanya, and D. Saha, “Bioinspired polarization imaging sensors: from circuits and optics to signal processing algorithms and biomedical applications,” Proc. IEEE **102**(10), 1450–1469 (2014).

**13. **E. Puttonen, J. Suomalainen, T. Hakala, and J. Peltoniemi, “Measurement of reflectance properties of asphalt surfaces and their usability as reference targets for aerial photos,” IEEE Trans. Geosci. Rem. Sens. **47**(7), 2330–2339 (2009). [CrossRef]

**14. **T. Treibitz and Y. Y. Schechner, “Active polarization descattering,” IEEE Trans. Pattern Anal. Mach. Intell. **31**(3), 385–399 (2009). [CrossRef] [PubMed]

**15. **P. Brady and M. Cummings, “Differential response to circularly polarized light by the jewel scarab beetle Chrysina gloriosa,” Am. Nat. **175**(5), 614–620 (2010). [CrossRef] [PubMed]

**16. **T. W. Cronin, N. Shashar, R. L. Caldwell, J. Marshall, A. G. Cheroske, and T. H. Chiou, “Polarization vision and its role in biological signaling,” Integr. Comp. Biol. **43**(4), 549–558 (2003). [CrossRef] [PubMed]

**17. **N. Shashar, R. Hagan, J. G. Boal, and R. T. Hanlon, “Cuttlefish use polarization sensitivity in predation on silvery fish,” Vision Res. **40**(1), 71–75 (2000). [CrossRef] [PubMed]

**18. **A. Sweeney, C. Jiggins, and S. Johnsen, “Insect communication: polarized light as a butterfly mating signal,” Nature **423**(6935), 31–32 (2003). [CrossRef] [PubMed]

**19. **G. Horváth and D. Varjú, *Polarized Light in Animal Vision: Polarization Patterns in Nature* (Springer, 2004).

**20. **C. Paddock, T. Youngs, E. Eriksen, and R. Boyce, “Validation of wall thickness estimates obtained with polarized light microscopy using multiple fluorochrome labels: correlation with erosion depth estimates obtained by lamellar counting,” Bone **16**(3), 381–383 (1995). [CrossRef] [PubMed]

**21. **E. Salomatina-Motts, V. A. Neel, and A. N. Yaroslavskaya, “Multimodal polarization system for imaging skin cancer,” Opt. Spectrosc. **107**(6), 884–890 (2009). [CrossRef]

**22. **Y. Liu, T. York, W. Akers, G. Sudlow, V. Gruev, and S. Achilefu, “Complementary fluorescence-polarization microscopy using division-of-focal-plane polarization imaging sensor,” J. Biomed. Opt. **17**(11), 116001 (2012). [CrossRef] [PubMed]

**23. **V. V. Tuchin, L. Wang, and D. A. Zimnyakov, *Optical Polarization in Biomedical Applications* (Springer, 2006).

**24. **G. G. Stokes, “On the numerical calculation of a class of definite integrals and infinite series,” Transactions of the Cambridge Philosophical Society **9**, 329 (1851).

**25. **O. Morel, F. Meriaudeau, C. Stolz, and P. Gorria, “Polarization imaging applied to 3D reconstruction of specular metallic surfaces,” Proc. SPIE **2005**, 178–186 (2005). [CrossRef]

**26. **D. Miyazaki, M. Saito, Y. Sato, and K. Ikeuchi, “Determining surface orientations of transparent objects based on polarization degrees in visible and infrared wavelengths,” J. Opt. Soc. Am. A **19**(4), 687–694 (2002). [CrossRef] [PubMed]

**27. **M. Harker and P. O’Leary, “Least squares surface reconstruction from measured gradient fields,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2008), pp. 1–7. [CrossRef]

**28. **R. T. Frankot and R. Chellappa, “A method for enforcing integrability in shape from shading algorithms,” IEEE Trans. Pattern Anal. Mach. Intell. **10**(4), 439–451 (1988). [CrossRef]

**29. **O. Morel, F. Meriaudeau, C. Stolz, and P. Gorria, “Active lighting applied to 3D reconstruction of specular metallic surfaces by polarization imaging,” Appl. Opt. **45**(17), 4062–4068 (2006). [CrossRef] [PubMed]

**30. **J. S. Tyo, D. L. Goldstein, D. B. Chenault, and J. A. Shaw, “Review of passive imaging polarimetry for remote sensing applications,” Appl. Opt. **45**(22), 5453–5469 (2006). [CrossRef] [PubMed]

**31. **S. Gao and V. Gruev, “Gradient-based interpolation method for division-of-focal-plane polarimeters,” Opt. Express **21**(1), 1137–1151 (2013). [CrossRef] [PubMed]

**32. **E. Gilboa, J. P. Cunningham, A. Nehorai, and V. Gruev, “Image interpolation and denoising for division of focal plane sensors using Gaussian processes,” Opt. Express **22**(12), 15277–15291 (2014). [CrossRef] [PubMed]

**33. **G. Myhre, W. L. Hsu, A. Peinado, C. LaCasse, N. Brock, R. A. Chipman, and S. Pau, “Liquid crystal polymer full-stokes division of focal plane polarimeter,” Opt. Express **20**(25), 27393–27409 (2012). [CrossRef] [PubMed]

**34. **T. York and V. Gruev, “Characterization of a visible spectrum division-of-focal-plane polarimeter,” Appl. Opt. **51**(22), 5392–5400 (2012). [CrossRef] [PubMed]

**35. **S. B. Powell and V. Gruev, “Calibration methods for division-of-focal-plane polarimeters,” Opt. Express **21**(18), 21039–21055 (2013). [CrossRef] [PubMed]