Division-of-focal-plane (DoFP) imaging polarimeters are useful instruments for measuring polarization information for a variety of applications. Recent advances in nanofabrication have enabled the practical manufacture of DoFP sensors for the visible spectrum. These sensors are made by integrating nanowire polarization filters directly with an imaging array, and size variations of the nanowires due to fabrication can cause the optical properties of the filters to vary up to 20% across the imaging array. If left unchecked, these variations introduce significant errors when reconstructing the polarization image. Calibration methods offer a means to correct these errors. This work evaluates scalar and matrix calibration methods derived from a mathematical model of the polarimeter behavior. The methods are evaluated quantitatively with an existing DoFP polarimeter under varying illumination intensity and angle of linear polarization.
© 2013 Optical Society of America
Polarization imaging is the process of recording the polarization state of light in a scene. Typically this means recording the full or partial Stokes vectors of light across an image plane [1]. Applications of polarization imaging include remote sensing and general contrast enhancement [2–6]; the study of species that display or sense polarization [7–12]; and biomedical imaging applications [13–18]. Thus improvements to the quality of polarization imagers can have a tremendous impact on advancing the fields of remote sensing, marine biology, and biomedical imaging.
Polarization imagers generally work by: (1) modulating or splitting light into components that can be captured with conventional intensity-measuring image sensors and (2) reconstructing the original polarization image from the measured components. Popular modulation schemes include the division-of-time (DoT) polarimeter, where a filter or filters placed in front of the imager change over time [19, 20]; the division-of-amplitude (DoA) polarimeter where a prism splits the image into multiple paths and each path has its own polarization optics and image plane [21–24]; and the division-of-focal-plane (DoFP) polarimeter, which splits the image by placing a repeating pattern of pixel-pitch-matched polarization filters directly onto the pixels of an imager [25–35].
DoFP polarimeters have several advantages over competing imaging polarimeter architectures. Most notably, they capture all of the components needed to reconstruct the incident polarization state simultaneously—which avoids the motion-blur inherent to DoT polarimeters. In addition, their monolithic design makes them more compact and robust than either DoT or DoA polarimeters; this makes them the ideal choice for field work.
However, DoFP sensors have several notable sources of errors. Instantaneous field of view (IFOV) errors result from the incomplete reconstruction of the spatially modulated polarization image. Recent research has shown that Fourier-transform-based and interpolation-based reconstruction techniques can mitigate these errors [36–38].
A second source of error in DoFP polarimeters is the fixed pattern noise (FPN), which represents the spatial variations in the optical responses of all pixels across the imaging sensor to a uniform polarization signature (or target). Since a target with a uniform polarization signature is imaged across the entire sensor array, i.e. the spatial frequency of the stimulus target is zero, the IFOV errors are zero. Hence, the FPN error can be evaluated separately from the IFOV errors. The FPN is due to manufacturing variations across the focal plane in the photodetectors, read-out amplifiers within each pixel or at the chip level [39, 40], and the polarization nanowire filters [41–43]. Techniques such as correlated double sampling (CDS) or difference double sampling (DDS) effectively correct the FPN components due to photodetector and amplifier variations [39, 40].
Correcting for FPN due to nanowire filter variations has not been previously investigated, to the best of the authors’ knowledge. This source of error deserves special attention because of the nominal size of the nanowires. Even as nanotechnology matures, variations from the nominal values of the thickness (140 nm), width (70 nm) and pitch (140 nm) of the nanowires can easily reach 5 nm to 20 nm [41, 42]. These variations can have a major impact on the collective optical response of the group of nanowires comprising a single pixelated polarization filter. Spatial variations in the optical response of 20% between polarization filters have been previously reported for a DoFP nanowire polarimeter. Reducing the variations of the nanowires through more advanced nanofabrication techniques can lead to prohibitively expensive filters and imaging devices. Hence, a computational method for correcting optical variations between pixelated polarization filters due to variations at the nanoscale should be carefully explored, and is the prime motivation for this work.
In this paper, we describe two calibration techniques tailored to mitigate variations in the optical response between pixelated polarization filters across an imaging array in order to improve the accuracy of the captured polarization information. The first calibration technique assumes that the optical properties of each pixel in the imaging array are independent from its neighbors, while the second calibration technique utilizes a small neighborhood of pixels in order to calibrate their collective optical responses. The rest of the paper is organized as follows. In Section 2, we describe our mathematical model for the division-of-focal-plane polarimeter, which is the basis for the two calibration methods. Section 3 describes in detail the two calibration methods used to minimize FPN. Section 4 presents detailed experimental measurements which are used to assess the accuracy of the two calibration methods. Real-life images obtained from a DoFP polarimeter and corrected with the two calibration methods are also presented in this section. Concluding remarks are presented in Section 5.
2. DoFP polarimeter model
The DoFP polarimeter is an imaging sensor composed of polarization sensitive pixels. Each polarization pixel is comprised of a polarization filter followed by a photodetector. The pitch of the pixelated polarization filter is matched to the pitch of the photo pixel in the imaging array and is between 2 μm and 10 μm for typical CMOS or CCD image sensors. The pixels are tiled into small non-overlapping neighborhoods called “super-pixels.” Figure 1 shows a typical linear DoFP polarimeter capable of measuring the first three Stokes parameters. The pixels are grouped into 2 by 2 super-pixels and each super-pixel has 0°, 45°, 90°, and 135° linear polarization filters.
The optical behavior of each pixel is modeled as the composition of the pixel’s conversion function acting on the intensity of the light transmitted through the polarization filter. Using Mueller calculus, the light transmitted by the polarization filter is represented by the Stokes vector s′, which is equal to the product of the filter’s Mueller matrix M and the incident Stokes vector s, as shown in Eq. (1): s′ = M s.
For example, the Mueller matrix for a linear polarization filter with electric field amplitude transmission coefficients tx and ty along the filter’s x and y axes, respectively, and rotated by an angle θ is shown in Eq. (2). The sine and cosine of 2θ in Eq. (2) are shortened to s2θ and c2θ, respectively. These parameters encapsulate all of the filter non-idealities due to manufacturing flaws in wire thickness, width, and pitch.
The intensity component of the filtered Stokes vector is then passed to the conversion function of the underlying pixel. The conversion function represents the conversion of photons to digital values by the image sensor. Since we are interested in the FPN components introduced by the nanowire polarization filters, we assume that the conversion function is linear or has already been linearized, and has no temporal or quantization noise. The resulting pixel model is shown in Eq. (3).
In Eq. (3), i is the intensity value measured by the pixel, and the g and d parameters are the gain and dark offset of the pixel, respectively. The row vector [1 0 0 0] selects the intensity component of the incident light. This further simplifies Eq. (3) by combining the pixel’s gain and the first row of the filter’s Mueller matrix into the row vector a, the polarization pixel’s analysis vector, so that i = a · s + d.
In a super-pixel configuration, the responses of the constituent pixels are stacked into a column vector i as shown in Eq. (4).
The individual analysis vectors and dark offsets for each of the pixels in the super-pixel are combined into an analysis matrix, A, and a dark offset vector, d. This model assumes that either the incident illumination is uniform across the super-pixel or that all of the constituent pixels are co-located.
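The pixel and super-pixel models of Eqs. (3) and (4) can be sketched numerically as follows. This is a minimal numpy illustration, not code from the paper; the function names are ours, and the example assumes ideal filters with full transmission.

```python
import numpy as np

def pixel_response(a, d, s):
    """Single-pixel model of Eq. (3): intensity = a . s + d,
    where a is the pixel's analysis vector and d its dark offset."""
    return float(np.dot(a, s)) + d

def superpixel_response(A, d, s):
    """Super-pixel model of Eq. (4): stacked responses i = A s + d."""
    return A @ s + d

def ideal_analysis_vector(theta):
    """Ideal analysis vector for a linear polarization filter at angle
    theta (radians): 0.5 * [1, cos 2t, sin 2t, 0]."""
    return 0.5 * np.array([1.0, np.cos(2 * theta), np.sin(2 * theta), 0.0])

# A 2x2 super-pixel with 0/45/90/135 degree filters viewing fully
# horizontally polarized light, s = [1, 1, 0, 0]^T.
A = np.vstack([ideal_analysis_vector(np.deg2rad(t)) for t in (0, 45, 90, 135)])
s = np.array([1.0, 1.0, 0.0, 0.0])
i = superpixel_response(A, np.zeros(4), s)  # -> [1.0, 0.5, 0.0, 0.5]
```

As expected, the 0° pixel passes the full intensity, the 90° pixel blocks it, and the 45°/135° pixels each measure half.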
3. Calibration techniques
The purpose of a polarimeter calibration technique is to transform the non-ideal response of the pixelated filters into an ideal response, independently of the incident Stokes vector. In other words, an effective calibration technique is a function that transforms polarimeter measurements so that they are as close to the ideal as possible, without reconstructing the incident Stokes vector s. Equations (5) and (6) express this concept in terms of finding a calibration function, f for the single-pixel model and F for the super-pixel model, that minimizes the squared error between the calibrated response and the ideal response.
In order to perform these minimizations, the ideal responses of the polarization pixel and the calibration functions must be specified. In general, the ideal dark offset for all pixels is zero. For pixels with linear polarization filters, the ideal analysis vector is the first row of the Mueller matrix given in Eq. (2), with parameters tx = 1, ty = 0, and θ equal to the rotation of the transmission axis of the filter. Equations (7) and (8) show the ideal pixel and super-pixel responses, respectively.
Finding an appropriate form for f and F proceeds as follows. Since both the ideal and non-ideal responses of the pixelated filters are linear, a linear transformation is used to convert one response into the other. The two functions are shown in Eqs. (9) and (10).
In Eq. (9), the response of a single pixel, i, is first compensated for the dark offset by subtracting b, followed by the application of a scalar gain, g. The super-pixel calibration presented in Eq. (10) utilizes the vector b to compensate the dark offsets and the matrix G to correct the gains of all pixels in the super-pixel.
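The two calibration functions have the affine forms described above, i.e. f(i) = g(i − b) and F(i) = G(i − b). A minimal numpy sketch, with hypothetical gain and offset values:

```python
import numpy as np

def calibrate_single(i, g, b):
    """Eq. (9): subtract the calibration dark offset b, then apply the
    scalar gain g."""
    return g * (i - b)

def calibrate_super(i, G, b):
    """Eq. (10): subtract the offset vector b, then apply the gain
    matrix G to the whole super-pixel response."""
    return G @ (i - b)

# Example: a super-pixel response with per-pixel gain/offset errors.
i_meas = np.array([2.1, 1.05, 0.12, 1.02])
G = np.diag([0.95, 0.95, 1.0, 0.98])   # hypothetical gain matrix
b = np.array([0.1, 0.05, 0.12, 0.02])  # hypothetical dark offsets
i_cal = calibrate_super(i_meas, G, b)  # -> [1.9, 0.95, 0.0, 0.98]
```

Note that a diagonal G reduces the super-pixel calibration to four independent single-pixel calibrations; the extra power of Eq. (10) comes from its off-diagonal terms, as Section 3.3 shows.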
Both minimizations are convex and can be completed by taking the partial derivatives with respect to the calibration gains and calibration dark offsets, setting them to zero, and solving for the parameters. Sections 3.2 and 3.3 describe such solutions to Eqs. (11) and (12). It is also possible to solve the minimizations by supplying various known Stokes vectors and the corresponding pixel or super-pixel responses to an ordinary least squares solver. The apparatus described in Section 4 could generate data for that purpose.
3.2 Single-pixel solution
A solution for the single-pixel case is presented in Eq. (13).
The calibration dark offset is set to the pixel’s dark offset, d, and the calibration gain is the ratio of two projections. When substituted back into Eq. (9), we see that the dark offsets cancel, and the calibration gain scales the projection of s onto a to the length of the projection of s onto the ideal analysis vector â. This results in an ideal response of the single polarization pixel, as seen in Eq. (14).
The dependence of g on s in Eq. (13) is a problem: a single value of g will not be valid for all values of s. One solution to this problem is to assume that a is a scalar multiple of â, or in other words, that both point in the same direction in Stokes space. This is equivalent to assuming that all of the filter parameters are ideal except for its transmission coefficient. This assumption makes g constant, and Eq. (13) simplifies to the expressions of Eq. (15), which can be used in practice.
Equation (16) is obtained by substituting the results from Eq. (15) into Eq. (9). The calibration dark offset still cancels the pixel’s dark offset, but the gain now simply rescales the vector a to the same length as the vector â. If the two vectors point in different directions, this method will not be able to completely calibrate the response.
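Under the same-direction assumption, the gain of Eq. (15) can be computed as the ratio of projections g = (â · â)/(â · a), which reduces to the ratio of lengths when a is a scalar multiple of â. A short numpy sketch (our notation, with a hypothetical 20% transmission loss):

```python
import numpy as np

def single_pixel_params(a, a_ideal, d):
    """Eq. (15) sketch: calibration offset b = d; gain g rescales a to
    the length of a_ideal (exact only when a is parallel to a_ideal)."""
    g = np.dot(a_ideal, a_ideal) / np.dot(a_ideal, a)
    return g, d

a_ideal = 0.5 * np.array([1.0, 1.0, 0.0, 0.0])  # ideal 0-degree pixel
a = 0.8 * a_ideal                               # transmission loss only
g, b = single_pixel_params(a, a_ideal, d=3.0)   # g = 1.25, b = 3.0

s = np.array([1.0, 0.3, 0.2, 0.0])
i_meas = np.dot(a, s) + 3.0
i_cal = g * (i_meas - b)   # matches the ideal response a_ideal . s
```

When a is not parallel to â (e.g. a rotated filter), the same g still equalizes lengths, but i_cal no longer equals â · s for every s, which is exactly the residual error discussed above.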
3.3 Super-pixel solution
A solution for the super-pixel case is presented in Eq. (17).
Here A⁺ indicates the pseudo-inverse of A, which is computed such that the value for G in Eq. (17) satisfies GA = Â. As long as the pseudo-inverse of A exists, G will transform all of the constituent analysis vectors of A, by scaling and rotating, exactly into those of Â. Equation (18) is obtained by substituting the results from Eq. (17) into Eq. (10). Equation (18) shows that the super-pixel based approach perfectly calibrates the response, as long as the model’s assumptions hold.
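The super-pixel solution can be sketched with numpy's pseudo-inverse. This is an illustration under our notation (G = Â A⁺, b = d), with a hypothetical non-ideal array that has reduced transmission and a 5° rotational offset, as observed for the real sensor in Section 4:

```python
import numpy as np

def superpixel_params(A, A_ideal, d):
    """Eq. (17) sketch: offset b = d and gain matrix G = A_ideal A+,
    so that G A = A_ideal."""
    return A_ideal @ np.linalg.pinv(A), d

def ideal_row(theta_deg):
    t = np.deg2rad(theta_deg)
    return 0.5 * np.array([1.0, np.cos(2 * t), np.sin(2 * t), 0.0])

A_ideal = np.vstack([ideal_row(t) for t in (0, 45, 90, 135)])
# Non-ideal array: 90% transmission and a 5-degree rotational offset.
A = np.vstack([0.9 * ideal_row(t + 5) for t in (0, 45, 90, 135)])
d = np.array([1.0, -2.0, 0.5, 0.0])

G, b = superpixel_params(A, A_ideal, d)
s = np.array([1.0, 0.4, -0.1, 0.0])
i_cal = G @ (A @ s + d - b)  # equals the ideal response A_ideal @ s
```

Unlike the single-pixel gain, G corrects the rotational offset exactly, because the rows of Â lie in the row space of A and the pseudo-inverse maps that subspace exactly.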
4. Evaluation with visible-spectrum linear DoFP polarimeter
4.1 Experimental setup
The two calibration functions presented in Eqs. (9) and (10) are evaluated on data collected from the apparatus shown in Fig. 2. A Sylvania EHJ64655HLX, 250 W, tungsten-halogen bulb provides light for the system. The light passes through Edmund Optics’ Heat Absorbing Glass to block unwanted IR components, then optionally through one of three narrow-band spectral filters: Thorlabs FB450-10, Newport 10LF10-515, or Thorlabs FB600-10, which pass 450, 515, and 600 nm light, respectively. An adjustable shutter controls the amount of light that passes into a 4” integrating sphere, which produces nominally uniform unpolarized light at its outputs. A Thorlabs S120VC calibrated photodiode placed at one output port of the integrating sphere measures relative light intensities. Light from the other output port passes through a Newport 20LP-VIS-B linear polarizer mounted on a motorized rotation stage, and finally passes into the visible-spectrum, linear, DoFP polarimeter described in [32, 44]. The apparatus generates fully linearly polarized light with arbitrary intensity and polarization angle. It can be switched between “white” light directly from the lamp or one of the several narrow-band spectra provided by the spectral filters. Since the polarimeter being used for evaluation only measures linear polarization, there is no need for circularly polarizing optics. The capability to control the degree of linear polarization will be added in future work.
Data were generated with unfiltered, 450 nm, 515 nm, and 600 nm light. For each spectrum, 100 images at 6 different intensities and 36 polarization angles were collected from a 300 × 300 pixel (2.2 mm × 2.2 mm) sub-region of the polarimeter. The small sub-region was selected to maximize the uniformity of the incident light and to limit the amount of data collected. The coefficient of variation of a non-polarimetric image taken over the same area was 0.0106, which will contribute to the final reconstruction errors. Each intensity and polarization angle was sampled 100 times to reduce the effects of temporal noise on the final results. The 6 intensities followed a roughly exponential sequence based on the dynamic range of the polarimeter. The maximum intensity was set as high as possible without saturating any pixels at any angle of the polarizer. The remaining intensities were set at 50%, 25%, 10%, 5%, and 2.5% of the maximum intensity for that wavelength. This procedure minimized the effects of wavelength-dependent variations of the photodiode’s quantum efficiency. The 36 polarization angles were uniformly distributed every 5° from 0° to 180°, which covers the full range of linear polarization angles. The output of the integrating sphere was 3% linearly polarized, which is easily compensated for as shown in the following section. Only the images taken with white (unfiltered) light and polarization angles every 20° were used as training data to determine the calibration function parameters. The remainder of the data was used for testing the performance of the functions.
4.2 Determining model and calibration parameters
The optimal gains and offsets for the single-pixel and super-pixel calibration functions, Eqs. (15) and (17) respectively, are computed from the analysis vector, a, and dark offset, d, for each pixel. These parameters can be determined from the training data samples collected for each pixel as shown in Eqs. (19) and (20). The values used for each training Stokes vector must include all of the polarization effects of the apparatus, including the partial polarization of the output of the integrating sphere.
Equation (20) was evaluated for each pixel using a least-squares solver. The coefficients of determination, R², for all of the pixels are above 99.73% and have a median of 99.93%. This indicates that the model explains most of the variation in the training data. The large number of samples essentially guarantees that the results are statistically significant.
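A per-pixel fit of this kind can be sketched as one linear least-squares problem in the unknowns (a, d). The numpy example below is illustrative only: the training Stokes vectors, the "true" pixel parameters, and the noise level are hypothetical stand-ins for the measured data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training stimuli: fully linearly polarized light at
# several angles and intensities, as in the Section 4.1 procedure.
angles = np.deg2rad(np.arange(0, 180, 20))
stokes = []
for inten in (1.0, 0.5, 0.25):
    for t in angles:
        stokes.append(inten * np.array([1.0, np.cos(2 * t),
                                        np.sin(2 * t), 0.0]))
S = np.array(stokes)

a_true = np.array([0.45, 0.41, 0.08, 0.0])   # slightly non-ideal pixel
d_true = -1.5                                # hypothetical dark offset
i_meas = S @ a_true + d_true + 0.001 * rng.standard_normal(len(S))

# Fit i = a . s + d as a single linear system in [a, d].
X = np.hstack([S, np.ones((len(S), 1))])
coef, *_ = np.linalg.lstsq(X, i_meas, rcond=None)
a_fit, d_fit = coef[:4], coef[4]
```

Since all stimuli are linearly polarized, the fourth (circular) column of S is zero and that component of a is unidentifiable; `lstsq` returns the minimum-norm solution, which sets it to zero, consistent with a linear-only polarimeter.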
The pixel dark offsets are summarized in Fig. 3. The dark offsets are small compared to the dynamic range of the polarimeter (maximum digital value of 4095), but predominantly negative. This is not a problem, but indicates that the dark offsets are being over-corrected within the polarimeter itself. The dark offsets are set by the camera manufacturer and cannot be reprogrammed.
Figure 4 displays the measured pixel analysis vectors, a. Since these measurements are from pixels with linear polarization filters, the fourth (circular) component of a is always zero and is not included in the figure. The spatial variation of the filter transmissions is about 20% and can be attributed to the variations in the thickness and width of the aluminum nanowires comprising the pixelated polarization filters. The measurements show a constant angular offset of approximately 5° from the ideal, which is most likely due to alignment errors during the interference lithography fabrication step of the nanowire polarization filters. Most of the filters have diattenuations of about 0.9, which corresponds to an extinction ratio of about 26 dB. This is less than the values previously reported for the polarimeter and is attributed to the increased optical cross-talk due to the lack of collimation in this work’s optical apparatus. It is worth noting that any cross-talk effects are measured as part of the pixel model parameters. However, this means that the pixel parameters depend on the incident light beam’s F-number, and that the parameters must be re-measured for each F-number that the polarimeter uses.
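As a sanity check on these figures, a diattenuation of D = 0.9 gives an intensity extinction ratio of (1 + D)/(1 − D) = 19, which matches the quoted ~26 dB when the ratio is expressed on a 20·log10 scale. A minimal check:

```python
import math

def extinction_ratio(D):
    """Intensity extinction ratio Tmax/Tmin for a filter with
    diattenuation D = (Tmax - Tmin)/(Tmax + Tmin)."""
    return (1.0 + D) / (1.0 - D)

er = extinction_ratio(0.9)       # 19.0
er_db = 20.0 * math.log10(er)    # ~25.6, i.e. about 26 dB
```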
Since the analysis vector, a, and dark offset, d, are determined for each pixel, computing the single-pixel and super-pixel calibration function parameters requires solving Eqs. (15) and (17) respectively. In order to illustrate the capabilities of the two calibration functions, the products of the single-pixel and super-pixel calibration gains with the analysis vectors are shown in Figs. 5 and 6, respectively. The single-pixel calibration normalizes the length of each pixel’s a to that of the corresponding â. This results in a drastic decrease in transmission variation to ~2%, but does not correct any errors due to diattenuation or orientation (see Fig. 5). On the other hand, the super-pixel calibration completely transforms the analysis vectors to the ideal vectors and corrects for variations in transmission, diattenuation and orientation between individual pixelated polarization filters across the imaging array. The transmission variations between all pixels in the imaging array are less than 0.1% after the super-pixel calibration, as demonstrated in Fig. 6.
4.3 Test results
The difference between the single- and super-pixel calibrations is also evident when the calibration functions are applied to the test data. Figures 7 and 8 show histograms of the optical responses for the uncalibrated, single-pixel-calibrated, and super-pixel-calibrated methods. The polarimeter is illuminated with linearly polarized white light at an incident angle of 15°. In Fig. 7, the left sub-plot presents the histogram response of the 0°-oriented pixels before and after the two calibration methods are applied. The right sub-plot presents the uncalibrated response of all pixels in the imaging array grouped by filter orientation, i.e. 0, 45, 90 and 135 degrees. The FPN (i.e. spatial variation) of the uncalibrated pixels with 0-degree filters, computed as the ratio of the standard deviation to the mean value, is 11.6%. The FPN of the CCD imaging array without the polarization filters is 0.5%, which was measured before depositing the nanowire polarization filters on the surface of the sensor.
The single-pixel calibration and super-pixel calibration reduce the FPN for the zero-degree-oriented pixels to 0.15% and 0.11%, respectively (see Fig. 7). The large reduction in spatial variation across all pixels in the imaging array after the two calibration methods are applied is evident from Fig. 8. The variations in the spatial response for the four groups of pixels are reduced from ~11% down to ~0.1% for the two calibration methods. The super-pixel calibration method also adjusts the transmission of the filters to their nominal value. This is critical in order to minimize the error in the computed Stokes parameters, degree of linear polarization and angle of polarization.
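The FPN figure of merit used here is a coefficient of variation over like-oriented pixels viewing a uniform target. A short numpy sketch, with hypothetical response values:

```python
import numpy as np

def fixed_pattern_noise(responses):
    """FPN as used in Section 4.3: standard deviation over mean of the
    responses of like-oriented pixels to a spatially uniform stimulus."""
    responses = np.asarray(responses, dtype=float)
    return responses.std() / responses.mean()

# Hypothetical 0-degree pixel responses to a uniform target, before
# and after calibration (12-bit digital values).
uncalibrated = np.array([1000.0, 1100.0, 900.0, 1250.0, 820.0])
calibrated = np.array([1001.0, 999.5, 1000.2, 998.9, 1000.4])
```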
Figures 9 and 10 examine the responses of the two calibration methods, in addition to the uncalibrated pixels’ responses, when the polarimeter is illuminated with linearly polarized white light and the angle of linear polarization is swept from 0° to 180°.
The uncalibrated responses follow Malus’ law but the amplitudes of the squared cosines between the four pixelated polarization filters vary widely. There is a constant offset from zero, and the peaks do not occur at their nominal angles. Furthermore, the spatial variation for each incident angle across the imaging array is relatively high as demonstrated by the histogram plots in Fig. 7. The single-pixel calibration makes the amplitudes uniform between the four filters, but does not correct the problem that the peak values for individual filters do not occur at the correct angle. The super-pixel calibration corrects the amplitude uniformity between the four pixel responses and aligns the maximum responses of each pixel to the appropriate angle. The re-alignment of the sinusoids such that they exhibit maximum values at the nominal angles is critical for the accuracy of the reconstructed angle and degree of linear polarization.
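The reconstruction that depends on this re-alignment is the standard linear Stokes recovery from the four super-pixel responses. A numpy sketch assuming ideal (calibrated) analysis vectors; this is the textbook reconstruction, not code from the paper:

```python
import numpy as np

def reconstruct(i0, i45, i90, i135):
    """Recover S0, DoLP and AoP from the four responses of an ideal
    2x2 super-pixel, where i_t = 0.5*(S0 + S1*cos2t + S2*sin2t)."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.hypot(s1, s2) / s0
    aop = 0.5 * np.degrees(np.arctan2(s2, s1))  # degrees
    return s0, dolp, aop

# Fully polarized light at 30 degrees: S = [1, cos60, sin60, 0].
t = np.deg2rad(30.0)
s = np.array([1.0, np.cos(2 * t), np.sin(2 * t), 0.0])
i = [0.5 * (s[0] + s[1] * np.cos(2 * th) + s[2] * np.sin(2 * th))
     for th in np.deg2rad([0.0, 45.0, 90.0, 135.0])]
s0, dolp, aop = reconstruct(*i)  # -> 1.0, 1.0, 30.0
```

Because AoP is half the phase of (S1, S2), a sinusoid whose peak is shifted from its nominal angle (the uncalibrated and single-pixel cases) biases the recovered angle directly, which is why the super-pixel re-alignment matters.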
Figures 11, 12, and 13 show the RMS reconstruction error of the incident intensity, degree, and angle of polarization, respectively, as the incident angle of polarization and intensity are swept through their ranges.
The reconstruction errors for the uncalibrated pixels’ responses (in terms of S0, DoLP and AoP) show a large dependence on the incident angle of polarization. The maximum RMS error for DoLP at maximum and minimum illumination is ~20% and ~35% respectively. This is a result of the mismatched amplitudes of the four super-pixel responses as indicated in Fig. 9.
The single-pixel calibration method removes the RMS reconstruction error dependence on the incident polarization angle and the reconstruction error is constant for both light intensities. The maximum RMS error after employing single-pixel calibration for DoLP at maximum and minimum illumination is ~10% and ~32% respectively.
The super-pixel calibration method further reduces the RMS reconstruction error: for light intensities at 100% of the dynamic range, the error is decreased by a factor of ~10 compared to the single-pixel calibration method. The maximum RMS error after employing super-pixel calibration for DoLP at maximum and minimum illumination is ~0.5% and ~26% respectively. The only deviation from this behavior is the reconstruction of the total intensity, i.e. S0, at low light illuminations; in this case the errors after employing either single-pixel or super-pixel calibration are approximately equal.
The RMS reconstruction errors do not reach zero for several reasons. The non-uniformity of approximately 1% in the flat-field produced by the apparatus limits the accuracy of the pixel model parameter measurements, which in turn produce errors in the calibration parameters. Additionally, the image sensor’s specifications indicate a maximum non-linearity of 2% in pixel photo-responses, which is not included in our model. Finally, we have not included any noise sources in the model; both photon shot noise and the image sensor’s readout noise cause significant reductions in SNR at low light intensities. Using approximate calculations based on figures from the image sensor’s specifications, the photon shot noise accounts for about 84% of the noise power and readout noise for about 16% of the noise power at 10% illumination. A thorough noise analysis and error propagation would be required to determine how much each of these unaddressed error sources contributes to the final reconstruction errors.
Figure 14 shows the RMS reconstruction error for the single-pixel (left sub-plot) and super-pixel (right sub-plot) calibrations for three single-wavelength test data sets. Since the transmission coefficients, tx and ty, for the orthogonal electric field components in Eq. (2) and the quantum efficiency of the image sensor are dependent on wavelength, the RMS errors are a function of wavelength.
Since the extinction ratios are around 10 at 450 nm, 30 at 550 nm and 38 at 650 nm, the RMS error is highest at 450 nm, at around 6% for light intensities above 10% of the imager’s dynamic range. The RMS error at the green and red wavelengths is around 4% for the same intensity levels. Although the analysis vectors for each pixel in the imaging array were obtained with broad-band white light, the RMS errors for the reconstructed intensity, i.e. S0, are similar across the entire visible spectrum. Similar results were obtained for the RMS errors for angle and degree of linear polarization and are not shown for brevity.
4.4 Calibration on real life images from division of focal plane polarimeter
Real-life images obtained from a division of focal plane polarimeter on a rainy day are presented in Fig. 15. The first column of images presents intensity, S0, the second column of images presents the degree of linear polarization and the third column presents the angle of polarization. False color is used to depict the degree of linear polarization, where blue color represents low degree of linear polarization and red color represents high degree of linear polarization.
Uncalibrated images are presented in the first row of Fig. 15 and contain large deviations from the expected values. For example, the angle of polarization for the road is expected to be zero degrees because the surface is horizontal and the incident illumination is unpolarized due to the cloudy/rainy weather. The angle of polarization for the road obtained from the uncalibrated image is around 165 degrees. The degree of linear polarization image has a pronounced gradient in the ~45 degree orientation, and light blue lines can be observed throughout the image. This is because the 45-degree pixels had higher transmission values, as well as higher spatial variations, compared to the other three pixel groups. The forest in the background is barely visible in either the angle or degree of linear polarization image.
The images in the second row are obtained with the single-pixel calibration method. In this set of images, the uniformity of the angle and degree of linear polarization images is higher compared to the uncalibrated images. For example, the polarization signatures across the road are more uniform compared to the first row of images. Nevertheless, the angle of polarization for the road is around 15 degrees, which is incorrect. The single-pixel calibration does not correct the nominal response of individual pixels, which leads to large errors in the angle and degree of linear polarization.
The images in the third row are obtained with the super-pixel calibration method. In these images, the angle of polarization for the road is zero degrees, as expected, and the uniformity of the angle of polarization across the road is further improved. Due to the curved shape of the incoming car’s windshield, the angle of polarization image has a gradient there; this gradient is more pronounced in the super-pixel calibrated image than in the single-pixel calibrated image.
In this paper, we have presented two calibration methods for division-of-focal-plane polarimeters. Typical division-of-focal-plane polarimeters for the visible spectrum employ nanowires in order to construct linear polarization filters. Mismatches in the size of the nanowires will lead to optical variations at the macro scale and we outline two calibration methods which mitigate these effects. Both methods were developed from the same linear model for polarization pixels, but one treats each pixel independently, and the other treats super-pixel groups together. We showed that the super-pixel approach is mathematically more powerful than the single-pixel approach and can correct both the typical photodetector gain and offset non-idealities in addition to polarization sensitive flaws such as non-ideal filter orientations and non-ideal filter diattenuation coefficients. The single-pixel approach can only correct for non-ideal gains and offsets.
The measurements of our visible-spectrum linear DoFP polarimeter show that a majority of the non-uniformity between pixels is in their gains and offsets, but a significant amount of variation occurs in the model parameters that the single-pixel approach cannot correct, including a constant rotational offset and moderate variations in filter diattenuations. Thus we have shown that calibrating each pixel independently reduces DoLP reconstruction errors from 12% to 10% for moderate light illuminations. Calibrating each super-pixel as a unit reduces the RMSE to approximately 1%. Similar reductions in error occur for reconstructing the intensity and AoP images. These figures indicate that the super-pixel calibration method is worth the extra computational effort, but there are still some unaddressed sources of error. These unaddressed sources of error include the image sensor’s non-linear response, temporal noise (including photon shot noise and readout noise), and the non-uniformities in the flat-field that the calibration apparatus produces.
Finally, we showed that though the calibration parameters were determined using a tungsten-halogen lamp for illumination with only an IR blocking filter in place, they performed well across the visible spectral range of the polarimeter. It is also worth noting that the optical properties of the polarimeter are stable enough that the same calibration parameters have been used with no measurable difference for about two years during the development of this work. The improvements in the quality of real-life images obtained from a division of focal plane polarimeter for the visible spectrum after applying the two calibration methods are also demonstrated in this paper.
This work was supported by National Science Foundation grant number OCE-1130897, and Air Force Office of Scientific Research grant numbers FA9550-10-1-0121 and FA9550-12-1-0321.
References and links
1. D. H. Goldstein, Polarized Light, 3rd ed. (CRC Press, 2011).
3. S. Shwartz, E. Namer, and Y. Y. Schechner, “Blind Haze Separation,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2006), pp. 1984–1991. [CrossRef]
5. J. L. Deuzé, F. M. Bréon, C. Devaux, P. Goloub, M. Herman, B. Lafrance, F. Maignan, A. Marchand, F. Nadal, G. Perry, and D. Tanré, “Remote sensing of aerosols over land surfaces from POLDER-ADEOS-1 polarized measurements,” J. Geophys. Res., D, Atmospheres 106(D5), 4913–4926 (2001). [CrossRef]
6. E. Puttonen, J. Suomalainen, T. Hakala, and J. Peltoniemi, “Measurement of Reflectance Properties of Asphalt Surfaces and Their Usability as Reference Targets for Aerial Photos,” IEEE Trans. Geosci. Remote Sens. 47(7), 2330–2339 (2009). [CrossRef]
9. T. W. Cronin, N. Shashar, R. L. Caldwell, J. Marshall, A. G. Cheroske, and T.-H. Chiou, “Polarization Vision and Its Role in Biological Signaling,” Integr. Comp. Biol. 43(4), 549–558 (2003). [CrossRef] [PubMed]
12. G. Horváth and D. Varjú, Polarized light in animal vision: polarization patterns in nature (Springer, 2004).
13. C. Paddock, T. Youngs, E. Eriksen, and R. Boyce, “Validation of wall thickness estimates obtained with polarized light microscopy using multiple fluorochrome labels: correlation with erosion depth estimates obtained by lamellar counting,” Bone 16(3), 381–383 (1995). [CrossRef] [PubMed]
14. P. B. Canham, H. M. Finlay, J. G. Dixon, and S. E. Ferguson, “Layered collagen fabric of cerebral aneurysms quantitatively assessed by the universal stage and polarized light microscopy,” Anat. Rec. 231(4), 579–592 (1991). [CrossRef] [PubMed]
15. E. Salomatina-Motts, V. Neel, and A. Yaroslavskaya, “Multimodal polarization system for imaging skin cancer,” Opt. Spectrosc. 107(6), 884–890 (2009). [CrossRef]
16. M. Anastasiadou, A. De Martino, D. Clement, F. Liège, B. Laude-Boulesteix, N. Quang, J. Dreyfuss, B. Huynh, A. Nazac, L. Schwartz, and H. Cohen, “Polarimetric imaging for the diagnosis of cervical cancer,” Phys. Status Solidi C 5(5), 1423–1426 (2008). [CrossRef]
17. Y. Liu, T. York, W. Akers, G. Sudlow, V. Gruev, and S. Achilefu, “Complementary fluorescence-polarization microscopy using division-of-focal-plane polarization imaging sensor,” J. Biomed. Opt. 17(11), 116001 (2012). [CrossRef] [PubMed]
18. V. V. Tuchin, L. V. Wang, and D. A. Zimnyakov, Optical polarization in biomedical applications (Springer, 2006).
19. R. Walraven, “Polarization imagery,” Opt. Eng. 20(1), 200114 (1981). [CrossRef]
22. C. A. Farlow, D. B. Chenault, J. L. Pezzaniti, K. D. Spradley, and M. G. Gulley, “Imaging polarimeter development and applications,” in Proc. SPIE (2002), pp. 118–125.
23. J. D. Barter, P. H. Lee, H. Thompson, Jr., and T. Schneider, “Stokes parameter imaging of scattering surfaces,” in Optical Science, Engineering and Instrumentation ’97 (International Society for Optics and Photonics, 1997), pp. 314–320.
24. M. W. Kudenov, J. L. Pezzaniti, and G. R. Gerhart, “Microbolometer-infrared imaging Stokes polarimeter,” Opt. Eng. 48, 063201 (2009).
26. G. P. Nordin, J. T. Meier, P. C. Deguzman, and M. W. Jones, “Diffractive optical element for Stokes vector measurement with a focal plane array,” in SPIE’s International Symposium on Optical Science, Engineering, and Instrumentation (International Society for Optics and Photonics, 1999), pp. 169–177. [CrossRef]
27. M. Sarkar, D. San Segundo Bello, C. Van Hoof, and A. Theuwissen, “Integrated polarization analyzing CMOS image sensor for material classification,” IEEE Sens. J. 11(8), 1692–1703 (2011). [CrossRef]
28. J. S. Tyo, “Hybrid division of aperture/division of a focal-plane polarimeter for real-time polarization imagery without an instantaneous field-of-view error,” Opt. Lett. 31(20), 2984–2986 (2006). [CrossRef] [PubMed]
30. T. Tokuda, S. Sato, H. Yamada, K. Sasagawa, and J. Ohta, “Polarisation-analysing CMOS photosensor with monolithically embedded wire grid polariser,” Electron. Lett. 45(4), 228–230 (2009). [CrossRef]
35. G. Myhre, W.-L. Hsu, A. Peinado, C. LaCasse, N. Brock, R. A. Chipman, and S. Pau, “Liquid crystal polymer full-stokes division of focal plane polarimeter,” Opt. Express 20(25), 27393–27409 (2012). [CrossRef] [PubMed]
36. J. S. Tyo, C. F. LaCasse, and B. M. Ratliff, “Total elimination of sampling errors in polarization imagery obtained with integrated microgrid polarimeters,” Opt. Lett. 34(20), 3187–3189 (2009). [CrossRef] [PubMed]
38. X. Xu, M. Kulkarni, A. Nehorai, and V. Gruev, “A correlation-based interpolation algorithm for division-of-focal-plane polarization sensors,” Proc. SPIE 8364, 83640L (2012). [CrossRef]
39. A. El Gamal, B. A. Fowler, H. Min, and X. Liu, “Modeling and estimation of FPN components in CMOS image sensors,” Proc. SPIE 3301, 168–177 (1998). [CrossRef]
40. V. Gruev, Z. Yang, J. Van der Spiegel, and R. Etienne-Cummings, “Current mode image sensor with two transistors per pixel,” IEEE Trans. Circuits Syst. I, Regul. Pap. 57, 1154–1165 (2010). [CrossRef]
42. J. J. Wang, F. Walters, X. Liu, P. Sciortino, and X. Deng, “High-performance, large area, deep ultraviolet to infrared polarizers based on 40 nm line/78 nm space nanowire grids,” Appl. Phys. Lett. 90, 061104 (2007).
45. “KAI-2020 Image Sensor Device Performance Specification,” (Eastman Kodak Company, 2010).