Current division-of-focal-plane polarization imaging sensors can perceive intensity and polarization in real time with high spatial resolution, but are oblivious to spectral information. We present the design of such a sensor, which is also spectrally selective in the visible regime. We describe its extensive spectral and polarimetric characterization. The sensor has a pixel pitch of 5 µm and an imaging array of 168 by 256 elements. Each element comprises spectrally sensitive vertically stacked photodetectors integrated with a 140 nm pitch nanowire linear polarizer. The sensor has a maximum measured SNR of 45 dB, extinction ratio of ~3.5, QE of 12%, and linearity error of 1% in the green channel. We present sample spectral-polarization images.
©2012 Optical Society of America
To the human eye, spectral information in a scene is distinguishable as color, and light intensity is perceived as brightness. However, the human eye is blind to polarization, the third aspect of light, which describes the orientation of its plane of oscillation and thus offers useful additional information. Indeed, knowledge of polarization has been beneficial in diverse applications, such as the identification of materials, non-contact fingerprint detection, image dehazing, and recovery of underwater visibility. Integrated spectral and polarization imaging promises enhanced, wide-ranging insight, particularly for applications in remote sensing, astronomy, non-invasive medical procedures, and computer vision.
To this end, various spectral-polarimetric sensors have been reported in the literature. For example, division-of-time spectropolarimeters unite a traditional polarimeter with a rotating spectral filter wheel. However, division-of-time systems are only suitable for imaging relatively static scenes, as image acquisition time is long and any motion in the scene will induce unwanted artifacts. Such systems may also have moving parts, which are not suitable for imaging in rugged terrains. Tunable acousto-optic and liquid crystal spectral filters have also been used in the division-of-time spectropolarimeter paradigm, but spectral and polarization information is still not acquired concurrently. The Computed Tomography Imaging Channeled Spectropolarimeter (CTICS) system allows the acquisition of a spectral-polarization “hypercube”: the polarization parameters are captured as a function of the spectrum in a snapshot. However, this system does have drawbacks, such as tradeoffs between spatial and spectral resolution, a bulky optical setup, and heavy computational requirements.
In this paper, we describe a novel spectral-polarization imaging sensor which uses the division-of-focal-plane paradigm [15, 16]. We have designed the sensor by the monolithic integration of pixelated nanowire linear polarization filters with a spectrally selective imaging array. The sensor can therefore simultaneously acquire spectral and polarization information in a scene with high spatial and temporal resolution. Furthermore, both spectral and polarization information is co-registered in hardware by virtue of the imaging sensor architecture. The sensor, pictured in Fig. 1, also has the benefits of being compact, lightweight, and robust.
The spectral and polarimetric capabilities of the integrated sensor have been thoroughly assessed using a carefully designed optoelectronic setup and are presented in this paper. The following metrics are used to evaluate the sensor: quantum efficiency of the vertically stacked photodiodes, linearity of the pixel output voltage, and signal-to-noise ratio (SNR). We gauge the polarization selectivity of the sensor using a metric known as the extinction ratio.
The remaining sections of the paper are organized as follows: Section 2 provides an overview of the design and the underlying theory of the integrated sensor; Section 3 describes the optoelectronic characterization of the sensor; and Section 4 presents real-life images acquired using the sensor. Concluding remarks are presented in Section 5.
2. Theoretical overview of integrated sensor
2.1 Principle of operation of spectral imaging array
The absorption of incident light in a material depends on the material properties, the depth to which the light travels, and the wavelength of the incident light. This behavior is governed by the physical relationship given in Eq. (1):

I(λ, x) = I₀(1 − e^(−α(λ)x)).  (1)

Here, I₀ is the incident light intensity at the surface of the silicon photodetector, and I(λ, x) is the light intensity absorbed within depth x. I(λ, x) has an exponential dependence on both the depth and the absorption coefficient α, which in turn is wavelength (λ)-dependent.
Figure 2 shows the simulated wavelength dependence of the depth to which light is absorbed in silicon. There are three curves in Fig. 2, corresponding to 50%, 70%, and 99% absorption of the incident light across different wavelengths. For instance, 50% of incident light at 550 nm is absorbed within 1 µm of penetration into silicon, 70% within 1.9 µm, and 99% within 7.2 µm. Furthermore, Fig. 2 indicates that longer wavelength light is absorbed deeper in silicon, and shorter wavelength light is absorbed at shallower depths. For example, 99% of incident light at 650 nm is absorbed within 16 µm of the silicon, whereas 99% of incident light at 450 nm is absorbed within 1.8 µm of the silicon.
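As a rough numerical check (ours, not from the paper), these absorption depths follow from the exponential law of Eq. (1). The sketch below backs out an approximate absorption coefficient at 550 nm from the 50%-within-1-µm figure quoted above and reuses it for other absorbed fractions:

```python
import math

def absorption_depth(alpha_per_um, fraction):
    """Depth (in um) at which `fraction` of the incident light has been
    absorbed, from the absorbed fraction 1 - exp(-alpha * x)."""
    return -math.log(1.0 - fraction) / alpha_per_um

# Approximate alpha at 550 nm, backed out from the text:
# 50% absorbed within 1 um  =>  alpha = ln(2) / 1 um
alpha_550 = math.log(2.0)  # ~0.693 per um

print(absorption_depth(alpha_550, 0.50))  # 1.0 um
print(absorption_depth(alpha_550, 0.99))  # ~6.64 um
```

The 99% depth computed this way (~6.6 µm) is close to, but not identical to, the 7.2 µm read off Fig. 2, since the 1 µm input is a rounded figure and α is taken from a single point.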
The above discussion lends itself to the design of the spectral imaging sensor that forms the substrate of the integrated division-of-focal-plane spectral-polarization sensor. Each pixel in the spectral imaging sensor array comprises vertically stacked photodetectors. The fundamental physical principle demonstrated in Fig. 2 helps determine the optimal position and spectral responsivity of these photodetectors.
Figure 3 shows a single pixel of the sensor, which contains stacked photodetectors and the associated readout electronics, such as a source follower transistor, a reset transistor, and an access transistor. The three vertically stacked photodetectors are fabricated by selectively changing the doping of silicon. The silicon wafer substrate, initially positively doped, is used to define all the transistors in the sensor as well as the vertically stacked photodiodes. The first step in the fabrication process is to define a deep n-well region in the p-substrate that serves to capture longer wavelengths of the incident light. In order to do so, the silicon wafer is doped with a high concentration of arsenic atoms; by controlling the doping time and concentration, a 2 µm deep n-well is formed in the p-type silicon substrate. Next, a small region within the n-well region is doped with a high concentration of boron atoms, effectively reversing its polarity in this region. Hence, a p-well region is formed within the n-well region and has a depth of ~0.6 µm. Finally, an n-doped region is formed within the p-well region by doping the silicon with a high concentration of arsenic atoms to a depth of 0.2 µm.
A thermal annealing process follows the alternating doping of the silicon. During this procedure, the dopant atoms diffuse and expand each junction by ~10 nm. Since a monolayer doping technique is used for forming the alternating junctions, a sharp spatial decay of less than 20 nm between junctions is achieved. These small changes in the junction depth can be used to further compute the spectral performance of the junctions as well as to model the electrical crosstalk in the photodiodes.
The vertically alternating regions of silicon doping form three vertically stacked photodiodes. The top photodiode, formed by the n-type and p-well regions, is most sensitive to short wavelengths such as blue light. The middle photodiode, formed by the p-well and n-well regions, is most sensitive to green light, and the bottom photodiode, formed by the n-well and p-substrate regions, is most sensitive to red light. Given their dominant sensitivities, these three photodiodes are referred to as the blue, green, and red channels of the sensor, respectively.
2.2 Nanowire polarization filter array
Pixelated polarization filters are fabricated using a combination of interference lithography and typical microfabrication steps such as deposition, spin coating, UV lithography and reactive ion etching [20, 21]. The pixelated filters are composed of periodic aluminum nanowires with the following dimensions: 70 nm wide, 70 nm high and 140 nm pitch. The orientation of the metallic nanowires determines the transmission axis of the filter.
The pixelated polarization filters are arranged in a super pixel configuration. A super pixel is a 2 by 2 array of polarization pixels, where each pixel has a transmission axis at 0°, 45°, 90°, or 135°. The super pixel pattern is repeated across the entire imaging array, and thus four subsampled intensity arrays are generated, denoted by I0, I45, I90, and I135. A process of interpolation can be applied in order to reconstruct the full intensity arrays [22–24].
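The four subsampled arrays can be extracted from the raw mosaic by simple strided indexing. A minimal NumPy sketch, assuming a hypothetical layout of the four orientations within the 2 by 2 super pixel (the actual layout on the sensor may differ):

```python
import numpy as np

# Stand-in for a raw 168 x 256 mosaic frame from the sensor.
raw = np.arange(168 * 256, dtype=float).reshape(168, 256)

# Assumed layout: 0/45 on even rows, 135/90 on odd rows of each super pixel.
i0   = raw[0::2, 0::2]   # pixels under the 0-degree polarizers
i45  = raw[0::2, 1::2]   # 45-degree polarizers
i135 = raw[1::2, 0::2]   # 135-degree polarizers
i90  = raw[1::2, 1::2]   # 90-degree polarizers

# Each subsampled array has half the resolution in each dimension;
# interpolation can restore full-resolution intensity arrays.
print(i0.shape)  # (84, 128)
```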
Given the intensity through each pixelated polarization filter, the first three Stokes parameters (S0, S1, S2) are obtained. S0, shown in Eq. (2), gives the total light intensity through the super pixel array. S1, shown in Eq. (3), indicates the amount of horizontal or vertical linear polarization, and S2, shown in Eq. (4), gives the amount of linear polarization in the 45° or 135° direction.
Using the Stokes parameters, two quantities that describe the linear polarization of incident light are computed: the angle of polarization (AoP) and the degree of linear polarization (DoLP). The DoLP ranges from 0 to 1 and describes the amount of linear polarization of the incident light. A DoLP of 1 indicates completely linearly polarized light, and a DoLP of 0 indicates no linear polarization. DoLP is computed via Eq. (5): DoLP = √(S1² + S2²)/S0.
The AoP gives the orientation of the plane of oscillation of the incident light. AoP is computed via Eq. (6): AoP = (1/2)·arctan(S2/S1).
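The Stokes, DoLP, and AoP computations can be sketched as follows. The specific forms S0 = I0 + I90, S1 = I0 − I90, and S2 = I45 − I135 are the conventional ones for division-of-focal-plane polarimeters and are assumed here to correspond to Eqs. (2)–(6):

```python
import numpy as np

def stokes(i0, i45, i90, i135):
    """First three Stokes parameters from the four polarizer responses."""
    s0 = i0 + i90       # total intensity
    s1 = i0 - i90       # horizontal vs. vertical linear polarization
    s2 = i45 - i135     # 45-degree vs. 135-degree linear polarization
    return s0, s1, s2

def dolp(s0, s1, s2):
    """Degree of linear polarization, in [0, 1]."""
    return np.sqrt(s1**2 + s2**2) / s0

def aop(s1, s2):
    """Angle of polarization in degrees, in [0, 180)."""
    return np.degrees(0.5 * np.arctan2(s2, s1)) % 180.0

# Fully linearly polarized light at 0 degrees:
s0, s1, s2 = stokes(1.0, 0.5, 0.0, 0.5)
print(dolp(s0, s1, s2))  # 1.0
print(aop(s1, s2))       # 0.0
```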
Figure 4 shows a block diagram of the integrated sensor.
The integrated sensor combines the polarization selectivity of the nanowire polarization filters with the spectral selectivity of the vertically stacked photodiode array. The polarization array filters the incoming light according to its polarization, and each photodiode registers the spectral content of the incoming filtered light in the form of a 10-bit digital intensity value (DV) per channel. Using the intensity values from a neighborhood of 2 by 2 pixels, the AoP and DoLP parameters are computed for each channel. Simultaneously, the sensor records a spectral image of the imaged scene. Because of the architecture of the sensor, a single image concurrently records spectral and polarization information. That is, each pixel in the spectral image has the same instantaneous field-of-view as the corresponding pixel in the polarization image. Therefore, there is no need for offline image registration, which is common in division-of-time and division-of-aperture systems. The array has 168 by 256 spectral and polarization sensitive elements.
3. Optoelectronic characterization of integrated spectral-polarization sensor
3.1.1 Experimental setup for optoelectronic characterization of sensor
The experimental setup shown in Fig. 5 is used to characterize the sensor in terms of quantum efficiency of the three spectral channels, linearity of the three output voltages from the pixel, and SNR across the visible spectrum. A uniform, unpolarized, narrowband source of light of known irradiance is generated for these tests. The sensor is tested without any lens mounted in front of the camera body. A Princeton Instruments SP2150 monochromator in conjunction with a Princeton Instruments TS-428 light source forms a spectrally tunable light source. The output slit of the monochromator is 2 mm wide; therefore, the spectral bandpass (FWHM around the desired wavelength) of the light at the output slit is ~8 nm. The monochromator uses a 1200 grooves/mm ruled grating, blazed at 500 nm. A Thorlabs IS200-4 integrating sphere placed at the output slit of the monochromator ensures that the output narrowband light is unpolarized and uniform. Additionally, a Thorlabs FEL0400 long-pass, or order-sorting, filter with cut-on wavelength at 400 nm eliminates unwanted higher-order outputs from the monochromator. Both the power supply to the light source and the central wavelength of the monochromator are computer-controlled.
By adjusting the light source output power through a feedback mechanism, we ensure that the light striking the sensor has a constant irradiance regardless of the wavelength to which the monochromator has been tuned. The results reported in Subsections 3.1.2 through 3.1.4 are from a 50 by 50 pixel region of the imaging array. This setup is modified for the polarimetric characterization of the sensor, and the relevant setups and experiments are described in Subsections 3.2.1 through 3.2.5.
3.1.2 Quantum efficiency
In this experiment, we confirm and characterize the spectral sensitivity of each channel of the sensor in terms of quantum efficiency (QE). QE is the ratio of the number of electron-hole pairs registered by the photodetector (Ne) to the number of photons at a particular wavelength striking the surface of the photodetector (Nph), as shown in Eq. (7).
The number of photons Nph striking the surface of a photosensitive element of the sensor is given in Eq. (8): Nph = I·A·tint·λ/(hc). Nph is calculated using the incident irradiance I, the incident wavelength λ, the integration time tint of the photodetector, and the area A of the photodiode. A is calculated given that the pixel pitch is 5 µm and the fill factor is 75% (considering microlenses). hc is the product of the Planck constant h and the speed of light c.
For this experiment, the wavelength of the incident light is swept from 400 nm to 700 nm in steps of 10 nm and the digital value (DV) from each channel is recorded. The integration time of the sensor is set to 440 ms. The quantum efficiency is then calculated for each channel for each incident light wavelength using Eq. (9).
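The photon count of Eq. (8) and the QE ratio of Eq. (7) can be sketched as below. The irradiance value is an arbitrary placeholder (the paper's measured value is not reproduced here), and the conversion from recorded digital values to electron counts (Eq. (9)) is omitted:

```python
H = 6.626e-34   # Planck constant (J*s)
C = 2.998e8     # speed of light (m/s)

def photons_incident(irradiance_w_m2, wavelength_m, t_int_s,
                     pixel_pitch_m=5e-6, fill_factor=0.75):
    """Photons striking one photosensitive element, per Eq. (8):
    N_ph = I * A * t_int * lambda / (h * c)."""
    area = (pixel_pitch_m ** 2) * fill_factor
    energy_collected = irradiance_w_m2 * area * t_int_s
    return energy_collected * wavelength_m / (H * C)

def quantum_efficiency(n_electrons, n_photons):
    """QE: electron-hole pairs registered per incident photon."""
    return n_electrons / n_photons

# e.g. 1 W/m^2 (placeholder) of 550 nm light over the 440 ms integration time
n_ph = photons_incident(1.0, 550e-9, 0.440)
print(quantum_efficiency(0.12 * n_ph, n_ph))  # ~0.12, i.e. 12% QE
```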
Figure 6 presents the quantum efficiency thus calculated for the three channels. The red, blue, and green channel responses demonstrate spectral selectivity in agreement with the theoretical analysis in Section 2.1. Each response is maximum in the part of the spectrum where the most incident photons are expected to be absorbed. For example, the blue channel has its highest response in the 420 nm to 460 nm range; the green channel has a peak response in the 520 nm to 560 nm range; and the red channel has a peak response from 650 nm to 700 nm. It is noted that the typical QE response of traditional spectral sensors, which use color filter arrays (CFA), has much steeper cutoffs for each channel and very little overlap [28, 29]. For comparison, quantum efficiency curves for a typical, commercially available Kodak CFA sensor can be seen in [30]. In this CFA sensor, the blue channel response ranges from 370 nm to 550 nm, with a peak quantum efficiency of 41% at 460 nm. The green channel has a response from 460 nm to 620 nm, with a peak quantum efficiency of 36% at 520 nm. The red channel has a spectral response from 580 nm to 750 nm, which is maximum at around 620 nm with a quantum efficiency of 31%. Both CFA and stacked-photodetector spectral sensors require further image processing in order to map the spectral response onto, for example, the sRGB or CIEXYZ color space for accurate color reproduction.
3.1.3 Linearity

For polarization imaging sensors, it is crucial that the transformation of the input light signal to an output voltage signal be linear. Linearity ensures that the image sensor output is an accurate representation of the incoming polarized light wave, so that there is no distortion in the subsequent AoP and DoLP computations.
The experiment described in this section calculates the error in the linearity of the sensor outputs for incident light at 550 nm. The integration time of the sensor is swept between 0 and 800 ms. Therefore, the number of photons integrated by the sensor is swept, as described by Eq. (8). Figure 7(a) through (c) show the output response with respect to the changing integration time for each channel, along with the linear fit and the linear fit equation corresponding to the response. Figure 7(d) through (f) show the residual errors from the linear fit to each curve. The output response of each spectral channel is linear, with small residual errors. The norm of residuals is 20.49, 11.91, and 14.56 on a 10-bit scale for the red, green and blue channels respectively. Thus, the error in linearity is 1-2%. This is a typical figure for present-day imaging sensors [32, 33]. Nonlinearity can be decreased via calibration techniques; this discussion is beyond the scope of this paper.
3.1.4 Signal-to-noise ratio
The previous sections discussed the sensitivity of the imaging sensor: QE describes the spectral response of each channel, and linearity describes the change in the output signal with respect to the input signal. The SNR of the sensor is evaluated next. SNR is the ratio of the desired signal to the unwanted noise at the output. The SNR measurements indicate the lowest light intensity for which the sensor can provide a useful output.
For this experiment, the SNR of each channel is calculated for 550 nm light. The number of integrated electron-hole pairs by the underlying photodetectors is varied by varying the integration time of the sensor. For each integration time, 100 samples are recorded in succession from the sensor. Dividing each pixel’s mean digital value (DV) by its standard deviation across the 100 temporal samples gives the SNR.
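The per-pixel temporal statistic described above can be sketched as follows, using synthetic frames in place of the recorded samples:

```python
import numpy as np

def temporal_snr_db(samples):
    """Per-pixel SNR in dB from repeated frames.

    `samples` has shape (n_frames, rows, cols); SNR is each pixel's
    temporal mean divided by its temporal standard deviation."""
    mean = samples.mean(axis=0)
    std = samples.std(axis=0)
    return 20.0 * np.log10(mean / std)

# Synthetic stand-in for 100 frames of a 50 x 50 pixel region:
rng = np.random.default_rng(0)
frames = rng.normal(loc=500.0, scale=5.0, size=(100, 50, 50))
snr = temporal_snr_db(frames)
print(snr.mean())  # ~40 dB for a signal of 500 DV with noise sigma of 5 DV
```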
The mean SNR across the 50 by 50 pixel array, as a function of incident photons, is shown in Fig. 8(a). The error bars indicate the spatial spread in SNR across the array. The SNR increases with the number of captured incident photons. At lower incident intensity, there is a large proportion of noise in the measured output signal (DV) and therefore a lower SNR. In this region, the noise is primarily due to the thermal noise in the readout electronics, such as the source follower, biasing current sources, access transistor, and reset transistor. At higher light intensities, shot noise is the main source of noise and leads to a comparatively less steep increase in SNR with the incident light intensity.
The maximum measured SNR is around 45 dB. In Fig. 8(a), the green channel reaches a high SNR more quickly than the blue and red channels because it has a higher quantum efficiency at 550 nm, and therefore captures more of the incident light. Figure 8(b) shows the mean SNR from Fig. 8(a) normalized by the quantum efficiency at 550 nm; this plot shows that the SNR as a function of generated electron-hole pairs is very similar for all three channels, irrespective of their differing quantum efficiencies.
3.2.1 Experimental setup for polarimetric characterization of sensor
Figure 9 presents the experimental setup used for the polarimetric characterization of the sensor. The setup and conditions are very similar to those described in Section 3.1. Additionally, a Newport 20-LP-VISB linear polarizer on a Thorlabs PRM1 motorized rotating stage has been introduced into the setup [26, 35]. Thus, in addition to the conditions in Section 3.1, the incident light on the sensor is also linearly polarized, with a computer-controlled rotating linear polarization filter. The results shown in Subsections 3.2.2 through 3.2.4 are from a central 50 by 50 portion of the imaging array.
3.2.2 Polarization response
For this experiment, the incident light’s center wavelength is set to 550 nm. Its angle of polarization is swept from 0° to 180° in steps of 10° and the response from the sensor is recorded. The integration time of the sensor is set to 440 ms. Figure 10 shows the responses from the 0°, 45°, 90° and 135° polarization pixel arrays for each spectral channel. We observe the polarization selectivity of each array in terms of Malus’ law for an ideal linear polarizer (Eq. (10)).
In Eq. (10), I = I₀cos²(θ − ϕ), where I₀ is the initial light intensity, I is the light intensity transmitted through a polarization filter with transmission axis θ, and ϕ is the incident angle of polarization. Thus, the maximum response through the linear polarizer is at ϕ = θ.
Figure 10(a) presents the mean pixel responses for each polarization array at 550nm, with error bars indicating the spatial standard deviation. Each spectral channel displays polarization selectivity as dictated by Malus’ law; however, the responses do not peak at the expected points. We believe that this is due to the effects of both focal plane array based electrical crosstalk, and optical crosstalk due to the polarization filters [16, 35]. The amplitude responses for the different polarization pixels are also mismatched. These mismatches stem from physical variations in the polarizers’ nanowires. Calibration techniques [36, 37] may compensate for the effects of crosstalk and variations in the nanowires.
Figure 10(b) and 10(c) show the polarization responses of each channel at 650 and 480 nm respectively. In these cases, one can see the effect of reduced quantum efficiency. At 650 nm the blue channel demonstrates some polarization selectivity despite the reduced signal due to low quantum efficiency. Similarly, at 480 nm, the red channel response has smaller amplitude and is more affected by noise; the large error bars indicate the high variation across the array. However, the mean response of the red channel still behaves as predicted by Malus’ law.
3.2.3 Extinction ratio vs. wavelength
An important metric relating to the preceding discussion is the extinction ratio, defined as the ratio of a polarizer's maximum response to its minimum response. The same metric applies to the characterization of polarization sensors [16, 26, 35] and is explored in this and the following subsection. The polarization responses for each pixel are recorded for each incident angle of polarization, which is swept from 0° to 180° in steps of 10°. The extinction ratio is calculated by using a cosine regression model for each polarization pixel's response: the maximum value of the cosine fit is divided by its minimum value to obtain the extinction ratio.
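A minimal version of this cosine-regression estimate can be written as a linear least-squares fit of a + b·cos 2ϕ + c·sin 2ϕ; this particular formulation is ours, as the paper does not specify the exact regression model used:

```python
import numpy as np

def extinction_ratio(angles_deg, response):
    """Fit response = a + b*cos(2*phi) + c*sin(2*phi) by least squares
    (a cosine in 2*phi, as Malus' law implies) and return the ratio of
    the fit's maximum to its minimum."""
    phi = np.radians(np.asarray(angles_deg, dtype=float))
    A = np.column_stack([np.ones_like(phi), np.cos(2 * phi), np.sin(2 * phi)])
    a, b, c = np.linalg.lstsq(A, np.asarray(response, dtype=float), rcond=None)[0]
    amp = np.hypot(b, c)
    return (a + amp) / (a - amp)

# Synthetic pixel following Malus' law with an offset (a crosstalk floor):
angles = np.arange(0, 181, 10)
resp = 2.0 + 1.0 * np.cos(2 * np.radians(angles))   # max 3, min 1
print(extinction_ratio(angles, resp))  # ~3.0
```

The constant offset a models the crosstalk floor discussed below: a larger offset raises the minimum response and lowers the extinction ratio.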
Figure 11 shows the average value of the extinction ratio for each type of polarization pixel (in a 50 by 50 pixel region) across the visible spectrum. The integration time is set to 440 ms and the sensor response recorded from 400 nm to 700 nm in steps of 10 nm. It is seen that the extinction ratio of all three channels is relatively flat across the spectrum for the different polarizer pixels. The points where the extinction ratio is ~1 indicate no polarization sensitivity. This result is compatible with the quantum efficiency curves shown in Fig. 6. As the quantum efficiency is negligible at these points (e.g. the red channel from 400 nm to 450 nm), there is not enough input signal to make a reasonable conclusion regarding the incoming light’s polarization properties.
We found that the red channel typically performs worse than the green or blue channels in terms of extinction ratio. This behavior is due to optical crosstalk, which increases with the distance between the nanowire filters and the photodiode junction and therefore affects the red channel, the deepest junction, more than the green and blue channels. In addition, the red channel is more prone to electrical crosstalk due to the diffusion of charges deep in the silicon substrate. Crosstalk adds an undesirable offset to the minimum response, thereby reducing the overall extinction ratio.
3.2.4 Extinction ratio vs. number of integrated photons
The extinction ratio is measured as a function of the number of integrated photons for incident light at 550 nm. The linear polarization filter is rotated 180 degrees in front of the sensor in steps of 10° and the integration time of the sensor is swept between 350 ms and 1500 ms. The extinction ratio is calculated for each integration time. The results are presented in Fig. 12 for the four intensity arrays. The extinction ratio is independent of the integration time for all three channels in the pixel. Therefore, irrespective of the number of integrated photons, except in very low light conditions where the SNR is poor, the sensor will be able to discern polarization information.
3.2.5 Error in degree of linear polarization measurement
In order to gauge how accurately the integrated spectral-polarization sensor captures the degree of polarization in the scene, the experimental setup shown in Fig. 13 is used. It is a variation on the setup described in Section 3.2.1, with the addition of a quarter-wave retarder on a rotational stage. Uniform 515 nm light from an OVG01TLGAGS LED source is passed through a fixed Newport 20-LP-VISB linear polarizer. This linearly polarized light is input to a Newport 20RP34-514.5 quarter-wave retarder. The relative angle between the linear polarizer and the retarder is swept by rotating the retarder; therefore the light incident on the sensor varies from linear to circular polarization, i.e., the DoLP is swept between 0 and 1. The DoLP is first measured using a high-precision CCD division-of-focal-plane polarization sensor [15, 26, 35], thereby providing a reference DoLP measurement. The DoLP is then measured using the integrated spectral-polarization imaging sensor.
The plot of the DoLP, measured for a single super pixel of the imaging array, against the reference DoLP is shown in Fig. 14(a). Also shown in Fig. 14(a) is the reference response. The DoLP measured from the integrated spectral-polarization sensor ranges from 0.05 to 0.73. Figure 14(b) shows the absolute error in the measured value. The error in DoLP measurement is a consequence of the mismatched polarization pixel responses, seen in Section 3.2.2. Although the DoLP computation for this experiment accounts for the relative phase differences in the polarizer pixel responses within the super pixel, it does not account for the variations in transmission and extinction ratios among them. Furthermore, the maximum measured DoLP is limited by the extinction ratio. Additional calibration would be necessary to ameliorate the error in measured DoLP.
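As a first-order sanity check (ours, not a claim from the paper), a polarizer with finite extinction ratio e has diattenuation (e − 1)/(e + 1), which bounds the DoLP it can report for fully polarized light before calibration:

```python
def max_measurable_dolp(extinction_ratio):
    """Diattenuation of a non-ideal linear polarizer: the measured DoLP of
    fully polarized light saturates near (e - 1) / (e + 1)."""
    return (extinction_ratio - 1.0) / (extinction_ratio + 1.0)

print(max_measurable_dolp(3.5))  # ~0.56
```

With the sensor's typical extinction ratio of ~3.5 this bound is ~0.56; the measured maximum of 0.73 indicates the bound is only approximate here, as per-pixel extinction ratios and the phase compensation applied in this experiment both matter.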
4. Real-life images
The images shown in Fig. 15 through Fig. 17 are taken under the uniform illumination of a 5500 K color temperature light source. It is important to note that both the spectral and polarimetric data in Fig. 15 through Fig. 17 are captured in a snapshot, i.e., the scene is spatially and temporally registered. An 8.5 mm lens is used for this experiment. The imaged scene contains a Macbeth color checker chart on the right, a polarization filter wheel with six linear polarizers (each at a different orientation: 0°, 30°, 60°, 90°, 120°, 150°) on the left, and a cone-shaped silicon ingot in the middle.
Figure 15 shows the color image captured by the sensor, after processing to convert the raw image to sRGB format. Figures 16 and 17 show the polarization information captured by the green channel. The DoLP image in Fig. 16 uses the jet color map with a 0 to 1 range. The DoLP image demonstrates the high level of linear polarization of the polarization filters as well as the ingot. The black portions of the color checker chart have an artificially inflated DoLP due to their low intensity, which can be removed via proper thresholding techniques.
Figure 17 shows the angle of polarization, with a threshold such that if the degree of linear polarization is less than 0.5, the corresponding AoP image pixel is set to zero, i.e., black. The AoP image uses the HSV color scale, which goes from 0° to 180°. The image demonstrates that the sensor can correctly identify the various orientations of the polarization filter wheel. It also gives an insight into the conical shape of the silicon ingot, which is not obvious from the color image in Fig. 15. The information from the AoP image can be used to reconstruct the shape of the object, and this information can be complemented with the color information recorded by the same sensor.
5. Conclusion

This paper describes a novel spectral-polarization imaging sensor designed by the monolithic integration of aluminum nanowires with an array of vertically stacked photodetectors. The aluminum nanowires are fabricated via interference lithography and standard microfabrication techniques. The sensor is compact, has no moving parts, and provides spatially and temporally registered spectral and polarization information in real time.
The sensor combines a spectrally sensitive imaging array, in which each pixel is composed of vertically stacked photodetectors, with pixel-pitch-matched nanowire linear polarizers. Each stacked photodetector responds dominantly to the red, green, or blue portion of the visible spectrum. The spectral response of each channel of the sensor has been characterized in terms of its quantum efficiency. Each channel has also been described in terms of linearity and SNR, important metrics for polarization imaging sensors. Extinction ratio measurements confirmed and quantified the sensor's polarization sensitivity. Table 1 provides a summary of the sensor.
The authors would like to thank Timothy York for his valuable input on experiments, Samuel Powell for helpful discussions on alignment and sharing his calibration technique, and Raphael Njuguna for valued discussions regarding the hardware design of the sensor. This work was supported by National Science Foundation grant number 1130897 and Air Force Office of Scientific Research grant number FA9550-10-1-0121.
References and links
1. H. Chen and L. B. Wolff, “Polarization phase-based method for material classification in computer vision,” Int. J. Comput. Vis. 28(1), 73–83 (1998). [CrossRef]
2. S. S. Lin, K. M. Yemelyanov, E. N. Pugh Jr, and N. Engheta, “Polarization-based and specular-reflection-based noncontact latent fingerprint imaging and lifting,” J. Opt. Soc. Am. A 23(9), 2137–2153 (2006). [CrossRef] [PubMed]
3. S. Shwartz, E. Namer, and Y. Y. Schechner, “Blind haze separation,” Proc. IEEE Comp. Vision and Pat. Recog. 2, 1984–1991 (2006).
4. Y. Y. Schechner and N. Karpel, “Recovery of underwater visibility and structure by polarization analysis,” IEEE J. Oceanic Eng. 30(3), 570–587 (2005). [CrossRef]
6. R. Antonucci and J. Miller, “Spectropolarimetry and the nature of NGC 1068,” Astrophys. J. 297, 621–632 (1985). [CrossRef]
7. A. N. Yaroslavsky, V. Neel, and R. R. Anderson, “Demarcation of nonmelanoma skin cancer margins in thick excisions using multispectral polarized light imaging,” J. Invest. Dermatol. 121(2), 259–266 (2003). [CrossRef] [PubMed]
8. S. K. Nayar, X. S. Fang, and T. Boult, “Separation of reflection components using color and polarization,” Int. J. Comput. Vis. 21(3), 163–186 (1997). [CrossRef]
9. D. Lemke, F. Garzon, H. P. Gemuend, U. Groezinger, I. Heinrichsen, U. Klaas, W. Kraetschmer, E. Kreysa, P. Luetzow-Wentzky, and J. Schubert, “Far-infrared imaging, polarimetry, and spectrophotometry on the Infrared Space Observatory,” Opt. Eng. 33(1), 20–25 (1994). [CrossRef]
10. R. S. Loe and M. J. Duggin, “Hyperspectral imaging polarimeter design and calibration,” Proc. SPIE 4481, 195–205 (2002). [CrossRef]
12. D. Sabatke, A. Locke, E. L. Dereniak, M. Descour, J. Garcia, T. Hamilton, and R. W. McMillan, “Snapshot imaging spectropolarimeter,” Opt. Eng. 41(5), 1048–1054 (2002). [CrossRef]
14. M. Kulkarni and V. Gruev, “A division-of-focal-plane spectral-polarization imaging sensor,” Proc. SPIE 8364, 83640K, 83640K-11 (2012). [CrossRef]
16. D. A. Miller, D. W. Wilson, and E. L. Dereniak, “Novel design and alignment of wire-grid diffraction gratings on a visible focal plane array,” Opt. Eng. 51(1), 014001 (2012). [CrossRef]
17. M. A. Green and M. J. Keevers, “Optical properties of intrinsic silicon at 300 K,” Prog. Photovolt. Res. Appl. 3(3), 189–192 (1995). [CrossRef]
18. R. B. Merrill, “Color separation in an active pixel cell imaging array using a triple-well structure,” (US Patent 1999).
19. B. G. Streetman and S. Banerjee, Solid State Electronic Devices (Prentice Hall, 1995).
21. J. J. Wang, F. Walters, X. Liu, P. Sciortino, and X. Deng, “High-performance, large area, deep ultraviolet to infrared polarizers based on 40 nm line/78 nm space nanowire grids,” Appl. Phys. Lett. 90(6), 061104 (2007). [CrossRef]
23. X. Xu, M. Kulkarni, A. Nehorai, and V. Gruev, “A correlation-based interpolation algorithm for division-of-focal-plane polarization sensors,” Proc. SPIE 8364, 83640L, 83640L-8 (2012). [CrossRef]
25. D. H. Goldstein, Polarized Light (CRC Press, 2010).
27. J. Nakamura, Image Sensors and Signal Processing for Digital Still Cameras (Taylor & Francis, 2006).
28. D. L. Gilblom, S. K. Yoo, and P. Ventura, “Operation and performance of a color image sensor with layered photodiodes,” Proc. SPIE 5074, 318–331 (2003). [CrossRef]
29. D. L. Gilblom, S. K. Yoo, and P. Ventura, “Real-time color imaging with a CMOS sensor having stacked photodiodes,” Proc. SPIE 5210, 105–115 (2004). [CrossRef]
30. Kodak, “KAI-1020 Datasheet,” http://www.truesenseimaging.com/products/interline-transfer-ccd/27-KAI-1020.
31. A. Rush and P. Hubel, “X3 sensor characteristics,” J. Soc. Photogr. Sci. Technol. Jpn. 66, 57–60 (2003).
32. J. R. Janesick, Photon Transfer (SPIE Press, 2007).
33. V. Gruev, Z. Yang, J. Van der Spiegel, and R. Etienne-Cummings, “Current mode image sensor with two transistors per pixel,” IEEE Trans. Circuits Syst. I 57(6), 1154–1165 (2010). [CrossRef]
34. X. Liu, “CMOS image sensors dynamic range and SNR enhancement via statistical signal processing,” (Stanford University, 2002).
35. T. York and V. Gruev, “Optical characterization of a polarization imager,” IEEE International Symposium on Circuits and Systems, 1576–1579 (2011).
36. D. L. Bowers, J. K. Boger, L. D. Wellems, S. E. Ortega, M. P. Fetrow, J. E. Hubbs, W. T. Black, B. M. Ratliff, and J. S. Tyo, “Unpolarized calibration and nonuniformity correction for long-wave infrared microgrid imaging polarimeters,” Opt. Eng. 47(4), 046403 (2008). [CrossRef]
37. T. York and V. Gruev, “Calibration method for division of focal plane polarimeters in the optical and near-infrared regime,” Proc. SPIE 8012, 80120H, 80120H-7 (2011). [CrossRef]
38. D. Miyazaki, R. T. Tan, K. Hara, and K. Ikeuchi, “Polarization-based inverse rendering from a single view,” Proc. IEEE Comp. Vision and Pat. Recog. 9, 982–987 (2003).