Abstract

Current division-of-focal-plane polarization imaging sensors can capture intensity and polarization information in real time with high spatial resolution, but are oblivious to spectral information. We present the design of such a sensor that is additionally spectrally selective in the visible regime, and describe its extensive spectral and polarimetric characterization. The sensor has a pixel pitch of 5 µm and an imaging array of 168 by 256 elements. Each element comprises spectrally sensitive, vertically stacked photodetectors integrated with a 140 nm pitch nanowire linear polarizer. The sensor has a maximum measured SNR of 45 dB, an extinction ratio of ~3.5, a QE of 12%, and a linearity error of 1% in the green channel. We present sample spectral-polarization images.

© 2012 OSA

1. Introduction

To the human eye, spectral information in a scene is distinguishable as color, and light intensity is perceived as brightness. However, the human eye is blind to polarization, the third aspect of light that describes the orientation of its plane of oscillation and thus offers useful additional information. Indeed, having knowledge of polarization has been beneficial in diverse applications, such as the identification of materials [1], non-contact fingerprint detection [2], image dehazing [3], and recovery of underwater visibility [4]. Integrated spectral and polarization imaging promises an enhanced, wide-ranging insight, particularly for applications in remote sensing [5], astronomy [6], non-invasive medical procedures [7], and computer vision [8].

To this end, various spectral-polarimetric sensors have been reported in the literature. For example, division-of-time spectropolarimeters unite a traditional polarimeter with a rotating spectral filter wheel [9]. However, division-of-time systems are only suitable for imaging relatively static scenes, as image acquisition time is long and any motion in the scene induces unwanted artifacts. Such systems may also have moving parts, which makes them poorly suited to imaging in rugged terrain. Tunable acousto-optic [5] and liquid crystal [10] spectral filters have also been used in the division-of-time spectropolarimeter paradigm, but spectral and polarization information is still not acquired concurrently [11]. The Computed Tomography Imaging Channeled Spectropolarimeter (CTICS) allows the acquisition of a spectral-polarization “hypercube,” in which the polarization parameters are captured as a function of the spectrum in a snapshot [12]. However, this system has drawbacks, such as tradeoffs between spatial and spectral resolution [13], a bulky optical setup, and heavy computational requirements.

In this paper, we describe a novel spectral-polarization imaging sensor [14] which uses the division-of-focal-plane paradigm [15, 16]. We have designed the sensor by the monolithic integration of pixelated nanowire linear polarization filters with a spectrally selective imaging array. The sensor can therefore simultaneously acquire spectral and polarization information in a scene with high spatial and temporal resolution. Furthermore, both spectral and polarization information is co-registered in hardware by virtue of the imaging sensor architecture. The sensor, pictured in Fig. 1, also has the benefits of being compact, lightweight and robust.

 

Fig. 1 Spectral-polarization imaging sensor. A ruler and a US quarter are placed next to the camera in order to provide a sense of scale.


The spectral and polarimetric capabilities of the integrated sensor have been thoroughly assessed using a carefully designed optoelectronic setup and are presented in this paper. The following metrics are used to evaluate the sensor: quantum efficiency of the vertically stacked photodiodes, linearity of the pixel output voltage, and signal-to-noise ratio (SNR). We gauge the polarization selectivity of the sensor using a metric known as the extinction ratio.

The remaining sections of the paper are organized as follows: Section 2 provides an overview of the design and the underlying theory of the integrated sensor; Section 3 describes the optoelectronic characterization of the sensor; and Section 4 presents real-life images acquired using the sensor. Concluding remarks are presented in Section 5.

2. Theoretical overview of integrated sensor

2.1 Principle of operation of spectral imaging array

The absorption of incident light in a material depends on the material properties, the depth to which the light travels, and the wavelength of the incident light. This behavior is governed by the physical relationship given in Eq. (1).

I(λ, x) = I_incident × e^(−αx)

Here, I_incident is the incident light intensity at the surface of the silicon photodetector, and I(λ, x) is the light intensity remaining at depth x; the fraction of light absorbed within depth x is therefore 1 − e^(−αx). I(λ, x) has an exponential dependence on both the depth x and the absorption coefficient α, which in turn is wavelength (λ)-dependent.

Figure 2 shows the simulated wavelength dependence of the depth to which light is absorbed in silicon [17]. There are three curves in Fig. 2, corresponding to 50%, 70%, and 99% absorption of the incident light across different wavelengths. For instance, 50% of incident light at 550 nm is absorbed within 1 µm of penetration into silicon, 70% within 1.9 µm and 99% within 7.2 µm. Furthermore, Fig. 2 indicates that longer wavelength light is absorbed deeper in silicon, and shorter wavelength light is absorbed at shallower depths. For example, 99% of incident light at 650 nm is absorbed within 16 µm of the silicon, whereas 99% of incident light at 450 nm is absorbed within 1.8 µm of the silicon.
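Equation (1) can be inverted to obtain absorption depths like those plotted in Fig. 2: the depth x at which a fraction f of the incident light has been absorbed satisfies 1 − e^(−αx) = f, so x = −ln(1 − f)/α. The short Python sketch below evaluates this relation; the absorption coefficient used is an illustrative placeholder rather than the tabulated silicon data of [17].

```python
import numpy as np

def absorption_depth(alpha_per_um, fraction):
    """Depth (um) at which a given fraction of the incident light has been
    absorbed, assuming I(x) = I_incident * exp(-alpha * x)."""
    return -np.log(1.0 - fraction) / alpha_per_um

# Illustrative absorption coefficient (1/um); real values are strongly
# wavelength dependent and should come from tabulated silicon data [17].
alpha = 0.69  # chosen so that ~50% of the light is absorbed within ~1 um

for f in (0.50, 0.70, 0.99):
    print(f"{100 * f:.0f}% absorbed within {absorption_depth(alpha, f):.1f} um")
```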

 

Fig. 2 The wavelength dependence of the absorption depth of light in silicon, for varying levels of absorption.


The above discussion lends itself to the design of the spectral imaging sensor that forms the substrate of the integrated division-of-focal-plane spectral-polarization sensor. Each pixel in the spectral imaging sensor array comprises vertically stacked photodetectors. The fundamental physical principle demonstrated in Fig. 2 helps determine the optimal position and spectral responsivity of these photodetectors.

Figure 3 shows a single pixel of the sensor, which contains stacked photodetectors and the associated readout electronics, such as a source follower transistor, a reset transistor and an access transistor [18]. The three vertically stacked photodetectors are fabricated by selectively changing the doping of silicon. The silicon wafer substrate, initially positively doped, is used to define all the transistors in the sensor as well as the vertically stacked photodiodes. The first step in the fabrication process is to define a deep n-well region in the p-substrate that serves to capture longer wavelengths of the incident light. In order to do so, the silicon wafer is doped with a high concentration of arsenic atoms; by controlling the doping time and concentration, a 2 µm deep n-well is formed in the p-type silicon substrate. Next, a small region within the n-well region is doped with a high concentration of boron atoms, effectively reversing its polarity in this region. Hence, a p-well region is formed within the n-well region and has a depth of ~0.6 µm. Finally, an n-doped region is formed within the p-well region by doping the silicon with a high concentration of arsenic atoms to a depth of 0.2 µm [18].

 

Fig. 3 A pixel of the spectral image sensor containing vertically stacked photodiodes and associated circuitry.


A thermal annealing process follows the alternating doping of the silicon. During this procedure, the dopant atoms diffuse and each junction expands by ~10 nm. Since a monolayer doping technique is used to form the alternating junctions, a sharp spatial decay of less than 20 nm between junctions is achieved [19]. These small changes in junction depth can be used to refine the computed spectral performance of the junctions as well as to model the electrical crosstalk in the photodiodes.

The vertically alternating doped regions of the silicon form three vertically stacked photodiodes. The top photodiode, formed by the n-type region and the p-well, is most sensitive to short wavelengths such as blue light. The middle photodiode, formed by the p-well and the n-well, is most sensitive to green light, and the bottom photodiode, formed by the n-well and the p-substrate, is most sensitive to red light. Given their dominant sensitivities, these three photodiodes are referred to as the blue, green and red channels of the sensor, respectively.

2.2 Nanowire polarization filter array

Pixelated polarization filters are fabricated using a combination of interference lithography and typical microfabrication steps such as deposition, spin coating, UV lithography and reactive ion etching [20, 21]. The pixelated filters are composed of periodic aluminum nanowires with the following dimensions: 70 nm wide, 70 nm high and 140 nm pitch. The orientation of the metallic nanowires determines the transmission axis of the filter.

The pixelated polarization filters are arranged in a super-pixel configuration. A super pixel is a 2 by 2 array of polarization pixels, where each pixel has a transmission axis at 0°, 45°, 90° or 135°. The super pixel pattern is repeated across the entire imaging array, and thus four subsampled intensity arrays are generated, denoted by I0, I45, I90 and I135. A process of interpolation can be applied in order to reconstruct the full-resolution intensity arrays [22–24].
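As a concrete illustration, a minimal Python sketch of the subsampling and interpolation step is shown below. The 2 by 2 filter layout assumed here (0° top-left, 45° top-right, 90° bottom-left, 135° bottom-right) is hypothetical; the actual arrangement is fixed by the fabricated filter pattern, and more sophisticated interpolation schemes are discussed in [22–24].

```python
import numpy as np
from scipy.ndimage import zoom

def demosaic_polarization(raw):
    """Split a division-of-focal-plane mosaic into its four subsampled
    intensity arrays and interpolate each back to full resolution
    (bilinear, order=1). The assumed 2x2 layout is illustrative only."""
    sub = {
        "I0":   raw[0::2, 0::2],
        "I45":  raw[0::2, 1::2],
        "I90":  raw[1::2, 0::2],
        "I135": raw[1::2, 1::2],
    }
    return {name: zoom(img, 2, order=1) for name, img in sub.items()}

raw = np.random.randint(0, 1024, size=(168, 256)).astype(float)  # 10-bit mosaic
full = demosaic_polarization(raw)
print(full["I0"].shape)  # (168, 256)
```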

Given the intensity through each pixelated polarization filter, the first three Stokes’ parameters (S0, S1 and S2) are obtained [25]. S0, shown in Eq. (2), gives the total light intensity through the super pixel array. S1, shown in Eq. (3), indicates the amount of horizontal or vertical linear polarization, and S2, shown in Eq. (4), gives the amount of linear polarization in the 45° or 135° direction.

S0 = 0.5 × (I0 + I45 + I90 + I135)
S1 = I0 − I90
S2 = I45 − I135

Using the Stokes’ parameters, two quantities that describe the linear polarization of incident light are computed: the angle of polarization (AoP) and the degree of linear polarization (DoLP). The DoLP ranges from 0 to 1 and describes the amount of linear polarization of the incident light. A DoLP of 1 indicates completely linearly polarized light, and a DoLP of 0 indicates no linear polarization. DoLP is computed via Eq. (5).

DoLP = √(S1² + S2²) / S0

The AoP gives the orientation of the plane of oscillation of the incident light. AoP is computed via Eq. (6).

AoP = (1/2) × tan⁻¹(S2 / S1)
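Equations (2) through (6) translate directly into per-pixel array operations on the four interpolated intensity images. A minimal sketch, applied independently to each spectral channel, might look as follows; arctan2 is used in place of the plain arctangent so that the quadrant of (S1, S2) is resolved.

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """First three Stokes parameters, DoLP and AoP (degrees) from the four
    polarization-filtered intensity images, following Eqs. (2)-(6)."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1**2 + s2**2) / s0
    aop = 0.5 * np.degrees(np.arctan2(s2, s1))  # orientation of polarization plane
    return s0, s1, s2, dolp, aop
```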

Figure 4 shows a block diagram of the integrated sensor.

 

Fig. 4 Block diagram of spectral-polarization sensor array. Each pixel of the sensor is integrated with nanowire polarization filters with transmission axis at 0°, 45°, 90° or 135°, enabling the capture of spectral and polarization information simultaneously.


The integrated sensor combines the polarization selectivity of the nanowire polarization filters with the spectral selectivity of the vertically stacked photodiode array. The polarization array filters the incoming light according to its polarization, and each photodiode registers the spectral content of the incoming filtered light in the form of a 10-bit digital intensity value (DV) per channel. Using the intensity values from a neighborhood of 2 by 2 pixels, the AoP and DoLP parameters are computed for each channel. Simultaneously, the sensor records a spectral image of the imaged scene. Because of the architecture of the sensor, a single image concurrently records spectral and polarization information. That is, each pixel in the spectral image has the same instantaneous field-of-view as the corresponding pixel in the polarization image. Therefore, there is no need for offline image registration, which is common in division-of-time and division-of-aperture systems [11]. The array has 168 by 256 spectral and polarization sensitive elements.

3. Optoelectronic characterization of integrated spectral-polarization sensor

3.1.1 Experimental setup for optoelectronic characterization of sensor

The experimental setup shown in Fig. 5 is used to characterize the sensor in terms of the quantum efficiency of the three spectral channels, the linearity of the three output voltages from the pixel, and the SNR across the visible spectrum. A uniform, unpolarized, narrowband source of light of known irradiance is generated for these tests [26]. The sensor is tested without any lens mounted in front of the camera body. A Princeton Instruments SP2150 monochromator in conjunction with a Princeton Instruments TS-428 light source forms a spectrally tunable light source. The output slit of the monochromator is 2 mm wide; therefore, the spectral bandpass (FWHM around the desired wavelength) of the light at the output slit is ~8 nm. The monochromator uses a 1200 grooves/mm ruled grating, blazed at 500 nm. A Thorlabs IS200-4 integrating sphere placed at the output slit of the monochromator ensures that the output narrowband light is unpolarized and uniform. Additionally, an FEL0400 Thorlabs long-pass (order-sorting) filter with a cut-on wavelength of 400 nm eliminates unwanted higher-order outputs from the monochromator. Both the power supply to the light source and the central wavelength of the monochromator are computer-controlled.

 

Fig. 5 Experimental setup for spectral characterization of integrated spectral-polarization sensor.


By adjusting the light source output power through a feedback mechanism, we ensure that the light striking the sensor has a constant irradiance of 0.5 ± 0.02 μW/m², regardless of the wavelength to which the monochromator has been tuned. The results reported in Subsections 3.1.2 through 3.1.4 are from a 50 by 50 pixel region of the imaging array. This setup is modified for the polarimetric characterization of the sensor, and the relevant setups and experiments are described in Subsections 3.2.1 through 3.2.5.

3.1.2 Quantum efficiency

In this experiment, we confirm and characterize the spectral sensitivity of each channel of the sensor in terms of quantum efficiency (QE). QE is the ratio of the number of electron-hole pairs registered by the photodetector (Ne) to the number of photons at a particular wavelength striking the surface of the photodetector (Nph), as shown in Eq. (7) [27].

QE(λ) = Ne / Nph

The number of photons Nph striking the surface of a photosensitive element of the sensor is given in Eq. (8). Nph is calculated using the incident irradiance I, the incident wavelength λ, the integration time of the photodetector tint, and the area of the photodiode Apd. Apd is calculated from the 5 µm pixel pitch and a 75% fill factor (accounting for the microlenses). The quantity hc is the product of the Planck constant h and the speed of light c.

Nph = (I × λ × tint × Apd) / (hc)

Given the relationship in Eq. (8) and that the electron sensitivity of the sensor is 0.06 DV/electron, Eq. (7) is modified to explicitly show the wavelength dependence, as given in Eq. (9).

QE(λ) = (hc × (DV / 0.06)) / (I × λ × tint × Apd)

For this experiment, the wavelength of the incident light is swept from 400 nm to 700 nm in steps of 10 nm and the digital value (DV) from each channel is recorded. The integration time of the sensor is set to 440 ms. The quantum efficiency is then calculated for each channel for each incident light wavelength using Eq. (9).
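A minimal sketch of the computation in Eq. (9) is given below, using the pixel geometry and the 0.06 DV/electron sensitivity quoted above; all inputs are in SI units, and the measured mean digital value and illumination conditions are supplied by the experimenter.

```python
# Sketch of Eq. (9); the constants below are standard physical constants.
H = 6.626e-34  # Planck constant (J*s)
C = 2.998e8    # speed of light (m/s)

# Photodiode area from the 5 um pixel pitch and 75% fill factor.
A_PD = (5e-6 ** 2) * 0.75  # m^2

def quantum_efficiency(dv, wavelength, irradiance, t_int,
                       a_pd=A_PD, dv_per_electron=0.06):
    """QE from Eq. (9): electrons generated per incident photon.
    wavelength in m, irradiance in W/m^2, t_int in s, a_pd in m^2."""
    n_electrons = dv / dv_per_electron
    n_photons = irradiance * wavelength * t_int * a_pd / (H * C)
    return n_electrons / n_photons
```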

Figure 6 presents the quantum efficiency thus calculated for the three channels. The red, blue and green channel responses demonstrate spectral selectivity in agreement with the theoretical analysis in Section 2.1. Each response is maximum in the part of the spectrum where the most incident photons are expected to be absorbed. For example, the blue channel has its highest response in the 420 nm to 460 nm range; the green channel has a peak response in the 520 nm to 560 nm range; and the red channel has a peak response in the 650 nm to 700 nm range. It is noted that the typical QE response of traditional spectral sensors, which use color filter arrays (CFA), has much steeper cutoffs for each channel and very little overlap [28, 29]. For comparison, quantum efficiency curves for a typical, commercially available Kodak CFA sensor can be seen in [30]. In this CFA sensor, the blue channel response ranges from 370 nm to 550 nm, with a peak quantum efficiency of 41% at 460 nm. The green channel has a response from 460 nm to 620 nm, with a peak quantum efficiency of 36% at 520 nm. The red channel has a spectral response from 580 nm to 750 nm, which is maximum at around 620 nm with a quantum efficiency of 31%. Both CFA and stacked-photodetector spectral sensors require further image processing in order to map the spectral response onto, for example, the sRGB or CIEXYZ color space for accurate color reproduction [31].

 

Fig. 6 Measured quantum efficiency of each spectral channel of the sensor.


3.1.3 Linearity

For polarization imaging sensors, it is crucial that the transformation of the input light signal to an output voltage signal be linear. Linearity ensures that the image sensor output is an accurate representation of the incoming polarized light wave. Therefore, there is no distortion in the subsequent AoP and DoLP computation.

The experiment described in this section quantifies the linearity error of the sensor outputs for incident light at 550 nm. The integration time of the sensor is swept between 0 and 800 ms; therefore, the number of photons integrated by the sensor is swept, as described by Eq. (8). Figures 7(a) through 7(c) show the output response with respect to the changing integration time for each channel, along with the linear fit and the linear fit equation corresponding to the response. Figures 7(d) through 7(f) show the residual errors from the linear fit to each curve. The output response of each spectral channel is linear, with small residual errors. The norm of residuals is 20.49, 11.91, and 14.56 on a 10-bit scale for the red, green and blue channels respectively. Thus, the error in linearity is 1–2%, a typical figure for present-day imaging sensors [32, 33]. Nonlinearity can be decreased via calibration techniques, but that discussion is beyond the scope of this paper.
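The fits and residuals of Fig. 7 amount to an ordinary least-squares line of mean digital value against integration time. A small sketch with synthetic data is shown below; the slope and noise level are made up for illustration and do not correspond to the measured channels.

```python
import numpy as np

# Synthetic channel response: integration times (ms) and mean output (DV).
t_int = np.linspace(0, 800, 21)
dv = 1.1 * t_int + np.random.normal(0.0, 2.0, t_int.size)  # illustrative only

slope, intercept = np.polyfit(t_int, dv, 1)           # first-order (linear) fit
residuals = dv - (slope * t_int + intercept)

print(f"fit: DV = {slope:.3f} * t_int + {intercept:.2f}")
print(f"norm of residuals: {np.linalg.norm(residuals):.2f} DV (10-bit scale)")
```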

 

Fig. 7 Linear fit for (a) red, (b) green and (c) blue channel responses as integration time is increased, for 550 nm incident light. The corresponding residual errors are shown in (d), (e) and (f) respectively.


3.1.4 Signal-to-noise ratio

The previous sections discussed the sensitivity of the imaging sensor: QE describes the spectral response of each channel, and linearity describes the change in the output signal with respect to the input signal. The SNR of the sensor is evaluated next [27]. SNR is the ratio of the desired signal to the unwanted noise at the output. The SNR measurements indicate the lowest light intensity for which the sensor can provide a useful output.

For this experiment, the SNR of each channel is calculated for 550 nm light. The number of electron-hole pairs integrated by the underlying photodetectors is varied by sweeping the integration time of the sensor. For each integration time, 100 samples are recorded in succession from the sensor. Dividing each pixel’s mean digital value (DV) by its standard deviation across the 100 temporal samples gives the SNR.
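A sketch of this per-pixel computation is shown below: the temporal mean of the 100 frames is divided by the temporal standard deviation, and the spatial mean over the region gives the plotted SNR. The frame data here is a random stand-in for captured images.

```python
import numpy as np

def temporal_snr_db(frames):
    """frames: array of shape (n_frames, rows, cols) of digital values.
    Returns the per-pixel SNR in dB (temporal mean over temporal std)."""
    mean = frames.mean(axis=0)
    std = frames.std(axis=0, ddof=1)
    return 20.0 * np.log10(mean / std)

# Stand-in for 100 successive frames of a 50 by 50 pixel region.
frames = np.random.normal(loc=500.0, scale=5.0, size=(100, 50, 50))
snr = temporal_snr_db(frames)
print(f"mean SNR: {snr.mean():.1f} dB, spatial spread: {snr.std():.2f} dB")
```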

The mean SNR across the 50 by 50 pixel array, as a function of incident photons, is shown in Fig. 8(a). The error bars indicate the spatial spread in SNR across the array. The SNR increases with the number of captured incident photons. At lower incident intensity, there is a large proportion of noise in the measured output signal (DV) and therefore a lower SNR. In this region, the noise is primarily due to the thermal noise in the readout electronics, such as the source follower, biasing current sources, access transistor and reset transistor [33]. At higher light intensities, shot noise is the main source of noise and leads to a comparatively less steep increase in SNR with the incident light intensity [34].

 

Fig. 8 Mean SNR across a 50 by 50 array (a) plotted against number of incident photons and (b) normalized with quantum efficiency, for 550 nm incident light.


The maximum measured SNR is around 45 dB. In Fig. 8(a), the green channel reaches a higher SNR more quickly than the blue and red channels because it has higher quantum efficiency at 550 nm and therefore captures more of the incident light. Figure 8(b) shows the mean SNR from Fig. 8(a) normalized by the quantum efficiency at 550 nm; this plot shows that the SNR as a function of generated electron-hole pairs is very similar for all three channels, independent of each channel’s quantum efficiency.

3.2.1 Experimental setup for polarimetric characterization of sensor

Figure 9 presents the experimental setup used for the polarimetric characterization of the sensor. The setup and conditions are very similar to those described in Section 3.1, with the addition of a Newport 20-LP-VISB linear polarizer mounted on a Thorlabs PRM1 motorized rotating stage [26, 35]. Thus, in addition to the conditions in Section 3.1, the light incident on the sensor is linearly polarized, and its angle of polarization is set by the computer-controlled rotation of the polarization filter. The results shown in Subsections 3.2.2 through 3.2.4 are from a central 50 by 50 pixel portion of the imaging array.

 

Fig. 9 Experimental setup for polarimetric characterization of integrated sensor.


3.2.2 Polarization response

For this experiment, the incident light’s center wavelength is set to 550 nm. Its angle of polarization is swept from 0° to 180° in steps of 10°, and the response from the sensor is recorded. The integration time of the sensor is set to 440 ms. Figure 10 shows the responses from the 0°, 45°, 90° and 135° polarization pixel arrays for each spectral channel. The polarization selectivity of each array is evaluated against Malus’ law for an ideal linear polarizer (Eq. (10)).

 

Fig. 10 Polarization responses of the integrated sensor at (a) 550 nm, (b) 650 nm and (c) 480 nm for the three spectral channels over a 50 by 50 pixel region.


Iθ = I × cos²(θ − ϕ)

In Eq. (10), I is the initial light intensity, Iθ is the light intensity transmitted through a polarization filter with transmission axis θ, and ϕ is the incident angle of polarization. Thus, the maximum response through the linear polarizer is at ϕ = θ.

Figure 10(a) presents the mean pixel responses for each polarization array at 550 nm, with error bars indicating the spatial standard deviation. Each spectral channel displays polarization selectivity as dictated by Malus’ law; however, the responses do not peak at the expected points. We believe this is due to the combined effects of electrical crosstalk in the focal plane array and optical crosstalk introduced by the polarization filters [16, 35]. The amplitude responses of the different polarization pixels are also mismatched; these mismatches stem from physical variations in the polarizers’ nanowires. Calibration techniques [36, 37] may compensate for the effects of crosstalk and the variations in the nanowires.

Figures 10(b) and 10(c) show the polarization responses of each channel at 650 nm and 480 nm respectively. In these cases, one can see the effect of reduced quantum efficiency. At 650 nm the blue channel still demonstrates some polarization selectivity despite the reduced signal caused by its low quantum efficiency. Similarly, at 480 nm, the red channel response has a smaller amplitude and is more affected by noise; the large error bars indicate the high variation across the array. However, the mean response of the red channel still behaves as predicted by Malus’ law.

3.2.3 Extinction ratio vs. wavelength

An important metric relating to the preceding discussion is the extinction ratio, defined as the ratio of the maximum response of a polarizer to its minimum response. The same metric applies to the characterization of polarization sensors [16, 26, 35] and is explored in this and the following subsection. The polarization responses for each pixel are recorded for each incident angle of polarization, which is swept from 0° to 180° in steps of 10°. The extinction ratio is calculated by fitting a cosine regression model to each polarization pixel’s response; the maximum value of the cosine fit is divided by its minimum value to obtain the extinction ratio.
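A sketch of this regression step is shown below: each pixel's response versus the incident polarization angle is fit to an offset cosine of the form a + b·cos(2(θ − φ)), and the ratio of the fit's maximum to its minimum gives the extinction ratio. The fitting routine and the sample response values are illustrative choices, not necessarily those used for the reported measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def malus_model(theta_deg, a, b, phi_deg):
    """Offset Malus-type response: a + b * cos(2 * (theta - phi))."""
    return a + b * np.cos(2.0 * np.radians(theta_deg - phi_deg))

def extinction_ratio(theta_deg, response):
    """Fit the cosine model and return (max of fit) / (min of fit)."""
    p0 = [response.mean(), np.ptp(response) / 2.0, 0.0]
    (a, b, _), _ = curve_fit(malus_model, theta_deg, response, p0=p0)
    return (a + abs(b)) / (a - abs(b))

# Illustrative response of a single 0-degree pixel, sampled every 10 degrees.
theta = np.arange(0, 190, 10, dtype=float)
resp = malus_model(theta, a=300.0, b=170.0, phi_deg=2.0)
print(f"extinction ratio: {extinction_ratio(theta, resp):.2f}")
```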

Figure 11 shows the average value of the extinction ratio for each type of polarization pixel (in a 50 by 50 pixel region) across the visible spectrum. The integration time is set to 440 ms, and the sensor response is recorded from 400 nm to 700 nm in steps of 10 nm. The extinction ratio of all three channels is relatively flat across the spectrum for the different polarizer pixels. The points where the extinction ratio is ~1 indicate no polarization sensitivity. This result is consistent with the quantum efficiency curves shown in Fig. 6: where the quantum efficiency is negligible (e.g. the red channel from 400 nm to 450 nm), there is not enough input signal to draw a reasonable conclusion about the incoming light’s polarization properties.

 

Fig. 11 Extinction ratio across wavelength, averaged over a 50 by 50 array for (a) 0°, (b) 45°, (c) 90°, (d) 135° polarization pixels.


We found that the red channel typically performs worse than the green or blue channels in terms of extinction ratio. This behavior is due to optical crosstalk, which increases with distance from the surface of the focal plane array and therefore affects the red channel, whose junction is the deepest, more than the green and blue channels. In addition, the red channel is more prone to electrical crosstalk because of the diffusion of charges deep in the silicon substrate. Crosstalk adds an undesirable offset to the minimum response, thereby reducing the overall extinction ratio [35].

3.2.4 Extinction ratio vs. number of integrated photons

The extinction ratio is measured as a function of the number of integrated photons for incident light at 550 nm. The linear polarization filter in front of the sensor is rotated through 180° in steps of 10°, and the integration time of the sensor is swept between 350 ms and 1500 ms. The extinction ratio is calculated for each integration time. The results are presented in Fig. 12 for the four intensity arrays. The extinction ratio is independent of the integration time for all three channels in the pixel. Therefore, except in very low light conditions where the SNR is poor, the sensor will be able to discern polarization information irrespective of the number of integrated photons.

 

Fig. 12 Extinction ratio across integration time at 550 nm, averaged over a 50 by 50 array for (a) 0°, (b) 45°, (c) 90°, (d) 135° polarization pixels.


3.2.5 Error in degree of linear polarization measurement

In order to gauge how accurately the integrated spectral-polarization sensor captures the degree of polarization in the scene, the experimental setup shown in Fig. 13 is used. It is a variation on the setup described in Section 3.2.1, with the addition of a quarter-wave retarder on a rotational stage. Uniform 515 nm light from an OVG01TLGAGS LED source is passed through a fixed Newport 20-LP-VISB linear polarizer. This linearly polarized light is input to a Newport 20RP34-514.5 quarter-wave retarder. The relative angle between the linear polarizer and the retarder is swept by rotating the retarder; the light incident on the sensor therefore varies from linearly to circularly polarized, i.e. the DoLP is swept between 0 and 1. A reference DoLP measurement is provided by a high-precision CCD division-of-focal-plane polarization sensor [15, 26, 35], and the DoLP is then measured using the integrated spectral-polarization imaging sensor.
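For ideal components, Mueller calculus predicts the reference curve directly: linearly polarized light passed through an ideal quarter-wave retarder whose fast axis is rotated by θ from the polarization axis emerges with a DoLP of |cos 2θ|, sweeping from 1 (retarder aligned) to 0 (retarder at 45°, circular polarization). The sketch below tabulates this ideal model; the actual reference values in Fig. 14 come from the CCD division-of-focal-plane polarimeter, not from this expression.

```python
import numpy as np

def ideal_dolp(retarder_angle_deg):
    """DoLP of linearly polarized light after an ideal quarter-wave retarder
    rotated by the given angle from the incident polarization axis."""
    return np.abs(np.cos(2.0 * np.radians(retarder_angle_deg)))

for angle in (0, 15, 30, 45):
    print(f"retarder at {angle:2d} deg -> ideal DoLP = {ideal_dolp(angle):.2f}")
```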

 

Fig. 13 Experimental setup for testing error in DoLP measurement. A rotating quarter-wave retarder is included in the setup.


The plot of the DoLP, measured for a single super pixel of the imaging array, against the reference DoLP is shown in Fig. 14(a), along with the reference response. The DoLP measured from the integrated spectral-polarization sensor ranges from 0.05 to 0.73. Figure 14(b) shows the absolute error in the measured value. The error in the DoLP measurement is a consequence of the mismatched polarization pixel responses seen in Section 3.2.2. Although the DoLP computation for this experiment accounts for the relative phase differences in the polarizer pixel responses within the super pixel, it does not account for the variations in transmission and extinction ratio among them. Furthermore, the maximum measured DoLP is limited by the extinction ratio [35]. Additional calibration would be necessary to reduce the error in the measured DoLP.

 

Fig. 14 (a) Plot of DoLP as measured by a single super pixel of the green channel of the integrated sensor against a reference measurement, (b) Absolute error in measured DoLP.


4. Real-life images

The images shown in Fig. 15 through Fig. 17 are taken under the uniform illumination of a 5500 K color temperature light source. It is important to note that both the spectral and polarimetric data in Fig. 15 through Fig. 17 are captured in a snapshot, i.e. the scene is spatially and temporally registered. An 8.5 mm lens is used for this experiment. The imaged scene contains a Macbeth color checker chart on the right, a polarization filter wheel with six linear polarizers (each at a different orientation: 0°, 30°, 60°, 90°, 120°, 150°) on the left, and a cone-shaped silicon ingot in the middle.

 

Fig. 15 Spectral image recorded by the integrated sensor. From right to left: a Macbeth color checker chart, silicon ingot, and polarization filter wheel form the imaged scene.


Figure 15 shows the color image captured by the sensor, after processing to convert the raw image to the sRGB format. Figures 16 and 17 show the polarization information captured by the green channel. The DoLP image in Fig. 16 uses the jet color map over a 0 to 1 range and demonstrates the high degree of linear polarization of the polarization filters as well as of the ingot. The black portions of the color checker chart have an artificially inflated DoLP due to their low intensity, an artifact that can be removed via proper thresholding.

 

Fig. 16 Degree of linear polarization image recorded by the integrated sensor.


 

Fig. 17 Angle of polarization image recorded by the integrated sensor, for when there is significant polarization information in the scene (DoLP > 0.5).


Figure 17 shows the angle of polarization, with a threshold applied such that if the degree of linear polarization is less than 0.5, the corresponding AoP image pixel is set to zero, i.e. black. The AoP image uses the HSV color scale, which maps angles from 0° to 180°. The image demonstrates that the sensor can correctly identify the various orientations of the polarization filter wheel. It also gives insight into the conical shape of the silicon ingot, which is not obvious from the color image in Fig. 15. The information from the AoP image can be used to reconstruct the shape of the object [38], and this information can be complemented with the color information recorded by the same sensor.
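A sketch of the rendering used for Fig. 16 and Fig. 17 is shown below: the DoLP image is displayed with the jet color map over a 0-to-1 range, and the AoP image is mapped onto the HSV hue wheel after masking out pixels whose DoLP falls below the 0.5 threshold. The random arrays stand in for a captured frame.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import hsv_to_rgb

def render_aop(aop_deg, dolp, dolp_threshold=0.5):
    """RGB image encoding AoP as hue; weakly polarized pixels are black."""
    hue = (aop_deg % 180.0) / 180.0                 # map 0-180 deg onto hue
    value = (dolp > dolp_threshold).astype(float)   # mask low-DoLP pixels
    hsv = np.stack([hue, np.ones_like(hue), value], axis=-1)
    return hsv_to_rgb(hsv)

# Stand-in data in place of a captured 168 by 256 frame.
dolp = np.random.rand(168, 256)
aop = np.random.rand(168, 256) * 180.0

plt.imshow(dolp, cmap="jet", vmin=0.0, vmax=1.0)
plt.title("DoLP")
plt.show()

plt.imshow(render_aop(aop, dolp))
plt.title("AoP (DoLP > 0.5)")
plt.show()
```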

5. Summary

This paper describes a novel spectral-polarization imaging sensor designed by the monolithic integration of aluminum nanowires with an array of vertically stacked photodetectors. The aluminum nanowires are fabricated via interference lithography and standard microfabrication techniques. The sensor is compact, has no moving parts, and provides spatially and temporally registered spectral and polarization information in real time.

The sensor combines a spectrally sensitive imaging array, in which each pixel is composed of vertically stacked photodetectors, with pixel-pitch-matched nanowire linear polarizers. Each stacked photodetector responds dominantly to the red, green or blue portion of the visible spectrum. The spectral response of each channel of the sensor has been characterized in terms of its quantum efficiency. Each channel has also been characterized in terms of linearity and SNR, which are important metrics for polarization imaging sensors. Extinction ratio measurements confirmed and quantified the sensor's polarization sensitivity. Table 1 provides a summary of the sensor.

Table 1. Summary of Sensor Characteristics

Acknowledgments

The authors would like to thank Timothy York for his valuable input on experiments, Samuel Powell for helpful discussions on alignment and sharing his calibration technique, and Raphael Njuguna for valued discussions regarding the hardware design of the sensor. This work was supported by National Science Foundation grant number 1130897 and Air Force Office of Scientific Research grant number FA9550-10-1-0121.

References and links

1. H. Chen and L. B. Wolff, “Polarization phase-based method for material classification in computer vision,” Int. J. Comput. Vis. 28(1), 73–83 (1998).

2. S. S. Lin, K. M. Yemelyanov, E. N. Pugh Jr, and N. Engheta, “Polarization-based and specular-reflection-based noncontact latent fingerprint imaging and lifting,” J. Opt. Soc. Am. A 23(9), 2137–2153 (2006).

3. S. Shwartz, E. Namer, and Y. Y. Schechner, “Blind haze separation,” Proc. IEEE Comp. Vision and Pat. Recog. 2, 1984–1991 (2006).

4. Y. Y. Schechner and N. Karpel, “Recovery of underwater visibility and structure by polarization analysis,” IEEE J. Oceanic Eng. 30(3), 570–587 (2005).

5. D. A. Glenar, J. J. Hillman, B. Saif, and J. Bergstralh, “Acousto-optic imaging spectropolarimetry for remote sensing,” Appl. Opt. 33(31), 7412–7424 (1994).

6. R. Antonucci and J. Miller, “Spectropolarimetry and the nature of NGC 1068,” Astrophys. J. 297, 621–632 (1985).

7. A. N. Yaroslavsky, V. Neel, and R. R. Anderson, “Demarcation of nonmelanoma skin cancer margins in thick excisions using multispectral polarized light imaging,” J. Invest. Dermatol. 121(2), 259–266 (2003).

8. S. K. Nayar, X. S. Fang, and T. Boult, “Separation of reflection components using color and polarization,” Int. J. Comput. Vis. 21(3), 163–186 (1997).

9. D. Lemke, F. Garzon, H. P. Gemuend, U. Groezinger, I. Heinrichsen, U. Klaas, W. Kraetschmer, E. Kreysa, P. Luetzow-Wentzky, and J. Schubert, “Far-infrared imaging, polarimetry, and spectrophotometry on the Infrared Space Observatory,” Opt. Eng. 33(1), 20–25 (1994).

10. R. S. Loe and M. J. Duggin, “Hyperspectral imaging polarimeter design and calibration,” Proc. SPIE 4481, 195–205 (2002).

11. J. S. Tyo, D. L. Goldstein, D. B. Chenault, and J. A. Shaw, “Review of passive imaging polarimetry for remote sensing applications,” Appl. Opt. 45(22), 5453–5469 (2006).

12. D. Sabatke, A. Locke, E. L. Dereniak, M. Descour, J. Garcia, T. Hamilton, and R. W. McMillan, “Snapshot imaging spectropolarimeter,” Opt. Eng. 41(5), 1048–1054 (2002).

13. N. Hagen and E. L. Dereniak, “Analysis of computed tomographic imaging spectrometers. I. Spatial and spectral resolution,” Appl. Opt. 47(28), F85–F95 (2008).

14. M. Kulkarni and V. Gruev, “A division-of-focal-plane spectral-polarization imaging sensor,” Proc. SPIE 8364, 83640K (2012).

15. V. Gruev, R. Perkins, and T. York, “CCD polarization imaging sensor with aluminum nanowire optical filters,” Opt. Express 18(18), 19087–19094 (2010).

16. D. A. Miller, D. W. Wilson, and E. L. Dereniak, “Novel design and alignment of wire-grid diffraction gratings on a visible focal plane array,” Opt. Eng. 51(1), 014001 (2012).

17. M. A. Green and M. J. Keevers, “Optical properties of intrinsic silicon at 300 K,” Prog. Photovolt. Res. Appl. 3(3), 189–192 (1995).

18. R. B. Merrill, “Color separation in an active pixel cell imaging array using a triple-well structure,” (US Patent 1999).

19. B. G. Streetman and S. Banerjee, Solid State Electronic Devices (Prentice Hall, 1995).

20. V. Gruev, A. Ortu, N. Lazarus, J. Van der Spiegel, and N. Engheta, “Fabrication of a dual-tier thin film micropolarization array,” Opt. Express 15(8), 4994–5007 (2007).

21. J. J. Wang, F. Walters, X. Liu, P. Sciortino, and X. Deng, “High-performance, large area, deep ultraviolet to infrared polarizers based on 40 nm line/78 nm space nanowire grids,” Appl. Phys. Lett. 90(6), 061104 (2007).

22. S. Gao and V. Gruev, “Bilinear and bicubic interpolation methods for division of focal plane polarimeters,” Opt. Express 19(27), 26161–26173 (2011).

23. X. Xu, M. Kulkarni, A. Nehorai, and V. Gruev, “A correlation-based interpolation algorithm for division-of-focal-plane polarization sensors,” Proc. SPIE 8364, 83640L (2012).

24. B. M. Ratliff, C. F. LaCasse, and J. S. Tyo, “Interpolation strategies for reducing IFOV artifacts in microgrid polarimeter imagery,” Opt. Express 17(11), 9112–9125 (2009).

25. D. H. Goldstein, Polarized Light (CRC Press, 2010).

26. T. York and V. Gruev, “Characterization of a visible spectrum division-of-focal-plane polarimeter,” Appl. Opt. 51(22), 5392–5400 (2012).

27. J. Nakamura, Image Sensors and Signal Processing for Digital Still Cameras (Taylor & Francis, 2006).

28. D. L. Gilblom, S. K. Yoo, and P. Ventura, “Operation and performance of a color image sensor with layered photodiodes,” Proc. SPIE 5074, 318–331 (2003).

29. D. L. Gilblom, S. K. Yoo, and P. Ventura, “Real-time color imaging with a CMOS sensor having stacked photodiodes,” Proc. SPIE 5210, 105–115 (2004).

30. Kodak, “KAI-1020 Datasheet,” http://www.truesenseimaging.com/products/interline-transfer-ccd/27-KAI-1020.

31. A. Rush and P. Hubel, “X3 sensor characteristics,” J. Soc. Photogr. Sci. Technol. Jpn. 66, 57–60 (2003).

32. J. R. Janesick, Photon Transfer (SPIE Press, 2007).

33. V. Gruev, Z. Yang, J. Van der Spiegel, and R. Etienne-Cummings, “Current mode image sensor with two transistors per pixel,” IEEE Trans. Circuits Syst. I 57(6), 1154–1165 (2010).

34. X. Liu, “CMOS image sensors dynamic range and SNR enhancement via statistical signal processing,” (Stanford University, 2002).

35. T. York and V. Gruev, “Optical characterization of a polarization imager,” IEEE International Symposium on Circuits and Systems, 1576–1579 (2011).

36. D. L. Bowers, J. K. Boger, L. D. Wellems, S. E. Ortega, M. P. Fetrow, J. E. Hubbs, W. T. Black, B. M. Ratliff, and J. S. Tyo, “Unpolarized calibration and nonuniformity correction for long-wave infrared microgrid imaging polarimeters,” Opt. Eng. 47(4), 046403 (2008).

37. T. York and V. Gruev, “Calibration method for division of focal plane polarimeters in the optical and near-infrared regime,” Proc. SPIE 8012, 80120H (2011).

38. D. Miyazaki, R. T. Tan, K. Hara, and K. Ikeuchi, “Polarization-based inverse rendering from a single view,” Proc. IEEE Comp. Vision and Pat. Recog. 9, 982–987 (2003).

[Crossref]

Locke, A.

D. Sabatke, A. Locke, E. L. Dereniak, M. Descour, J. Garcia, T. Hamilton, and R. W. McMillan, “Snapshot imaging spectropolarimeter,” Opt. Eng. 41(5), 1048–1054 (2002).
[Crossref]

Loe, R. S.

R. S. Loe and M. J. Duggin, “Hyperspectral imaging polarimeter design and calibration,” Proc. SPIE 4481, 195–205 (2002).
[Crossref]

Luetzow-Wentzky, P.

D. Lemke, F. Garzon, H. P. Gemuend, U. Groezinger, I. Heinrichsen, U. Klaas, W. Kraetschmer, E. Kreysa, P. Luetzow-Wentzky, and J. Schubert, “Far-infrared imaging, polarimetry, and spectrophotometry on the Infrared Space Observatory,” Opt. Eng. 33(1), 20–25 (1994).
[Crossref]

McMillan, R. W.

D. Sabatke, A. Locke, E. L. Dereniak, M. Descour, J. Garcia, T. Hamilton, and R. W. McMillan, “Snapshot imaging spectropolarimeter,” Opt. Eng. 41(5), 1048–1054 (2002).
[Crossref]

Miller, D. A.

D. A. Miller, D. W. Wilson, and E. L. Dereniak, “Novel design and alignment of wire-grid diffraction gratings on a visible focal plane array,” Opt. Eng. 51(1), 014001 (2012).
[Crossref]

Miller, J.

R. Antonucci and J. Miller, “Spectropolarimetry and the nature of NGC 1068,” Astrophys. J. 297, 621–632 (1985).
[Crossref]

Nayar, S. K.

S. K. Nayar, X. S. Fang, and T. Boult, “Separation of reflection components using color and polarization,” Int. J. Comput. Vis. 21(3), 163–186 (1997).
[Crossref]

Neel, V.

A. N. Yaroslavsky, V. Neel, and R. R. Anderson, “Demarcation of nonmelanoma skin cancer margins in thick excisions using multispectral polarized light imaging,” J. Invest. Dermatol. 121(2), 259–266 (2003).
[Crossref] [PubMed]

Nehorai, A.

X. Xu, M. Kulkarni, A. Nehorai, and V. Gruev, “A correlation-based interpolation algorithm for division-of-focal-plane polarization sensors,” Proc. SPIE 8364, 83640L, 83640L-8 (2012).
[Crossref]

Ortega, S. E.

D. L. Bowers, J. K. Boger, L. D. Wellems, S. E. Ortega, M. P. Fetrow, J. E. Hubbs, W. T. Black, B. M. Ratliff, and J. S. Tyo, “Unpolarized calibration and nonuniformity correction for long-wave infrared microgrid imaging polarimeters,” Opt. Eng. 47(4), 046403 (2008).
[Crossref]

Ortu, A.

Perkins, R.

Pugh, E. N.

Ratliff, B. M.

B. M. Ratliff, C. F. LaCasse, and J. S. Tyo, “Interpolation strategies for reducing IFOV artifacts in microgrid polarimeter imagery,” Opt. Express 17(11), 9112–9125 (2009).
[Crossref] [PubMed]

D. L. Bowers, J. K. Boger, L. D. Wellems, S. E. Ortega, M. P. Fetrow, J. E. Hubbs, W. T. Black, B. M. Ratliff, and J. S. Tyo, “Unpolarized calibration and nonuniformity correction for long-wave infrared microgrid imaging polarimeters,” Opt. Eng. 47(4), 046403 (2008).
[Crossref]

Rush, A.

A. Rush and P. Hubel, “X3 sensor characteristics,” J. Soc. Photogr. Sci. Technol. Jpn. 66, 57–60 (2003).

Sabatke, D.

D. Sabatke, A. Locke, E. L. Dereniak, M. Descour, J. Garcia, T. Hamilton, and R. W. McMillan, “Snapshot imaging spectropolarimeter,” Opt. Eng. 41(5), 1048–1054 (2002).
[Crossref]

Saif, B.

Schechner, Y. Y.

Y. Y. Schechner and N. Karpel, “Recovery of underwater visibility and structure by polarization analysis,” IEEE J. Oceanic Eng. 30(3), 570–587 (2005).
[Crossref]

Schubert, J.

D. Lemke, F. Garzon, H. P. Gemuend, U. Groezinger, I. Heinrichsen, U. Klaas, W. Kraetschmer, E. Kreysa, P. Luetzow-Wentzky, and J. Schubert, “Far-infrared imaging, polarimetry, and spectrophotometry on the Infrared Space Observatory,” Opt. Eng. 33(1), 20–25 (1994).
[Crossref]

Sciortino, P.

J. J. Wang, F. Walters, X. Liu, P. Sciortino, and X. Deng, “High-performance, large area, deep ultraviolet to infrared polarizers based on 40 nm line/78 nm space nanowire grids,” Appl. Phys. Lett. 90(6), 061104 (2007).
[Crossref]

Shaw, J. A.

Tyo, J. S.

B. M. Ratliff, C. F. LaCasse, and J. S. Tyo, “Interpolation strategies for reducing IFOV artifacts in microgrid polarimeter imagery,” Opt. Express 17(11), 9112–9125 (2009).
[Crossref] [PubMed]

D. L. Bowers, J. K. Boger, L. D. Wellems, S. E. Ortega, M. P. Fetrow, J. E. Hubbs, W. T. Black, B. M. Ratliff, and J. S. Tyo, “Unpolarized calibration and nonuniformity correction for long-wave infrared microgrid imaging polarimeters,” Opt. Eng. 47(4), 046403 (2008).
[Crossref]

J. S. Tyo, D. L. Goldstein, D. B. Chenault, and J. A. Shaw, “Review of passive imaging polarimetry for remote sensing applications,” Appl. Opt. 45(22), 5453–5469 (2006).
[Crossref] [PubMed]

Van der Spiegel, J.

V. Gruev, Z. Yang, J. Van der Spiegel, and R. Etienne-Cummings, “Current mode image sensor with two transistors per pixel,” IEEE Trans. Circuits Syst. I 57(6), 1154–1165 (2010).
[Crossref]

V. Gruev, A. Ortu, N. Lazarus, J. Van der Spiegel, and N. Engheta, “Fabrication of a dual-tier thin film micropolarization array,” Opt. Express 15(8), 4994–5007 (2007).
[Crossref] [PubMed]

Ventura, P.

D. L. Gilblom, S. K. Yoo, and P. Ventura, “Real-time color imaging with a CMOS sensor having stacked photodiodes,” Proc. SPIE 5210, 105–115 (2004).
[Crossref]

D. L. Gilblom, S. K. Yoo, and P. Ventura, “Operation and performance of a color image sensor with layered photodiodes,” Proc. SPIE 5074, 318–331 (2003).
[Crossref]

Walters, F.

J. J. Wang, F. Walters, X. Liu, P. Sciortino, and X. Deng, “High-performance, large area, deep ultraviolet to infrared polarizers based on 40 nm line/78 nm space nanowire grids,” Appl. Phys. Lett. 90(6), 061104 (2007).
[Crossref]

Wang, J. J.

J. J. Wang, F. Walters, X. Liu, P. Sciortino, and X. Deng, “High-performance, large area, deep ultraviolet to infrared polarizers based on 40 nm line/78 nm space nanowire grids,” Appl. Phys. Lett. 90(6), 061104 (2007).
[Crossref]

Wellems, L. D.

D. L. Bowers, J. K. Boger, L. D. Wellems, S. E. Ortega, M. P. Fetrow, J. E. Hubbs, W. T. Black, B. M. Ratliff, and J. S. Tyo, “Unpolarized calibration and nonuniformity correction for long-wave infrared microgrid imaging polarimeters,” Opt. Eng. 47(4), 046403 (2008).
[Crossref]

Wilson, D. W.

D. A. Miller, D. W. Wilson, and E. L. Dereniak, “Novel design and alignment of wire-grid diffraction gratings on a visible focal plane array,” Opt. Eng. 51(1), 014001 (2012).
[Crossref]

Wolff, L. B.

H. Chen and L. B. Wolff, “Polarization phase-based method for material classification in computer vision,” Int. J. Comput. Vis. 28(1), 73–83 (1998).
[Crossref]

Xu, X.

X. Xu, M. Kulkarni, A. Nehorai, and V. Gruev, “A correlation-based interpolation algorithm for division-of-focal-plane polarization sensors,” Proc. SPIE 8364, 83640L, 83640L-8 (2012).
[Crossref]

Yang, Z.

V. Gruev, Z. Yang, J. Van der Spiegel, and R. Etienne-Cummings, “Current mode image sensor with two transistors per pixel,” IEEE Trans. Circuits Syst. I 57(6), 1154–1165 (2010).
[Crossref]

Yaroslavsky, A. N.

A. N. Yaroslavsky, V. Neel, and R. R. Anderson, “Demarcation of nonmelanoma skin cancer margins in thick excisions using multispectral polarized light imaging,” J. Invest. Dermatol. 121(2), 259–266 (2003).
[Crossref] [PubMed]

Yemelyanov, K. M.

Yoo, S. K.

D. L. Gilblom, S. K. Yoo, and P. Ventura, “Real-time color imaging with a CMOS sensor having stacked photodiodes,” Proc. SPIE 5210, 105–115 (2004).
[Crossref]

D. L. Gilblom, S. K. Yoo, and P. Ventura, “Operation and performance of a color image sensor with layered photodiodes,” Proc. SPIE 5074, 318–331 (2003).
[Crossref]

York, T.

Appl. Opt. (4)

Appl. Phys. Lett. (1)

J. J. Wang, F. Walters, X. Liu, P. Sciortino, and X. Deng, “High-performance, large area, deep ultraviolet to infrared polarizers based on 40 nm line/78 nm space nanowire grids,” Appl. Phys. Lett. 90(6), 061104 (2007).
[Crossref]

Astrophys. J. (1)

R. Antonucci and J. Miller, “Spectropolarimetry and the nature of NGC 1068,” Astrophys. J. 297, 621–632 (1985).
[Crossref]

IEEE J. Oceanic Eng. (1)

Y. Y. Schechner and N. Karpel, “Recovery of underwater visibility and structure by polarization analysis,” IEEE J. Oceanic Eng. 30(3), 570–587 (2005).
[Crossref]

IEEE Trans. Circuits Syst. I (1)

V. Gruev, Z. Yang, J. Van der Spiegel, and R. Etienne-Cummings, “Current mode image sensor with two transistors per pixel,” IEEE Trans. Circuits Syst. I 57(6), 1154–1165 (2010).
[Crossref]

Int. J. Comput. Vis. (2)

H. Chen and L. B. Wolff, “Polarization phase-based method for material classification in computer vision,” Int. J. Comput. Vis. 28(1), 73–83 (1998).
[Crossref]

S. K. Nayar, X. S. Fang, and T. Boult, “Separation of reflection components using color and polarization,” Int. J. Comput. Vis. 21(3), 163–186 (1997).
[Crossref]

J. Invest. Dermatol. (1)

A. N. Yaroslavsky, V. Neel, and R. R. Anderson, “Demarcation of nonmelanoma skin cancer margins in thick excisions using multispectral polarized light imaging,” J. Invest. Dermatol. 121(2), 259–266 (2003).
[Crossref] [PubMed]

J. Opt. Soc. Am. A (1)

J. Soc. Photogr. Sci. Technol. Jpn. (1)

A. Rush and P. Hubel, “X3 sensor characteristics,” J. Soc. Photogr. Sci. Technol. Jpn. 66, 57–60 (2003).

Opt. Eng. (4)

D. L. Bowers, J. K. Boger, L. D. Wellems, S. E. Ortega, M. P. Fetrow, J. E. Hubbs, W. T. Black, B. M. Ratliff, and J. S. Tyo, “Unpolarized calibration and nonuniformity correction for long-wave infrared microgrid imaging polarimeters,” Opt. Eng. 47(4), 046403 (2008).
[Crossref]

D. Lemke, F. Garzon, H. P. Gemuend, U. Groezinger, I. Heinrichsen, U. Klaas, W. Kraetschmer, E. Kreysa, P. Luetzow-Wentzky, and J. Schubert, “Far-infrared imaging, polarimetry, and spectrophotometry on the Infrared Space Observatory,” Opt. Eng. 33(1), 20–25 (1994).
[Crossref]

D. Sabatke, A. Locke, E. L. Dereniak, M. Descour, J. Garcia, T. Hamilton, and R. W. McMillan, “Snapshot imaging spectropolarimeter,” Opt. Eng. 41(5), 1048–1054 (2002).
[Crossref]

D. A. Miller, D. W. Wilson, and E. L. Dereniak, “Novel design and alignment of wire-grid diffraction gratings on a visible focal plane array,” Opt. Eng. 51(1), 014001 (2012).
[Crossref]

Opt. Express (4)

Proc. SPIE (6)

X. Xu, M. Kulkarni, A. Nehorai, and V. Gruev, “A correlation-based interpolation algorithm for division-of-focal-plane polarization sensors,” Proc. SPIE 8364, 83640L, 83640L-8 (2012).
[Crossref]

D. L. Gilblom, S. K. Yoo, and P. Ventura, “Operation and performance of a color image sensor with layered photodiodes,” Proc. SPIE 5074, 318–331 (2003).
[Crossref]

D. L. Gilblom, S. K. Yoo, and P. Ventura, “Real-time color imaging with a CMOS sensor having stacked photodiodes,” Proc. SPIE 5210, 105–115 (2004).
[Crossref]

T. York and V. Gruev, “Calibration method for division of focal plane polarimeters in the optical and near-infrared regime,” Proc. SPIE 8012, 80120H, 80120H-7 (2011).
[Crossref]

M. Kulkarni and V. Gruev, “A division-of-focal-plane spectral-polarization imaging sensor,” Proc. SPIE 8364, 83640K, 83640K-11 (2012).
[Crossref]

R. S. Loe and M. J. Duggin, “Hyperspectral imaging polarimeter design and calibration,” Proc. SPIE 4481, 195–205 (2002).
[Crossref]

Prog. Photovolt. Res. Appl. (1)

M. A. Green and M. J. Keevers, “Optical properties of intrinsic silicon at 300 K,” Prog. Photovolt. Res. Appl. 3(3), 189–192 (1995).
[Crossref]

Other (10)

R. B. Merrill, “Color separation in an active pixel cell imaging array using a triple-well structure,” (US Patent 1999).

B. G. Streetman and S. Banerjee, Solid State Electronic Devices (Prentice Hall, 1995).

S. Shwartz, E. Namer, and Y. Y. Schechner, “Blind haze separation,” Proc. IEEE Comp. Vision and Pat. Recog. 2, 1984–1991 (2006).

D. Miyazaki, R. T. Tan, K. Hara, and K. Ikeuchi, “Polarization-based inverse rendering from a single view,” Proc. IEEE Comp. Vision and Pat. Recog. 9, 982–987 (2003).

J. R. Janesick, Photon Transfer (SPIE Press, 2007).

X. Liu, “CMOS image sensors dynamic range and SNR enhancement via statistical signal processing, ” (Stanford University, 2002).

T. York and V. Gruev, “Optical characterization of a polarization imager,” IEEE International Symposium on Circuits and Systems, 1576–1579 (2011).

Kodak, “KAI-1020 Datasheet,” http://www.truesenseimaging.com/products/interline-transfer-ccd/27-KAI-1020 .

J. Nakamura, Image Sensors and Signal Processing forDigital Still Cameras (Taylor & Francis, 2006).

D. H. Goldstein, Polarized Light (CRC Press, 2010).

Figures (17)

Fig. 1 Spectral-polarization imaging sensor. A ruler and a US quarter are placed next to the camera to provide a sense of scale.

Fig. 2 Wavelength dependence of the absorption depth of light in silicon, for varying levels of absorption.

Fig. 3 A pixel of the spectral image sensor containing vertically stacked photodiodes and associated circuitry.

Fig. 4 Block diagram of the spectral-polarization sensor array. Each pixel is integrated with a nanowire polarization filter with transmission axis at 0°, 45°, 90° or 135°, enabling simultaneous capture of spectral and polarization information.

Fig. 5 Experimental setup for spectral characterization of the integrated spectral-polarization sensor.

Fig. 6 Measured quantum efficiency of each spectral channel of the sensor.

Fig. 7 Linear fit for (a) red, (b) green and (c) blue channel responses as integration time is increased, for 550 nm incident light. The corresponding residual errors are shown in (d), (e) and (f), respectively.

Fig. 8 Mean SNR across a 50 by 50 array (a) plotted against the number of incident photons and (b) normalized by quantum efficiency, for 550 nm incident light.

Fig. 9 Experimental setup for polarimetric characterization of the integrated sensor.

Fig. 10 Polarization responses of the integrated sensor at (a) 550 nm, (b) 650 nm and (c) 480 nm for the three spectral channels over a 50 by 50 pixel region.

Fig. 11 Extinction ratio versus wavelength, averaged over a 50 by 50 array, for (a) 0°, (b) 45°, (c) 90° and (d) 135° polarization pixels.

Fig. 12 Extinction ratio versus integration time at 550 nm, averaged over a 50 by 50 array, for (a) 0°, (b) 45°, (c) 90° and (d) 135° polarization pixels.

Fig. 13 Experimental setup for testing the error in DoLP measurement. A rotating quarter-wave retarder is included in the setup.

Fig. 14 (a) DoLP measured by a single super pixel of the green channel of the integrated sensor plotted against a reference measurement; (b) absolute error in the measured DoLP.

Fig. 15 Spectral image recorded by the integrated sensor. From right to left: a Macbeth color checker chart, a silicon ingot, and a polarization filter wheel form the imaged scene.

Fig. 16 Degree of linear polarization image recorded by the integrated sensor.

Fig. 17 Angle of polarization image recorded by the integrated sensor, shown only where there is significant polarization information in the scene (DoLP > 0.5).

Tables (1)

Table 1 Summary of Sensor Characteristics

Equations (10)

$I(\lambda, x) = I_{\mathrm{incident}} \, e^{-\alpha x}$

$S_0 = 0.5\,(I_0 + I_{45} + I_{90} + I_{135})$

$S_1 = I_0 - I_{90}$

$S_2 = I_{45} - I_{135}$

$\mathrm{DoLP} = \dfrac{\sqrt{S_1^2 + S_2^2}}{S_0}$

$\mathrm{AoP} = \tfrac{1}{2}\tan^{-1}\!\left(\dfrac{S_2}{S_1}\right)$

$\mathrm{QE}(\lambda) = \dfrac{N_e}{N_{ph}}$

$N_{ph} = \dfrac{I \times \lambda \times t_{int} \times A_{pd}}{hc}$

$\mathrm{QE}(\lambda) = \dfrac{hc \times (\Delta V / 0.06)}{I \times \lambda \times t_{int} \times A_{pd}}$

$I_\theta = I\cos^2(\theta - \phi)$
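The polarization and quantum-efficiency relations above map directly onto a short per-superpixel computation. The following Python/NumPy sketch is illustrative only and is not the authors' processing code: the function names, array inputs, and unit handling are assumptions, and the 0.06 conversion factor is copied verbatim from the QE equation in the text.

```python
import numpy as np

# Physical constants (SI units)
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s


def stokes_from_superpixels(i0, i45, i90, i135):
    """Per-superpixel Stokes parameters, DoLP and AoP for one spectral channel.

    Inputs are the intensity images behind the 0, 45, 90 and 135 degree
    nanowire filters; names and any prior calibration steps are assumptions.
    """
    i0, i45, i90, i135 = (np.asarray(a, dtype=float) for a in (i0, i45, i90, i135))
    s0 = 0.5 * (i0 + i45 + i90 + i135)                      # total intensity
    s1 = i0 - i90                                           # 0/90 degree difference
    s2 = i45 - i135                                         # 45/135 degree difference
    dolp = np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-12)
    aop = 0.5 * np.arctan2(s2, s1)                          # arctan2 keeps the correct quadrant
    return s0, s1, s2, dolp, aop


def quantum_efficiency(delta_v, irradiance, wavelength, t_int, a_pd, conv=0.06):
    """QE estimate from the last two relations above:
    N_e = delta_v / conv and N_ph = irradiance * wavelength * t_int * a_pd / (h c).
    The factor 0.06 is taken from the equation in the text; all quantities must
    be supplied in mutually consistent units.
    """
    n_e = delta_v / conv
    n_ph = irradiance * wavelength * t_int * a_pd / (H * C)
    return n_e / n_ph
```

A mask such as the DoLP > 0.5 threshold used for Fig. 17 can then be applied to the output of `stokes_from_superpixels` before displaying the angle-of-polarization image.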
