Abstract

Polarization measurements conducted with a polarization camera using the Sony IMX 250 MZR polarization image sensor are assessed with the super-pixel calibration technique and a simple test setup. We define an error that quantifies the quality of the polarization measurements. Multiple factors influencing the measurement quality of the polarization camera are investigated and discussed. We demonstrate that polarization measurements are generally consistent throughout the sensor if not corrupted by large chief ray angles or large angles of incidence. The central ${600} \times {400}\;{\rm pixels}$ were analyzed, and it is shown that sufficiently large $ f $-numbers no longer influence measurement quality. We also argue that lens design and focal length have little influence on these central pixels. The findings of this study provide useful guidance for researchers using such a polarization image sensor.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. INTRODUCTION

Polarization refers to the geometric orientation of a transverse electromagnetic wave, and the Stokes parameters can be used to describe the state of polarization. The corresponding Stokes vector $\vec S$ is defined as [1]

$$\vec S = \left[{\begin{array}{* {20}{c}}{{S_0}}\\{{S_1}}\\{{S_2}}\\{{S_3}}\end{array}} \right] = \left[{\begin{array}{* {20}{c}}{\frac{{{I_0} + {I_{45}} + {I_{90}} + {I_{135}}}}{2}}\\{{I_0} - {I_{90}}}\\{{I_{45}} - {I_{135}}}\\{{I_R} - {I_L}}\end{array}} \right],$$
wherein ${I_0}$, ${I_{45}}$, ${I_{90}}$, ${I_{135}}$ represent the intensities of light in directions 0°, 45°, 90°, 135°, as indicated by their subscripts. They can be measured by orientating a linear polarizer accordingly. The intensities ${I_R}$, ${I_L}$ correspond to right and left circular polarizations, and their measurement requires additional components such as circular polarizers. Stokes parameters are hence defined by means of intensities and their differences, and can therefore describe not only fully polarized light but non-polarized and partially polarized light as well. With a known Stokes vector, the degree of polarization (DOP) and its orientation can be calculated. Commercially available polarization image sensors such as the Sony IMX 250 MZR sensor [2] apply a polarizer filter array (PFA) as a division-of-focal-plane polarimeter. This constitutes a four-directional polarizer. The four linear polarizers with orientation axes 0°, 45°, 90°, and 135° are arranged in a specific spatial pattern, as indicated in Fig. 1.
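As a concrete illustration of Eq. (1), the first three Stokes parameters can be computed directly from the four measured intensities. A minimal sketch in Python (the function name is ours, not from the text; $S_3$ is omitted because it would require circular analyzers):

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """First three Stokes parameters from the four linear-polarizer
    intensities, following Eq. (1). S3 is not computed: it would
    require circular analyzers that linear polarizers cannot replace."""
    s0 = (i0 + i45 + i90 + i135) / 2.0
    s1 = i0 - i90
    s2 = i45 - i135
    return np.array([s0, s1, s2])
```

For fully polarized horizontal light obeying Malus's law, $I(\theta)=\cos^2\theta$, the inputs (1, 0.5, 0, 0.5) yield $S_0=1$, $S_1=1$, $S_2=0$, as expected.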

Fig. 1. Structure of the Sony polarization image sensor. Figure adapted from [2].


Each linear polarizer is covered by an on-chip micro-lens, and the passing light intensity is captured by individual sensor pixels. A set of four neighboring pixels, each with different polarizer orientations, forms what is known as a super-pixel [3]. With the definition in Eq. (1), each super-pixel can measure the first three Stokes parameters ${S_0}$, ${S_1}$, ${S_2}$. The fourth Stokes parameter ${S_3}$ requires knowledge of the rotational direction of the light and therefore cannot be measured with linear polarizers alone. To achieve a full Stokes imaging polarimeter, different approaches have been proposed by researchers [4–8]. Polarization image sensors applied in polarization cameras measure Stokes parameters with a resolution defined by the corresponding super-pixels. These types of cameras are being more frequently used in various applications [9–15] because of their ability to measure four different polarization directions with a single snapshot. For example, with the theory in [16], a polarization camera can measure two-dimensional birefringence with a single exposure. As polarization imaging systems are of interest in various applications [17–25], the increased availability of polarization cameras is thought to promote the use of such systems.
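The super-pixel structure means a raw frame is a mosaic that must be split into four quarter-resolution images before Stokes processing. A sketch of this demosaicing step (the 2×2 offsets in `PATTERN` are illustrative assumptions; they must be matched to the actual layout shown in Fig. 1 for a real sensor):

```python
import numpy as np

# Offsets of each polarizer orientation inside the repeating 2x2 cell.
# These placements are an assumption for illustration only; verify them
# against the sensor's documented PFA layout before use.
PATTERN = {"I90": (0, 0), "I45": (0, 1), "I135": (1, 0), "I0": (1, 1)}

def split_superpixels(raw):
    """Split a raw PFA mosaic into four quarter-resolution images,
    one per polarizer orientation (one sample per super-pixel)."""
    return {name: raw[r::2, c::2] for name, (r, c) in PATTERN.items()}
```

Applied to a 2448 × 2048 frame, this yields four 1224 × 1024 images, matching the super-pixel resolution stated in Section 2.A.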

Due to the spatial arrangement, polarization measurements conducted with a polarization camera suffer from sparsity, meaning that each single pixel senses only one polarization direction. The resulting field-of-view errors can be corrected [26,27]. Besides these and general digital imaging errors such as fixed pattern noise and photo response nonuniformity [28] that occur during a sensor readout, the PFA introduces a new type of error due to imperfections in the polarization filters. Each polarization filter will have slightly different optical properties that affect transmission, orientation, and the extinction ratio due to micro-polarizer non-uniformities and orientation misalignments. Various calibration techniques have been presented and summarized by Giménez et al. [29,30], such as single-pixel calibration [3,31], super-pixel calibration [3,32], adjacent super-pixel calibration [28], average analysis matrix calibration [33], and installation calibration [34]. It was found that the relatively simple super-pixel method performs well, and the more advanced approaches reportedly bring no significant advantages. Regarding the training data, Giménez et al. [30] recommend applying at least four different polarization angles and two different dynamic ranges. It was pointed out that the gap between the PFA and the sensor causes measurement errors that depend on the focal length and the $ f $-number [32]. Therefore, a camera ought to be recalibrated when the focal length or the $ f $-number is changed, as a decrease in the focal length or $ f $-number is said to decrease performance. York and Gruev [35] arrived at a similar conclusion regarding the divergence of light. To counteract this effect, the Sony IMX 250 MZR sensor places the PFA below the on-chip micro-lenses, reducing the gap between the PFA and the photodiodes. This is thought to increase the performance of polarization filtering and to decrease the calibration's sensitivity to the focal ratio.
In general, it would be convenient if one calibration were valid for all super-pixels, i.e., if super-pixel performance did not differ significantly within the sensor. The intention of this study is therefore to present a practical optical setup for calibration; study any differences in the super-pixels; propose a definition of a measurement error that quantifies the polarization measurement quality; and discuss other relevant aspects such as pixel position within the sensor, $ f $-number, and camera lens dependencies. This is done using a specific monochrome polarization camera utilizing the Sony IMX 250 MZR polarization image sensor.

2. MATERIAL AND METHODS

A. Optical Setup

The optical setup used in this study is depicted in Fig. 2. A 150 W EKE light bulb in a fiber optic illuminator with an IR blocking glass (transmission $\gt\!{90}\%$ at 400–690 nm; Edmund Optics Inc #64-457) was used as the light source. Two color filters were applied. We generated blue-green light with a bandpass filter (CWL 493 nm, FWHM 120 nm; Edmund Optics Inc #46-051) and red light with a longpass filter (cutoff 620 nm; Edmund Optics Inc #66-055). A broadband hybrid diffuser (${200} \times {200}\;{\rm mm}$, Edmund Optics Inc #36-619) was used as a target on which the lenses were focused. A rotatable linear polarizer (Techspec Glass polarizer 50.8 mm; Edmund Optics Inc #66-183) with an extinction ratio of 10,000:1 was placed in a continuous manual rotation mount (Thorlabs part RSP2/M) directly in front of the lens. The camera was a monochrome polarization camera (Phoenix PHX050S-P, Lucid Vision Labs [36]) containing the Sony IMX 250 MZR sensor [2]. The sensor diagonal is 11.1 mm, and the resolution of the camera is ${2448} \times {2048}\;{\rm pixels}$ with a pixel size of ${3.45} \times {3.45}\;\unicode{x00B5}{\rm m}$. This gives a spatial resolution of ${1224} \times {1024}$ (1.25 MP) super-pixels (compare Fig. 1). The lenses investigated are summarized in Table 1. For each lens, the camera was positioned at the stated object distance to the diffuser and the focus adjusted accordingly. All available focal ratios up to a ratio of 22 were tested. The lenses are commonly used types, chosen to represent a sample of the wide spectrum of available lenses.


Fig. 2. Optical setup consisting of a rotatable linear polarizer and a polarization camera with a mounted lens, focused at an optical diffuser illuminated by color filtered light.


Table 1. Applied Lenses and Tested Focal Lengths, $ f $-Numbers and Object Distances

B. Calibration Procedure

The polarization camera was assessed by implementing the super-pixel calibration technique. In this method, the information from all four individual pixels within one super-pixel is considered jointly. Quantities derived from the measured intensities, such as the degree of linear polarization (DOLP) and the angle of linear polarization (AOLP), are therefore more precise than those obtained with a single-pixel calibration approach. The calibration function

$${\rm Cal}(\vec I) = \underline G \cdot (\vec I - \vec d)$$
consists of a matrix $\underline G$, referred to as gain correction, the measured intensity vector $\vec I = {[{\begin{array}{* {20}{c}}{{I_0}}&{{I_{45}}}&{{I_{90}}}&{{I_{135}}}\end{array}}]^T}$, and a vector $\vec d = {[{\begin{array}{* {20}{c}}{{d_0}}&{{d_{45}}}&{{d_{90}}}&{{d_{135}}}\end{array}}]^T}$, which corrects for sensor dark noise. Equation (2) is well established in the literature [3]. However, some studies neglect the dark noise parameter $\vec d$ [32]. If the training data have to be considered as only partially polarized, meaning that the DOP, defined as
$${\rm DOP} = \frac{{\sqrt {S_1^2 + S_2^2 + S_3^2}}}{{{S_0}}},$$
is below one, i.e., ${\rm DOP} \lt 1$, the Stokes vector should be considered as a superposition of a fully polarized Stokes vector ${\vec S _P}$ and an unpolarized Stokes vector ${\vec S _U}$ [1]
$$\vec S = {\vec S _P} + {\vec S _U} = {\rm DOP}\left[{\begin{array}{* {20}{c}}{{S_0}}\\{{S_1}/{\rm DOP}}\\{{S_2}/{\rm DOP}}\\{{S_3}/{\rm DOP}}\end{array}} \right] + (1 - {\rm DOP})\left[{\begin{array}{* {20}{c}}{{S_0}}\\0\\0\\0\end{array}} \right].$$

In the case of purely linear polarization, the fourth Stokes parameter ${S_3}$ is equal to zero, enabling measurement of the DOP with a polarization camera. Following this approach, unpolarized light equally distributed over the 0°, 45°, 90°, and 135° pixels can be treated without the necessity to add it to the parameter $\vec d$. The intensity values corresponding to the unpolarized Stokes vector ${\vec S _U}$ are to be subtracted prior to calibration. In this study, it can be assumed that the training data are fully polarized (${\rm DOP} = {1}$), as the linear polarizer was placed directly in front of the lens.
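The superposition in Eq. (4) is straightforward to verify numerically. A sketch (function name ours) that splits a Stokes vector into its fully polarized and unpolarized parts:

```python
import numpy as np

def decompose(S):
    """Split a Stokes vector into a fully polarized part and an
    unpolarized part as in Eq. (4)."""
    s0, s1, s2, s3 = S
    dop = np.sqrt(s1**2 + s2**2 + s3**2) / s0          # Eq. (3)
    S_p = np.array([dop * s0, s1, s2, s3])             # fully polarized
    S_u = np.array([(1.0 - dop) * s0, 0.0, 0.0, 0.0])  # unpolarized
    return S_p, S_u
```

The two parts always sum back to the original vector, and for training data with ${\rm DOP}=1$ the unpolarized part vanishes, which is the situation assumed in this study.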

The imaging model that transforms the incoming Stokes vector into the measured intensities $\vec I$ is modeled as

$$\vec I = \underline A \vec S + \vec d .$$

The ideal transfer function $\underline A$ is

$${\underline A _{\rm ideal}} = \frac{1}{2}\left[{\begin{array}{* {20}{c}}1&\quad1&\quad0&\quad0\\1&\quad0&\quad1&\quad0\\1&\quad{- 1}&\quad0&\quad0\\1&\quad0&\quad{- 1}&\quad0\end{array}} \right].$$

The gain correction matrix for the calibration function of a super-pixel is calculated as

$$\underline G = {\underline A _{\rm ideal}} \cdot {\underline A ^ +},$$
where ${\underline A ^ +}$ is the pseudo-inverse of the measured transfer function $\underline A$. This approach calibrates the measured intensities to ${\vec I_{\rm ideal}}$, so that they correspond to an ideal transfer function ${\underline A _{\rm ideal}}$
$${\rm Cal}(\vec I) = \underline G (\vec I - \vec d) = {\underline A _{\rm ideal}}{\underline A ^ +}(\underline A \vec S + \vec d - \vec d) = {\underline A _{\rm ideal}}\vec S = {\vec I _{\rm ideal}}.$$
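The calibration chain of Eqs. (6)–(8) can be checked numerically. A sketch with an arbitrarily perturbed transfer function (all numeric perturbation values are illustrative, not measured):

```python
import numpy as np

# Ideal transfer function of Eq. (6).
A_IDEAL = 0.5 * np.array([[1.0,  1.0,  0.0, 0.0],
                          [1.0,  0.0,  1.0, 0.0],
                          [1.0, -1.0,  0.0, 0.0],
                          [1.0,  0.0, -1.0, 0.0]])

def gain_matrix(A):
    """Gain correction G = A_ideal * A+ (Eq. (7)), where A+ is the
    Moore-Penrose pseudo-inverse of the measured transfer function."""
    return A_IDEAL @ np.linalg.pinv(A)

def calibrate(I, G, d=None):
    """Cal(I) = G * (I - d), Eq. (2); dark noise d defaults to zero."""
    if d is None:
        d = np.zeros_like(I)
    return G @ (I - d)
```

For a measured $\underline A$ with small imperfections, `calibrate(A @ S, gain_matrix(A))` reproduces `A_IDEAL @ S` for any linear Stokes input, which is exactly the statement of Eq. (8).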

For each calibration, training data for six different (linear) polarization angles {0°, 30°, 60°, 90°, 120°, 150°} at each of the three levels of dynamic range {50%, 70%, 90%} are acquired. This is considered to provide sufficient training data for calibration [30]. For each angle and at each dynamic range level, 10 images were taken and averaged. This results in a total of 180 images per calibration. The different dynamic range levels were obtained by varying the set exposure time. The images were taken at 12-bit resolution and without additional electronic gain (0 dB). For each lens, each $ f $-number, wavelength, and distance setting, the exposure time was adjusted so that the desired dynamic ranges were obtained. We use the normalized Stokes vector definition

$${\vec S _N} = \frac{1}{{{S_0}}}\vec S = \frac{2}{{{I_0} + {I_{45}} + {I_{90}} + {I_{135}}}}\vec S,$$
and normalized intensity vector and dark noise vector accordingly
$${\vec I _N} = \frac{2}{{{I_0} + {I_{45}} + {I_{90}} + {I_{135}}}}\left[{\begin{array}{* {20}{c}}{{I_0}}\\{{I_{45}}}\\{{I_{90}}}\\{{I_{135}}}\end{array}} \right],$$
and
$${\vec d _N} = \frac{2}{{{I_0} + {I_{45}} + {I_{90}} + {I_{135}}}}\left[{\begin{array}{* {20}{c}}{{d_0}}\\{{d_{45}}}\\{{d_{90}}}\\{{d_{135}}}\end{array}} \right],$$
so that we can rewrite Eq. (5) as
$${\vec I _N} - {\vec d _N} = \underline A \,{\vec S _N}.$$
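The normalization of Eqs. (9)–(11) applies the same scale factor $2/({I_0}+{I_{45}}+{I_{90}}+{I_{135}})$ to both vectors; a small sketch (function name ours):

```python
import numpy as np

def normalize(I, d):
    """Normalized intensity and dark-noise vectors, Eqs. (10)-(11).
    Both are scaled by 2 / (I0 + I45 + I90 + I135), so the model of
    Eq. (12) operates on the unit-S0 Stokes vector of Eq. (9)."""
    scale = 2.0 / np.sum(I)
    return scale * I, scale * d
```

Note that for $\vec I \gg \vec d$ the normalized dark noise tends toward zero, which is the argument used later for neglecting $\vec d$.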

The rotatable linear polarizer was aligned to the 0° direction of the polarization camera. However, as the manual arrangement with its 2° increment scale could introduce alignment errors, a misalignment correction factor $\Delta \phi$ is introduced. The normalized Stokes vector of linearly polarized light with polarization angle $\phi$ can be calculated with Mueller matrices. Together with the misalignment correction factor $\Delta \phi$, this leads to

$${\vec S _{N,{\rm linear}}}(\phi) = \left[{\begin{array}{* {20}{c}}1\\{\cos 2(\phi + \Delta \phi)}\\{\sin 2(\phi + \Delta \phi)}\\0\end{array}} \right].$$
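Given measured normalized Stokes parameters at the set polarizer angles, the misalignment $\Delta \phi$ in Eq. (13) can be recovered by comparing the measured AOLP with the set angles. A closed-form sketch using a wrapped angular mean (a simplification of ours, not the paper's optimization routine; it assumes $\Delta \phi$ is small):

```python
import numpy as np

def fit_misalignment(phi, s1, s2):
    """Estimate the mount misalignment dphi from measured normalized
    Stokes parameters s1, s2 at the set polarizer angles phi (radians).
    Replaces the optimization routine described in the text with a
    wrapped mean of the angular residuals."""
    # residual 2*(phi + dphi) - 2*phi per setting, wrapped to (-pi, pi]
    resid = np.angle(np.exp(1j * (np.arctan2(s2, s1) - 2.0 * phi)))
    return 0.5 * float(np.mean(resid))
```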

The misalignment correction factor $\Delta \phi$ is estimated in an optimization routine by fitting the measured Stokes parameters ${S_{{1_N}}}(\phi)$ and ${S_{{2_N}}}(\phi)$ to the expected distributions in Eq. (13). For known ${\vec I _N}$, ${\vec d _N}$, and ${\vec S _N}$, the matrix $\underline A$ in Eq. (12) is estimated by a second optimization routine with starting point ${\underline A _{\rm ideal}}$. For the fitting of Eqs. (12) and (13), all three dynamic range levels were considered. We propose a relative error as the difference between measured normalized intensity vector ${\vec I _{N,{\rm linear}}}$ and calibrated (ideal) normalized intensity vector ${\vec I _{N,{\rm linear},{\rm ideal}}}$ in relation to the latter. By making use of the compatibility between Euclidean and Frobenius norms, indicated with ${\| \|_2}$ and ${\|\|_F}$, respectively, we obtain

$$\begin{split}&\frac{{{{\big\| {{{\vec I}_{N,{\rm linear}}} - {{\vec I}_{N,{\rm linear},{\rm ideal}}}} \big\|}_2}}}{{{{\big\| {{{\vec I}_{N,{\rm linear},{\rm ideal}}}} \big\|}_2}}}\\[-2pt] &= \frac{{{{\big\| {\underline A \,{{\vec S}_{N,{\rm linear}}} - {{\underline A}_{\rm ideal}}\,{{\vec S}_{N,{\rm linear}}}} \big\|}_2}}}{{{{\big\| {{{\underline A}_{\rm ideal}}\,{{\vec S}_{N,{\rm linear}}}} \big\|}_2}}} = \frac{{{{\big\| {(\underline A - {{\underline A}_{\rm ideal}}) \cdot {{\vec S}_{N,{\rm linear}}}} \big\|}_2}}}{{\frac{{\sqrt 6}}{2}}}\\ &\quad\le\frac{2}{{\sqrt 6}}{\big\| {\underline A - {{\underline A}_{\rm ideal}}} \big\|_F}{\big\| {{{\vec S}_{N,{\rm linear}}}} \big\|_2} = \frac{2}{{\sqrt 3}}{\big\| {\underline A - {{\underline A}_{\rm ideal}}} \big\|_F}.\end{split}$$

For the equation above, it is important to note that the Euclidean norm of the normalized linear Stokes vector in Eq. (13) is $\sqrt 2$. We define the result of Eq. (14) as our error estimation Err

$${Err}: = \frac{2}{{\sqrt 3}}{\big\| {\underline A - {{\underline A}_{\rm ideal}}} \big\|_F}.$$

It will therefore serve as a parameter that quantifies the amount of required calibration and the measurement error of a polarization camera when used uncalibrated. The relative error Err is based on linear polarized light with ${\rm DOP} = {1}$. The upper limit of the absolute measurement error ${\| {{{\vec I}_N} - {{\vec I}_{N,{\rm ideal}}}} \|_2}$ for partially polarized or unpolarized light is smaller, due to the smaller Euclidean norm of the corresponding Stokes vector [compare Eq. (3)]

$$\begin{split}{\big\| {\vec S} \big\|_2} &= \sqrt {S_0^2 + S_1^2 + S_2^2 + S_3^2}\\[-2pt]& = \sqrt {S_0^2 + {{({\rm DOP} \cdot {S_0})}^2}} \le \sqrt {2 \cdot S_0^2} = {\big\| {{{\vec S}_{{\rm DOP} = 1}}} \big\|_2}.\end{split}$$

In conclusion, the measurement error of a polarization camera is maximum when measuring completely linearly polarized light defined with ${\rm DOP} = {1}$ and ${S_0} = \sqrt {S_1^2 + S_2^2}$. As mentioned in Section 1, a polarization camera is not able to measure circularly polarized light, and thus ${S_3} = 0$ is assumed here.
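The error parameter can be evaluated directly from a fitted transfer function; Eq. (15) in code (the perturbation used in the check below is illustrative):

```python
import numpy as np

# Ideal transfer function of Eq. (6).
A_IDEAL = 0.5 * np.array([[1.0,  1.0,  0.0, 0.0],
                          [1.0,  0.0,  1.0, 0.0],
                          [1.0, -1.0,  0.0, 0.0],
                          [1.0,  0.0, -1.0, 0.0]])

def relative_error(A):
    """Err = (2 / sqrt(3)) * ||A - A_ideal||_F, Eq. (15)."""
    return 2.0 / np.sqrt(3.0) * np.linalg.norm(A - A_IDEAL, ord="fro")
```

An ideal super-pixel gives ${Err} = 0$; a single matrix entry perturbed by $\sqrt{3}/2$ gives ${Err} = 1$, consistent with the prefactor in Eq. (15).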

C. Image Analysis

For the analysis in Section 3.C, a region of interest of ${600} \times {400}\;{\rm pixels}$ in the center of the sensor was selected. The same pixels were investigated in each measurement. This gives ${300} \times {200}$ analyzed super-pixels and hence 60,000 solutions of $\underline A$ for Eq. (12). We limited the number of analyzed pixels for two reasons: first, due to the large number of required optimization routines and the consequent computational cost that would result if all ${1224} \times {1024}$ super-pixels were considered for every lens assessment; second, due to potential distortions related to geometrical optics [37,38]. The chief ray angle, the angle between the principal ray and the optical axis, is a function of the distance between the exit pupil (the relevant aperture as seen from the image plane) and the sensor, and of the pixel position on the sensor. If the distance between the exit pupil and the sensor is small, the chief ray angle for the pixels in the periphery of the sensor may prevent the rays from being properly focused onto the corresponding photodiodes. This can cause pixel vignetting and cross talk. Pixels in the center of the sensor are illuminated by principal rays that are largely parallel to the optical axis, and hence the microlenses are able to focus light correctly onto the photodiodes. By choosing the region in the center of the sensor, we try to avoid corruption from large chief ray angles.

Light from the exit pupil entering a microlens is shaped like a cone, with the principal ray at its center. The half-angle $\theta$ of the cone is the angle at which the marginal ray enters the microlens and can be described by the numerical aperture. Increasing the $ f $-number decreases this angle, as the rays that remain able to enter the microlens become increasingly parallel to each other. Decreasing the $ f $-number by opening the aperture enables more light to reach the microlens by widening the light cone, which implies that the additional rays enter the microlens at an increased angle. Very small $ f $-numbers lead to rays with large angles of incidence that may not be correctly focused onto the photodiode. Moreover, large angles of incidence reduce the polarization efficiency of the PFA due to the dependence of polarizers on the angle of incidence [39].

In Section 3.B, the individual super-pixels are studied, and we evaluate the uniformity of the polarization measurement across the sensor. Potential corruption from large chief ray angles and from marginal rays with large angles of incidence $\theta$ is avoided as far as practically possible. We conducted measurements with the Nikon Micro-Nikkor 105 mm 1:2.8 lens and set the $ f $-number to 22. This gives marginal rays with a small incidence angle of about $\theta \approx {\sin ^{- 1}}[1/(2 \cdot 22)] \approx 1.3^\circ$. The distance between the exit pupil and sensor is lens type dependent, but a lens with a large focal length usually comes with a larger exit pupil distance than a lens with a small focal length.
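The marginal-ray estimate used above, $\sin \theta = 1/(2 \cdot f\text{-number})$, is easy to tabulate:

```python
import math

def marginal_ray_angle_deg(f_number):
    """Half-angle of the light cone entering a microlens, from the
    paraxial estimate sin(theta) = 1 / (2 * f-number)."""
    return math.degrees(math.asin(1.0 / (2.0 * f_number)))
```

For f/22 this gives about 1.3°, and for f/0.95 about 32°, the two extremes discussed in this paper.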


Fig. 3. Histogram of the pixel counts (12-bit) for an exposure time of 1 s.


Fig. 4. Super-pixel assessment. (a) Measured versus calibrated data for a single super-pixel. (b) Histogram of the relative error Err for all super-pixels. Overall mean: 0.024 and standard deviation: 0.006. (c) Variation of the relative error Err for all ${1224} \times {1024}$ super-pixels. Lens: Nikon Micro-Nikkor 105 mm 1:2.8 with f/22. Wavelength: 413–573 nm.


3. RESULTS AND DISCUSSION

A. Effect of Dark Noise

Figure 3 shows the distribution of dark noise pixel counts (12-bit) for an exposure time of 1 s. The image was acquired with a lid covering the camera to block the surrounding light. The pixels are sorted according to their position within the polarization filter array: 0°, 45°, 90°, 135°. The dark noise pixel counts are small with respect to the available gray levels (4096). There seems to be a slight difference between the 0°–135° pixels and the 45°–90° pixels. The 45°–90° pixels, which lie on the same horizontal sensor row, perform slightly better than their counterparts. If the measured intensities are kept sufficiently large ($\vec I \gg \vec d$), the overall influence of the dark noise should be negligible, as the normalized dark noise vector in Eq. (12) approaches zero due to Eq. (11): ${\vec d _N} \to 0$. We therefore propose that it is feasible not to model the dark noise via the vector $\vec d$, provided that the measured intensity values are sufficiently large compared to the dark noise. In Sections 3.B and 3.C, we consequently neglect the impact of dark noise, i.e., we set $\vec d = 0$.

B. Consistency of the ${1224} \times {1024}$ Super-Pixels

The analyzed images in this section were taken with a Nikon Micro-Nikkor 105 mm 1:2.8. The settings were f/22, and exposure times of 1440 ms, 1920 ms, and 2400 ms were used to measure at three dynamic ranges. As discussed in Section 2.C, the large focal length of 105 mm was chosen to minimize distortion from large chief ray angles, and the high $ f $-number leads to rays that are largely parallel to each other. The blue-green bandpass filter was applied, limiting the wavelengths to a range of 413–573 nm. Each super-pixel was assessed by applying Eqs. (12) and (13), and the relative error Err was calculated using Eq. (15). Mean and standard deviation (Std) of the ${1224} \times {1024}$ correction factors $\Delta \phi$ in Eq. (13) are 0.26° and 0.09°, respectively. Figure 4(a) shows measured and calibrated data for an exemplary super-pixel. Figure 4(b) depicts all ${1224} \times {1024}$ super-pixel errors Err. The overall mean is 0.024 with a Std of 0.006. Most super-pixels produce errors below 4%. The mean of the fitted transfer functions $\underline A$ is

$${\rm mean}(\underline A) = \frac{1}{2}\left[{\begin{array}{* {20}{c}}{0.989}&\quad{0.980}&\quad{- 0.001}&\quad0\\{1.015}&\quad{- 0.001}&\quad{1.003}&\quad0\\{0.986}&\quad{- 0.980}&\quad{- 0.002}&\quad0\\{1.011}&\quad0&\quad{- 1.001}&\quad0\end{array}} \right],$$
with a Std of
$${\rm Std}(\underline A) = \left[{\begin{array}{* {20}{c}}{0.003}&\quad{0.003}&\quad{0.002}&\quad0\\{0.003}&\quad{0.002}&\quad{0.003}&\quad0\\{0.003}&\quad{0.003}&\quad{0.002}&\quad0\\{0.003}&\quad{0.002}&\quad{0.003}&\quad0\end{array}} \right].$$

Figure 4(c) shows the relative error Err for all analyzed ${1224} \times {1024}$ super-pixels. No influence of the sensor position can be identified. The “stripes” in the image are thought to be caused by the column-parallel readout of the CMOS sensor. We conclude that super-pixel polarization measurements are generally consistent across the sensor. As the Stds in Eq. (18) are an order of magnitude smaller than the deviations of the mean entries in Eq. (17) from their ideal values, we also conclude that all super-pixels perform similarly, and hence the mean of the transfer matrix $\underline A$ is a valid estimate for all super-pixels.


Fig. 5. Measurement results for the Schneider Xenon 25 mm f/0.95 lens. (a) Box plots of the relative error Err for the ${300} \times {200}$ analyzed super-pixels plotted as function of the $ f $-number. (b) Measured versus calibrated data for a single super-pixel with ${Err} = {10.8}\%$ taken with f/0.95. Measurements at position 1 ($\phi = 0$) are more accurate than at position 2 ($\phi = \pi /2$).


C. Influence of Focal Ratio and Choice of Lens

The results described in this section have been obtained by analyzing the images in the way described in Section 2.C. For the reasons given, a region of ${600} \times {400}$ pixels in the center of the sensor was selected. No dark noise effect was taken into account. For the lenses summarized in Table 1, all available $ f $-numbers up to f/22 have been tested by changing the aperture. The results obtained for the Schneider Xenon 25 mm f/0.95 lens and with red light are shown in Fig. 5(a). All relative errors Err of the ${300} \times {200}$ super-pixels are plotted as a function of the $ f $-number. For f/0.95, the relative errors are between 9.6% and 12.3%, and for f/11, between 2% and 5% (except for two outliers). Differences within the super-pixels are significantly smaller compared to the variations caused by changing the aperture. Mean values of the transfer functions $\underline A$ for the f/0.95 configuration are

$${\rm mean}(\underline A) = \frac{1}{2}\left[{\begin{array}{* {20}{c}}{1.038}&\quad{0.943}&\quad{0.001}&\quad0\\{0.998}&\quad{- 0.029}&\quad{0.935}&\quad0\\{0.939}&\quad{- 0.878}&\quad{- 0.001}&\quad0\\{1.025}&\quad{- 0.037}&\quad{- 0.935}&\quad0\end{array}} \right],$$
with Stds of
$${\rm Std}(\underline A) = \left[{\begin{array}{* {20}{c}}{0.003}&\quad{0.002}&\quad{0.001}&\quad0\\{0.003}&\quad{0.001}&\quad{0.002}&\quad0\\{0.003}&\quad{0.002}&\quad{0.002}&\quad0\\{0.003}&\quad{0.002}&\quad{0.002}&\quad0\end{array}} \right].$$

The Stds in Eq. (20) are small, indicating similar performance of the super-pixels across the sensor area and supporting the conclusion of Section 3.B. Looking at Fig. 5(b), we can see that the measurements at position 2 ($\phi = \pi /2$, polarization parallel to ${I_{90}}$) are less accurate compared to those at position 1 ($\phi = 0$, polarization parallel to ${I_0}$). It seems that the values for ${S_{{1_N}}}$ are slightly too high at position 2, indicating that the intensity measurements of the ${I_0}$ pixels are either too high or that the ${I_{90}}$ measurements are too low. The results for ${S_{{2_N}}}$ do not show this mismatch, indicating that the ${I_{45}}$ and ${I_{135}}$ pixels perform similarly. However, mismatching measurements for positions 1 and 2 were observed at low $ f $-numbers and not at higher $ f $-numbers. Besides comparing Fig. 5(b) with Fig. 4(a), measurement positions 1 and 2 can also be studied by means of entries ${A_{11}}$, ${A_{12}}$, ${A_{31}}$, ${A_{32}}$ in Eqs. (19) and (17).

The results for all lenses are shown in Fig. 6. The depicted values are the means of the calculated relative errors Err of the ${300} \times {200}$ analyzed super-pixels for each test case. We distinguish between blue-green light in Fig. 6(a) and red light in Fig. 6(b). Selected full transfer functions $\underline A$ for some of the results can be found in Appendix A in Tables 3–5. The overall performance of the blue-green light measurements is better. This is assumed to be due to a higher omnidirectional extinction ratio of the IMX250MZR sensor for wavelengths in the blue-green range compared to the longer wavelengths of red light. Sony states an extinction ratio of 330 for a wavelength of 500 nm compared to a ratio of 130 for a wavelength of 650 nm [2]. The results for Err are very close for nearly all investigated lenses, indicating good measurement repeatability as well as a lack of focal length dependency. The focal length could, however, have an influence on the pixels in the periphery of the sensor due to cross talk resulting from large chief ray angles. Also, the object distance and hence the focus do not seem to influence the results, as both vary from lens to lens. Although these parameters have not been examined individually, the results obtained do not indicate any potential dependency. The only results that do not match the others are those of measurements conducted with red light and the Sill TZM 1260/0.31. As it is the only telecentric lens investigated, we assume its particular telecentric design to be responsible. However, to clearly identify the cause of the deviation and to exclude any potential measurement errors, further investigation would be required.


Fig. 6. Relative error Err for the tested lenses and focal ratios: (a) blue-green light and (b) red light.


Table 2. Measurement Results for LCD Monitor

The influence of the focal ratio on performance is apparent in Figs. 6(a) and 6(b). In the case of blue-green light, focal ratios below 2.8 perform significantly worse. For ratios of 2.8 and above, no differences between lenses can be observed, and the results do not improve with increasing focal ratio. The same trend can be seen for the red wavelengths, but here an increase in the focal ratio up to values of eight seems to consistently improve performance. Focal ratios above 2.8 for blue-green light and above eight for red light seem to have converged to lower limits for Err of approximately 2.4% and 2.7%, respectively. We suppose these limits are more likely linked to the performance of the sensor than to the choice of lens. Similar to the considerations in Section 2.C, we explain the inferior accuracy at low $ f $-numbers with ray optics. Estimating the incident angles $\theta$ of the marginal rays with $\sin \theta = 1/(2 \times f {\text -} {\rm number})$ gives an angle of about 32° for f/0.95. For such large angles of incidence, the polarization efficiency of the PFA is reduced [39]. Moreover, $ f $-numbers this low might not focus the rays correctly. For the sufficiently large $ f $-numbers stated, this effect seems to disappear, and any further increase does not bring noticeable improvements, as the transfer functions $\underline A$ [and consequently the calibration functions, Eq. (2)] for high focal ratios are similar (e.g., Table 3: f/8–f/16, and Table 4: f/2.8–f/11 in Appendix A). We also learn that the choice of lens does not seem to have an influence on the transfer function, as our tested lenses give similar results (see, for example, Table 5 in Appendix A). The transfer function does, however, depend on the wavelength (as seen, for instance, by comparing Table 3: f/8 with Table 5 in Appendix A).
In summary, the focal ratio and the wavelength affect polarization measurements, and researchers should consider these parameters. Since we could not determine any significant differences between the tested lenses, the choice of lens is not otherwise constrained. It is important to note, however, that polarization measurement accuracy in the periphery of the sensor might deteriorate due to increased chief ray angles. This is particularly relevant for small focal lengths.

D. LCD Monitor Test Case

An LCD monitor emitting green light was analyzed. The intensities were calibrated with Eq. (2) using the corresponding transfer matrices for f/0.95 and f/8 (see Table 4 in Appendix A), and the Stokes parameters were calculated with Eq. (1). We measured the AOLP using

$${\rm AOLP} = \frac{1}{2}{\tan ^{- 1}}\frac{{{S_2}}}{{{S_1}}}$$
and the DOLP applying Eq. (3), with ${S_3} = 0$. Eight measurements were conducted and all ${1224} \times {1024}$ super-pixels evaluated. Table 2 summarizes the results. We rotated the camera to change the AOLP. The images ${I_0}$, ${I_{45}}$, ${I_{90}}$, ${I_{135}}$ showed a striped pattern due to the unused red and blue LCD pixels. We applied Gaussian filtering to unify the images. An LCD monitor emits light with ${\rm DOLP} = {100}\%$. A DOLP above 100% is physically impossible, yet some calibrated results exceed this value. Looking at Table 2, it is striking that the measurements for an AOLP close to 0° seem to be more accurate with f/0.95 than with f/8. It is, however, important to note that these are only exemplary measurements, and no general conclusions should be drawn from them. The uncalibrated measurements for f/0.95 are more accurate in the 0° direction than in the 90° direction, which is consistent with Fig. 5(b) and the transfer function in Eq. (19). The maximum measurement errors in Table 2 are around 8% for f/0.95 and 3% for f/8. This is in line with the results for the relative error Err plotted in Fig. 6(a).
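The AOLP and DOLP evaluation follows Eqs. (21) and (3) with ${S_3} = 0$; a sketch (function name ours; `arctan2` replaces the plain $\tan^{-1}({S_2}/{S_1})$ of Eq. (21) so that the quadrant of $({S_1},{S_2})$ is resolved):

```python
import numpy as np

def aolp_dolp(s0, s1, s2):
    """AOLP (Eq. (21)) and DOLP (Eq. (3) with S3 = 0) from the first
    three Stokes parameters. arctan2 resolves the (S1, S2) quadrant."""
    aolp = 0.5 * np.arctan2(s2, s1)        # radians, in (-pi/2, pi/2]
    dolp = np.sqrt(s1**2 + s2**2) / s0
    return aolp, dolp
```

For example, $(S_0, S_1, S_2) = (1, 0, 1)$ yields an AOLP of 45° and a DOLP of 1.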

Table 3. Matrix Entries for Transfer Functions Calculated with a Nikon Nikkor 85 mm 1:1.4 Lens


Table 4. Matrix Entries for Transfer Functions Calculated with a Schneider Xenon 25 mm f/0.95 Lens


Table 5. Comparison of Transfer Functions Calculated with Different Lenses

4. CONCLUSION

In this paper, the Sony polarization image sensor IMX250MZR was calibrated using different lenses, different focal ratios, and two wavelength bands: blue-green (413–573 nm) and red (620–690 nm). The sensor has ${2448} \times {2048}\;{\rm pixels}$ covered by a polarization filter array. Four neighboring single pixels form a super-pixel, resulting in ${1224} \times {1024}$ available super-pixels. We have defined a parameter that quantifies the amount of calibration necessary and thus serves as an indicator of polarization measurement quality. With the help of this parameter, we show that polarization measurements are consistent for all super-pixels within quantifiable variances; therefore, not every super-pixel has to be calibrated individually. We also deduce that dark noise does not significantly corrupt the results and hence does not have to be modeled, provided that the measured intensity values are sufficiently large. Various lenses and focal ratios up to 22 were tested. It was demonstrated that polarization measurements with blue-green light are generally more precise than with red light, as the sensor’s extinction ratio is about three times higher for these shorter wavelengths [2]. Moreover, the choice of lens does not influence polarization measurement quality, but the results indicate that focal ratios below 2.8 disrupt the measurements. We finally determine that measurements conducted with $ f $-numbers above 2.8 (blue-green light) and above eight (red light) are no longer affected by the focal ratio, and the upper error bound in these cases is estimated to be below 4%. This is, however, valid only for the central pixels. For short focal lengths, a potential measurement deterioration towards the peripheral pixels due to an increasing influence of the chief ray angle cannot be excluded.
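For readers implementing super-pixel processing, splitting the raw mosaic into the four orientation channels reduces to strided slicing. The 2 × 2 layout assumed below is illustrative only and should be checked against the sensor's actual pattern (Fig. 1):

```python
import numpy as np

def split_superpixels(raw):
    """Split a PFA mosaic into four channel images, one per orientation.
    The orientation-to-offset mapping here is an assumption; the true
    2x2 pattern is sensor-specific and must be verified (see Fig. 1)."""
    return {
        90:  raw[0::2, 0::2],
        45:  raw[0::2, 1::2],
        135: raw[1::2, 0::2],
        0:   raw[1::2, 1::2],
    }

# A frame with the sensor's full resolution (rows x columns):
raw = np.zeros((2048, 2448))
channels = split_superpixels(raw)
# Each channel has the super-pixel resolution, 1024 x 1224.
print(channels[0].shape)
```

Since each channel image directly holds one intensity per super-pixel, the Stokes parameters of Eq. (1) follow from element-wise arithmetic on these four arrays.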

APPENDIX A: EVALUATED TRANSFER FUNCTIONS FOR THE TESTED LENSES AND FOCAL RATIOS

Listed here are the evaluated transfer functions $\underline A$ for some of the measurements conducted in Section 3.C. They are given in Tables 3–5 in the form of

$$\underline A = \frac{1}{2}\left[{\begin{array}{* {20}{c}}{{a_{11}}}&\quad{{a_{12}}}&\quad{{a_{13}}}&\quad0\\{{a_{21}}}&\quad{{a_{22}}}&\quad{{a_{23}}}&\quad0\\{{a_{31}}}&\quad{{a_{32}}}&\quad{{a_{33}}}&\quad0\\{{a_{41}}}&\quad{{a_{42}}}&\quad{{a_{43}}}&\quad0\end{array}} \right].$$
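For illustration, a transfer matrix of this form (the last column is zero because $S_3$ is not measured) can be inverted by least squares to recover the linear Stokes parameters. The entries below are the ideal textbook values for a four-directional linear polarizer, not the measured coefficients of Tables 3–5, and the pseudo-inverse step is one common calibration approach for division-of-focal-plane polarimeters rather than a reproduction of the paper's Eq. (2):

```python
import numpy as np

# Ideal transfer matrix for a four-directional linear polarizer
# (rows: 0, 45, 90, 135 deg channels; columns: S0, S1, S2).
A = 0.5 * np.array([
    [1.0,  1.0,  0.0],   # I0   = (S0 + S1) / 2
    [1.0,  0.0,  1.0],   # I45  = (S0 + S2) / 2
    [1.0, -1.0,  0.0],   # I90  = (S0 - S1) / 2
    [1.0,  0.0, -1.0],   # I135 = (S0 - S2) / 2
])

# Least-squares inversion via the Moore-Penrose pseudo-inverse.
A_pinv = np.linalg.pinv(A)

# Measured channel intensities for fully polarized light at 0 deg;
# the recovered vector is (S0, S1, S2) = (1, 1, 0).
I_meas = np.array([1.0, 0.5, 0.0, 0.5])
S = A_pinv @ I_meas
print(np.round(S, 6))
```

With a calibrated sensor, the ideal entries would simply be replaced by the measured $a_{ij}$ of the relevant table before computing the pseudo-inverse.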

Disclosures

The authors declare no conflicts of interest.

Data Availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

REFERENCES

1. R. A. Chipman, “Polarimetry,” in Handbook of Optics, M. Bass, E. W. Van Stryland, D. R. Williams, and W. L. Wolfe, eds. (McGraw-Hill, 1995), Vol. 2, pp. 781–783.

2. Sony Semiconductor Solutions Corporation, “Polarization image sensor with four-directional on-chip polarizer and global shutter function,” 2021, https://www.sony-semicon.co.jp/e/products/IS/industry/product/polarization.html.

3. S. Powell and V. Gruev, “Calibration methods for division-of-focal-plane polarimeters,” Opt. Express 21, 21039–21055 (2013). [CrossRef]  

4. S. Shibata, N. Hagen, and Y. Otani, “Robust full Stokes imaging polarimeter with dynamic calibration,” Opt. Lett. 44, 891–894 (2019). [CrossRef]  

5. X. Li, F. Goudail, P. Qi, T. Liu, and H. Hu, “Integration time optimization and starting angle autocalibration of full Stokes imagers based on a rotating retarder,” Opt. Express 29, 9494–9512 (2021). [CrossRef]  

6. Y. Otani, “Snapshot full Stokes imager by polarization cameras and its application to bio-imaging,” Proc. SPIE 11709, 1170904 (2021). [CrossRef]  

7. M. Vedel, S. Breugnot, and N. Lechocinski, “Full Stokes polarization imaging camera,” Proc. SPIE 8160, 81600X (2011). [CrossRef]  

8. X. Li, B. L. Teurnier, M. Boffety, T. Liu, H. Hu, and F. Goudail, “Theory of autocalibration feasibility and precision in full Stokes polarization imagers,” Opt. Express 28, 15268–15283 (2020). [CrossRef]  

9. L. B. Wolff, “Applications of polarization camera technology,” IEEE Expert 10, 30–38 (1995). [CrossRef]  

10. N. Oba and T. Inoue, “An apparatus for birefringence and extinction angle distributions measurements in cone and plate geometry by polarization imaging method,” Rheologica Acta 55, 699–708 (2016). [CrossRef]  

11. S. Iwata, T. Takahashi, T. Onuma, R. Nagumo, and H. Mori, “Local flow around a tiny bubble under a pressure-oscillation field in a viscoelastic worm-like micellar solution,” J. Non-Newtonian Fluid Mech. 263, 24–32 (2019). [CrossRef]  

12. G. Liu, J. Xiong, Y. Cao, R. Hou, L. Zhi, Z. Xia, W. Liu, X. Liu, C. Glorieux, J. H. Marsh, and L. Hou, “Visualization of ultrasonic wave field by stroboscopic polarization selective imaging,” Opt. Express 28, 27096–27106 (2020). [CrossRef]  

13. S. Sattar, P. J. Lapray, A. Foulonneau, and L. Bigué, “Review of spectral and polarization imaging systems,” Proc. SPIE 11351, 113511Q (2020). [CrossRef]  

14. C. Lane, D. Rode, and T. Rösgen, “Optical characterization method for birefringent fluids using a polarization camera,” Opt. Laser. Eng. 146, 106724 (2021). [CrossRef]  

15. C. Lane, D. Rode, and T. Rösgen, “Two-dimensional birefringence measurement technique using a polarization camera,” Appl. Opt. 60, 8435–8444 (2021). [CrossRef]  

16. T. Onuma and Y. Otani, “A development of two-dimensional birefringence distribution measurement system with a sampling rate of 1.3 MHz,” Opt. Commun. 315, 69–73 (2014). [CrossRef]  

17. O. Morel, C. Stolz, F. Meriaudeau, and P. Gorria, “Active lighting applied to three-dimensional reconstruction of specular metallic surfaces by polarization imaging,” Appl. Opt. 45, 4062–4068 (2006). [CrossRef]  

18. S. Tominaga and A. Kimachi, “Polarization imaging for material classification,” Opt. Eng. 47, 123201 (2008). [CrossRef]  

19. M. Ferraton, C. Stolz, and F. Mériaudeau, “Optimization of a polarization imaging system for 3D measurements of transparent objects,” Opt. Express 17, 21077–21082 (2009). [CrossRef]  

20. J. S. Tyo, D. L. Goldstein, D. B. Chenault, and J. A. Shaw, “Review of passive imaging polarimetry for remote sensing applications,” Appl. Opt. 45, 5453–5469 (2006). [CrossRef]  

21. S. Shwartz, E. Namer, and Y. Y. Schechner, “Blind haze separation,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 2006), pp. 1984–1991.

22. Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, “Polarization-based vision through haze,” Appl. Opt. 42, 511–525 (2003). [CrossRef]  

23. T. Treibitz and Y. Y. Schechner, “Active polarization descattering,” IEEE Trans. Pattern Anal. Mach. Intell. 31, 385–399 (2009). [CrossRef]  

24. S. S. Lin, K. M. Yemelyanov, E. N. Pugh, and N. Engheta, “Polarization-based and specular-reflection-based noncontact latent fingerprint imaging and lifting,” J. Opt. Soc. Am. A 23, 2137–2153 (2006). [CrossRef]  

25. E. Puttonen, J. Suomalainen, T. Hakala, and J. Peltoniemi, “Measurement of reflectance properties of asphalt surfaces and their usability as reference targets for aerial photos,” IEEE Trans. Geosci. Remote Sens. 47, 2330–2339 (2009). [CrossRef]  

26. B. M. Ratliff, C. F. LaCasse, and J. S. Tyo, “Interpolation strategies for reducing IFOV artifacts in microgrid polarimeter imagery,” Opt. Express 17, 9112–9125 (2009). [CrossRef]  

27. J. S. Tyo, C. F. LaCasse, and B. M. Ratliff, “Total elimination of sampling errors in polarization imagery obtained with integrated microgrid polarimeters,” Opt. Lett. 34, 3187–3189 (2009). [CrossRef]  

28. Z. Chen, X. Wang, and R. Liang, “Calibration method of microgrid polarimeters with image interpolation,” Appl. Opt. 54, 995–1001 (2015). [CrossRef]  

29. Y. Giménez, P. J. Lapray, A. Foulonneau, and L. Bigué, “Calibration for polarization filter array cameras: recent advances,” in 14th International Conference on Quality Control by Artificial Vision (IEEE, 2019), p. 1117216.

30. Y. Giménez, P. J. Lapray, A. Foulonneau, and L. Bigué, “Calibration algorithms for polarization filter array camera: survey and evaluation,” J. Electron. Imag. 29, 041011 (2020). [CrossRef]  

31. N. A. Hagen, S. Shibata, and Y. Otani, “Calibration and performance assessment of microgrid polarization cameras,” Opt. Eng. 58, 082408 (2019). [CrossRef]  

32. G. Myhre, W. L. Hsu, A. Peinado, C. LaCasse, N. Brock, R. A. Chipman, and S. Pau, “Liquid crystal polymer full-stokes division of focal plane polarimeter,” Opt. Express 20, 27393–27409 (2012). [CrossRef]  

33. J. Zhang, H. Luo, B. Hui, and Z. Chang, “Non-uniformity correction for division of focal plane polarimeters with a calibration method,” Appl. Opt. 55, 7236–7240 (2016). [CrossRef]  

34. G. Han, X. Hu, J. Lian, X. He, L. Zhang, Y. Wang, and F. Dong, “Design and calibration of a novel bio-inspired pixelated polarized light compass,” Sensors 17, 2623 (2017). [CrossRef]  

35. T. York and V. Gruev, “Characterization of a visible spectrum division-of-focal-plane polarimeter,” Appl. Opt. 51, 5392–5400 (2012). [CrossRef]  

36. LUCID Vision Labs Inc., “Phoenix 5.0 MP Polarization Model (IMX250MZR/MYR),” 2021, https://thinklucid.com/product/phoenix-5-0-mp-polarized-model/.

37. A. J. Theuwissen, “Advanced imaging: light sensitivity,” in Solid-state Imaging with Charge-coupled Devices (Springer, 2006), Vol. 1, pp. 193–218.

38. E. Hecht, “Geometrical optics,” in Optics, 3rd ed. (Addison-Wesley, 1998), pp. 148–246.

39. R. A. Chipman, W. S. T. Lam, and G. Young, “Typical polarization problems in optical systems,” in Polarized Light and Optical Systems (CRC Press, 2019), pp. 12–17.

[Crossref]

N. A. Hagen, S. Shibata, and Y. Otani, “Calibration and performance assessment of microgrid polarization cameras,” Opt. Eng. 58, 082408 (2019).
[Crossref]

Opt. Express (7)

Opt. Laser. Eng. (1)

C. Lane, D. Rode, and T. Rösgen, “Optical characterization method for birefringent fluids using a polarization camera,” Opt. Laser. Eng. 146, 106724 (2021).
[Crossref]

Opt. Lett. (2)

Proc. SPIE (3)

Y. Otani, “Snapshot full Stokes imager by polarization cameras and its application to bio-imaging,” Proc. SPIE 11709, 1170904 (2021).
[Crossref]

M. Vedel, S. Breugnot, and N. Lechocinski, “Full Stokes polarization imaging camera,” Proc. SPIE 8160, 81600X (2011).
[Crossref]

S. Sattar, P. J. Lapray, A. Foulonneau, and L. Bigué, “Review of spectral and polarization imaging systems,” Proc. SPIE 11351, 113511Q (2020).
[Crossref]

Rheologica Acta (1)

N. Oba and T. Inoue, “An apparatus for birefringence and extinction angle distributions measurements in cone and plate geometry by polarization imaging method,” Rheologica Acta 55, 699–708 (2016).
[Crossref]

Sensors (1)

G. Han, X. Hu, J. Lian, X. He, L. Zhang, Y. Wang, and F. Dong, “Design and calibration of a novel bio-inspired pixelated polarized light compass,” Sensors 17, 2623 (2017).
[Crossref]

Other (8)

LUCID Vision Labs Inc., “Phoenix 5.0 MP Polarization Model (IMX250MZR/MYR),” 2021, https://thinklucid.com/product/phoenix-5-0-mp-polarized-model/ .

A. J. Theuwissen, “Advanced imaging: light sensitivity,” in Solid-state Imaging with Charge-coupled Devices (Springer, 2006), Vol. 1, pp. 193–218.

E. Hecht, “Geometrical optics,” in Optics, 3rd ed. (Addison-Wesley, 1998), pp. 148–246.

R. A. Chipman, W. S. T. Lam, and G. Young, “Typical polarization problems in optical systems,” in Polarized Light and Optical Systems (CRC Press, 2019), pp. 12–17.

Y. Giménez, P. J. Lapray, A. Foulonneau, and L. Bigué, “Calibration for polarization filter array cameras: recent advances,” in 14th International Conference on Quality Control by Artificial Vision (IEEE, 2019), p. 1117216.

S. Shwartz, E. Namer, and Y. Y. Schechner, “Blind haze separation,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 2006), pp. 1984–1991.

R. A. Chipman, “Polarimetry,” in Handbook of Optics, M. Bass, E. W. Van Stryland, D. R. Williams, and W. L. Wolfe, eds. (McGraw-Hill, 1995), Vol. 2, pp. 781–783.

Sony Semiconductor Solutions Corporation, “Polarization image sensor with four-directional on-chip polarizer and global shutter function,” 2021, https://www.sony-semicon.co.jp/e/products/IS/industry/product/polarization.html .

Data Availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.


Figures (6)

Fig. 1. Structure of the Sony polarization image sensor. Figure adapted from [2].

Fig. 2. Optical setup consisting of a rotatable linear polarizer and a polarization camera with a mounted lens, focused on an optical diffuser illuminated by color-filtered light.

Fig. 3. Histogram of the pixel counts (12-bit) for an exposure time of 1 s.

Fig. 4. Super-pixel assessment. (a) Measured versus calibrated data for a single super-pixel. (b) Histogram of the relative error Err for all super-pixels. Overall mean: 0.024; standard deviation: 0.006. (c) Variation of the relative error Err for all $1224 \times 1024$ super-pixels. Lens: Nikon Micro-Nikkor 105 mm 1:2.8 at f/22. Wavelength: 413–573 nm.

Fig. 5. Measurement results for the Schneider Xenon 25 mm f/0.95 lens. (a) Box plots of the relative error Err for the $300 \times 200$ analyzed super-pixels, plotted as a function of the $f$-number. (b) Measured versus calibrated data for a single super-pixel with ${\rm Err} = 10.8\%$, taken at f/0.95. Measurements at position 1 ($\phi = 0$) are more accurate than at position 2 ($\phi = \pi/2$).

Fig. 6. Relative error Err for the tested lenses and focal ratios: (a) blue-green light and (b) red light.

Tables (5)

Table 1. Applied Lenses and Tested Focal Lengths, f-Numbers, and Object Distances

Table 2. Measurement Results for LCD Monitor$^a$

Table 3. Matrix Entries for Transfer Functions Calculated with a Nikon Nikkor 85 mm 1:1.4 Lens$^a$

Table 4. Matrix Entries for Transfer Functions Calculated with a Schneider Xenon 25 mm f/0.95 Lens$^a$

Table 5. Comparison of Transfer Functions Calculated with Different Lenses$^a$

Equations (22)


$$\vec S = \left[{\begin{array}{*{20}{c}}{S_0}\\{S_1}\\{S_2}\\{S_3}\end{array}}\right] = \left[{\begin{array}{*{20}{c}}{\frac{I_0 + I_{45} + I_{90} + I_{135}}{2}}\\{I_0 - I_{90}}\\{I_{45} - I_{135}}\\{I_R - I_L}\end{array}}\right],$$
$${\rm Cal}(\vec I) = \underline{G}\left(\vec I - \vec d\right),$$
$${\rm DOP} = \frac{\sqrt{S_1^2 + S_2^2 + S_3^2}}{S_0},$$
$$\vec S = \vec S_P + \vec S_U = {\rm DOP}\left[{\begin{array}{*{20}{c}}{S_0}\\{S_1/{\rm DOP}}\\{S_2/{\rm DOP}}\\{S_3/{\rm DOP}}\end{array}}\right] + (1 - {\rm DOP})\left[{\begin{array}{*{20}{c}}{S_0}\\0\\0\\0\end{array}}\right].$$
$$\vec I = \underline{A}\,\vec S + \vec d.$$
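As a numerical companion to the Stokes construction above, the following NumPy sketch builds the Stokes vector from the four PFA intensities (function and variable names are illustrative, not from the paper; a linear PFA alone cannot measure the circular components, so they default to zero):

```python
import numpy as np

def stokes_from_intensities(i0, i45, i90, i135, i_r=0.0, i_l=0.0):
    """Stokes vector from the four polarizer-array intensities.

    i_r, i_l are the right/left circular intensities; a linear
    polarizer filter array cannot measure them, so they default
    to zero (i.e., S3 = 0).
    """
    s0 = (i0 + i45 + i90 + i135) / 2.0
    s1 = i0 - i90
    s2 = i45 - i135
    s3 = i_r - i_l
    return np.array([s0, s1, s2, s3])
```

For fully linearly polarized light at orientation $\psi$ with $S_0 = 1$, the analyzer intensities follow $I_\theta = \tfrac{1}{2}(S_0 + S_1\cos 2\theta + S_2\sin 2\theta)$, and the function recovers $S_1 = \cos 2\psi$, $S_2 = \sin 2\psi$.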
$$\underline{A}_{\rm ideal} = \frac{1}{2}\left[{\begin{array}{*{20}{c}}1&1&0&0\\1&0&1&0\\1&{-1}&0&0\\1&0&{-1}&0\end{array}}\right].$$
$$\underline{G} = \underline{A}_{\rm ideal}\,\underline{A}^+,$$
$${\rm Cal}(\vec I) = \underline{G}\left(\vec I - \vec d\right) = \underline{A}_{\rm ideal}\underline{A}^+\left(\underline{A}\vec S + \vec d - \vec d\right) = \underline{A}_{\rm ideal}\vec S = \vec I_{\rm ideal}.$$
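The calibration chain above can be checked numerically. The sketch below builds $\underline{G} = \underline{A}_{\rm ideal}\underline{A}^+$ with NumPy's Moore–Penrose pseudoinverse; the measured super-pixel matrix $\underline{A}$ and dark offset $\vec d$ are invented for illustration, not taken from the paper:

```python
import numpy as np

# Ideal analyzer matrix for polarizer orientations 0, 45, 90, 135 degrees
A_ideal = 0.5 * np.array([[1.0,  1.0,  0.0, 0.0],
                          [1.0,  0.0,  1.0, 0.0],
                          [1.0, -1.0,  0.0, 0.0],
                          [1.0,  0.0, -1.0, 0.0]])

def calibration_matrix(A):
    # G = A_ideal A^+, with A^+ the Moore-Penrose pseudoinverse
    return A_ideal @ np.linalg.pinv(A)

# Illustrative super-pixel response: a slightly perturbed ideal matrix
rng = np.random.default_rng(0)
A = A_ideal + 0.01 * rng.standard_normal((4, 4))
A[:, 3] = 0.0                  # a linear PFA is blind to S3
d = np.full(4, 5.0)            # invented dark offset per analyzer channel

G = calibration_matrix(A)
S = np.array([1.0, 0.3, -0.2, 0.0])  # a test Stokes state
I = A @ S + d                        # forward model
I_cal = G @ (I - d)                  # calibrated intensities, equals A_ideal S
```

Because the last column of $\underline{A}$ is zero, $\underline{A}^+\underline{A}$ projects onto the $(S_0, S_1, S_2)$ subspace, so the calibrated output equals $\underline{A}_{\rm ideal}\vec S$ exactly for states with $S_3 = 0$.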
$$\vec S_N = \frac{1}{S_0}\vec S = \frac{2}{I_0 + I_{45} + I_{90} + I_{135}}\vec S,$$
$$\vec I_N = \frac{2}{I_0 + I_{45} + I_{90} + I_{135}}\left[{\begin{array}{*{20}{c}}{I_0}\\{I_{45}}\\{I_{90}}\\{I_{135}}\end{array}}\right],$$
$$\vec d_N = \frac{2}{I_0 + I_{45} + I_{90} + I_{135}}\left[{\begin{array}{*{20}{c}}{d_0}\\{d_{45}}\\{d_{90}}\\{d_{135}}\end{array}}\right],$$
$$\vec I_N - \vec d_N = \underline{A}\,\vec S_N.$$
$$\vec S_{N,\rm linear}(\phi) = \left[{\begin{array}{*{20}{c}}1\\{\cos 2(\phi + \Delta\phi)}\\{\sin 2(\phi + \Delta\phi)}\\0\end{array}}\right].$$
$$\frac{\left\|\vec I_{N,\rm linear} - \vec I_{N,\rm linear,ideal}\right\|_2}{\left\|\vec I_{N,\rm linear,ideal}\right\|_2} = \frac{\left\|\underline{A}\vec S_{N,\rm linear} - \underline{A}_{\rm ideal}\vec S_{N,\rm linear}\right\|_2}{\left\|\underline{A}_{\rm ideal}\vec S_{N,\rm linear}\right\|_2} = \frac{\left\|\left(\underline{A} - \underline{A}_{\rm ideal}\right)\vec S_{N,\rm linear}\right\|_2}{\sqrt 6/2} \le \frac{2}{\sqrt 6}\left\|\underline{A} - \underline{A}_{\rm ideal}\right\|_F\left\|\vec S_{N,\rm linear}\right\|_2 = \frac{2}{\sqrt 3}\left\|\underline{A} - \underline{A}_{\rm ideal}\right\|_F.$$
$${\rm Err} := \frac{2}{\sqrt 3}\left\|\underline{A} - \underline{A}_{\rm ideal}\right\|_F.$$
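The error metric Err reduces to a scaled Frobenius norm and is a one-liner in NumPy (the function name is illustrative):

```python
import numpy as np

# Ideal analyzer matrix for polarizer orientations 0, 45, 90, 135 degrees
A_ideal = 0.5 * np.array([[1.0,  1.0,  0.0, 0.0],
                          [1.0,  0.0,  1.0, 0.0],
                          [1.0, -1.0,  0.0, 0.0],
                          [1.0,  0.0, -1.0, 0.0]])

def err(A):
    """Relative error Err = (2 / sqrt(3)) * ||A - A_ideal||_F."""
    return 2.0 / np.sqrt(3.0) * np.linalg.norm(A - A_ideal, ord='fro')
```

By construction, `err(A_ideal)` is zero, and perturbing a single matrix entry by $\Delta$ yields ${\rm Err} = (2/\sqrt 3)\,\Delta$.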
$$\left\|\vec S\right\|_2 = \sqrt{S_0^2 + S_1^2 + S_2^2 + S_3^2} = \sqrt{S_0^2 + \left({\rm DOP}\cdot S_0\right)^2} \le \sqrt{2 S_0^2} = \left\|\vec S_{{\rm DOP}=1}\right\|_2.$$
$${\rm mean}(\underline{A}) = \frac{1}{2}\left[{\begin{array}{*{20}{c}}{0.989}&{0.980}&{0.001}&0\\{1.015}&{0.001}&{1.003}&0\\{0.986}&{-0.980}&{0.002}&0\\{1.011}&0&{-1.001}&0\end{array}}\right],$$
$${\rm Std}(\underline{A}) = \left[{\begin{array}{*{20}{c}}{0.003}&{0.003}&{0.002}&0\\{0.003}&{0.002}&{0.003}&0\\{0.003}&{0.003}&{0.002}&0\\{0.003}&{0.002}&{0.003}&0\end{array}}\right].$$
$${\rm mean}(\underline{A}) = \frac{1}{2}\left[{\begin{array}{*{20}{c}}{1.038}&{0.943}&{0.001}&0\\{0.998}&{0.029}&{0.935}&0\\{0.939}&{-0.878}&{0.001}&0\\{1.025}&{0.037}&{-0.935}&0\end{array}}\right],$$
$${\rm Std}(\underline{A}) = \left[{\begin{array}{*{20}{c}}{0.003}&{0.002}&{0.001}&0\\{0.003}&{0.001}&{0.002}&0\\{0.003}&{0.002}&{0.002}&0\\{0.003}&{0.002}&{0.002}&0\end{array}}\right].$$
$${\rm AOLP} = \frac{1}{2}\tan^{-1}\frac{S_2}{S_1}$$
$$\underline{A} = \frac{1}{2}\left[{\begin{array}{*{20}{c}}{a_{11}}&{a_{12}}&{a_{13}}&0\\{a_{21}}&{a_{22}}&{a_{23}}&0\\{a_{31}}&{a_{32}}&{a_{33}}&0\\{a_{41}}&{a_{42}}&{a_{43}}&0\end{array}}\right].$$
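A short sketch deriving DOP and AOLP from a Stokes vector (the function name is illustrative; `arctan2` is used as the quadrant-correct form of $\frac{1}{2}\tan^{-1}(S_2/S_1)$):

```python
import numpy as np

def dop_aolp(S):
    """Degree of polarization and angle of linear polarization.

    DOP  = sqrt(S1^2 + S2^2 + S3^2) / S0
    AOLP = 0.5 * atan2(S2, S1), quadrant-correct variant of
           (1/2) * tan^-1(S2 / S1)
    """
    s0, s1, s2, s3 = S
    dop = np.sqrt(s1**2 + s2**2 + s3**2) / s0
    aolp = 0.5 * np.arctan2(s2, s1)
    return dop, aolp
```

For fully linearly polarized light at 30°, i.e., $\vec S = (1, \cos 60°, \sin 60°, 0)$, this returns ${\rm DOP} = 1$ and ${\rm AOLP} = \pi/6$.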
