Division of focal plane (DoFP) polarization image sensors capture the polarization properties of light at every imaging frame. However, these sensors capture only partial polarization information per pixel, resulting in reduced spatial resolution and a varying instantaneous field of view (IFoV). Interpolation methods are used to mitigate these drawbacks and recover the missing polarization information. In this paper, we propose residual interpolation as an alternative to conventional interpolation for division of focal plane polarization image sensors, where the residual is the difference between an observed and a tentatively estimated pixel value. Our results show that the proposed algorithm outperforms several previously published interpolation methods, namely bilinear, bicubic, spline and gradient-based interpolation. Visual image evaluation as well as mean square error analysis is applied to test images. For an outdoor polarized image of a car, residual interpolation yields lower mean square error and better visual evaluation results.
© 2017 Optical Society of America
The vital physical parameters of light are intensity (I), wavelength (λ), and polarization (the electric field vector E). In the past, polarization has largely been ignored by imaging technology, as the human eye is insensitive to the polarization of light. Polarization provides information orthogonal to intensity and color: it conveys information about target 3-D surface normals [1–4] and material composition and roughness, and has enabled devices such as ultra-high-efficiency metamaterial polarizers [5–8]. In bioengineering research, polarization imaging is used to discriminate healthy from diseased tissue without the use of molecular markers [9–11].
Various techniques and instruments have been developed to record the polarization parameters of light. With developments in nanofabrication technology, compact, inexpensive and high-resolution polarization sensors called division of focal plane (DoFP) polarization image sensors have been realized [13–18]. These developments in nanofabrication and nanomaterials allow pixelated nanowire filters to be fabricated on the top surface of the imaging sensor and help realize robust DoFP polarization imaging sensors. The imaging elements, i.e., photodetectors and micro polarization filter arrays, are included on the same substrate in a DoFP image sensor. The main advantage of DoFP image sensors over division of time (DoT) sensors is their capability of capturing polarization information in each frame, avoiding incorrect polarization information for moving targets. DoFP sensors integrate pixelated polarization filters with an array of imaging elements, organized in a super-pixel [20,21] configuration containing four distinct pixelated polarization filters with transmission axes oriented at 0°, 90°, 45° and 135°, respectively (see Fig. 1). The super pixel holds all the information required to obtain a useful polarized image, recording the first three (S0, S1, S2) or all four Stokes parameters at every frame.
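As a concrete illustration of the super-pixel sampling, the following NumPy sketch (ours, not the authors' implementation; the placement of the four filters within the 2 × 2 super pixel is an assumed layout, and real sensors may differ) splits a DoFP mosaic into its four down-sampled channels:

```python
import numpy as np

def split_superpixels(mosaic):
    """Split a DoFP mosaic into the four polarization channels.

    Assumes a 2x2 super-pixel layout with 0 deg at (0,0), 45 deg at
    (0,1), 135 deg at (1,0) and 90 deg at (1,1) -- an assumed layout.
    """
    i0   = mosaic[0::2, 0::2]
    i45  = mosaic[0::2, 1::2]
    i135 = mosaic[1::2, 0::2]
    i90  = mosaic[1::2, 1::2]
    return i0, i45, i90, i135

# toy 4x4 mosaic: every super-pixel repeats the pattern [[1, 2], [4, 3]]
mosaic = np.tile(np.array([[1.0, 2.0], [4.0, 3.0]]), (2, 2))
i0, i45, i90, i135 = split_superpixels(mosaic)
```

Each channel comes out at half the resolution in each dimension, which is exactly the spatial-resolution loss that the interpolation methods discussed below try to recover.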
The image obtained from a DoFP sensor has reduced accuracy of polarization information because each individual pixel within the super pixel has a slightly different field of view. To reconstruct the polarization information, missing pixel values are estimated across the imaging array [23,24]. In general, DoFP polarization sensors lose spatial resolution and capture erroneous polarization information [22,23,25,26]. Because of the four spatially distributed pixelated polarization filters, the instantaneous fields of view of neighboring pixels in a super-pixel configuration differ from each other [24,27–30]. Therefore, the first three Stokes parameters (S0, S1, S2), the angle of linear polarization (AoP) and the degree of linear polarization (DoLP) contain errors and differ from the true polarization components. Such edge artifacts are easily observed in AoP and DoLP images. These drawbacks need to be resolved to realize the real-time advantage of DoFP image sensors.
The polarization imaging sensor shares many similarities with color imaging using a Bayer color filter array. The 2 × 2 super pixel of a color filter array consists of three wavelength channels: red, green and blue. The blue and red channels are each sampled at 25% of the pixel locations, while green is sampled at 50%. Because the color filters are placed on the imaging element array, spatial resolution is reduced in the individual color channels by this down-sampling. Since the sensor perceives only partial information for each channel, interpolation algorithms are used to recover the lost spatial resolution with minimal artifacts.
In color image demosaicking, the G image is first interpolated, and then a tentatively estimated R image (R̃) is generated. Residuals are computed between the observed and tentatively estimated values (R − R̃) at the R pixels. The interpolated residuals are then added to the tentatively estimated R̃ to get the interpolated image [32–34]. Interpolation techniques designed for a color filter array cannot be directly employed in the polarization domain due to the essential differences between the two modalities. We have borrowed the tentative estimation of pixels from the residual interpolation technique used for color filter arrays. In the DoFP case, we apply the residual interpolation method to the four polarized images separately before calculating the DoLP and AoP.
Interpolation methods are applied to recover some of the lost spatial resolution and improve the accuracy of the captured polarization information. The following methods have traditionally been used to interpolate polarization information: bilinear, bicubic, spline and gradient-based methods [22,23,25–28]. For each method, four polarization-filtered images are required to obtain the necessary polarization information, such as the Stokes parameters and the angle and degree of linear polarization. The bilinear, bicubic and spline methods are essentially low-pass filters, which smooth out the intensity information obtained by the four polarization-filtered images and create sawtooth artifacts at edges. For images with multiple objects against a background, edge continuity fails at low resolution and false polarization signatures are generated. The gradient-based interpolation technique uses interleaved gradients, which introduce nonconformities due to the varying instantaneous field of view (IFoV). These errors can be clearly reduced if a proper interpolation technique is used. Therefore, we develop a residual interpolation method that preserves edges and interpolates the residuals, i.e., the differences between observed and tentatively estimated pixel values, to provide higher accuracy.
In this paper, we propose residual interpolation for division of focal plane imaging sensors, where the interpolation is executed in a “residual” domain. We interpolated the low-resolution polarized images, generated tentative estimates of the 0°, 45°, 90° and 135° images (Ĩ0, Ĩ45, Ĩ90, Ĩ135) and calculated their residuals, which are the differences between the observed and tentatively estimated pixel values (i.e., I0 − Ĩ0, I45 − Ĩ45, I90 − Ĩ90 and I135 − Ĩ135). We used a guided filter for edge preservation and to accurately generate the tentative estimates of the pixel values. An advantage of the guided filter is that its computing time is independent of filter size. The performance of the residual interpolation method is compared with several previously published interpolation methods: the bilinear, bicubic, spline and gradient-based methods. The results show that residual interpolation outperforms the others in terms of both mean square error and visual evaluation.
1.2 Linear polarization imaging calculations
A DoFP imaging sensor captures both the intensity and polarization information of a scene. The sensor samples the scene through 0°, 45°, 90° and 135° polarization filters and registers the four sub-sampled images. The intensity and polarization are then computed from the images filtered at 0°, 45°, 90° and 135°. For observing polarization, two properties are of most interest: DoLP and AoP. The intensity, polarization differences, DoLP and AoP are computed via the following equations:

$$ S_0 = I_{0^\circ} + I_{90^\circ} = I_{45^\circ} + I_{135^\circ} $$
$$ S_1 = I_{0^\circ} - I_{90^\circ} $$
$$ S_2 = I_{45^\circ} - I_{135^\circ} $$
$$ \mathrm{DoLP} = \frac{\sqrt{S_1^2 + S_2^2}}{S_0} $$
$$ \mathrm{AoP} = \frac{1}{2}\tan^{-1}\!\left(\frac{S_2}{S_1}\right) $$
A linear polarization filter has been used to find the Stokes parameters; however, the fourth Stokes parameter (S3) is not captured by the DoFP sensor shown in Fig. 1. The above equations show that a polarization imaging sensor has to sample the images with four linear polarization filters offset by 45°.
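These formulas can be sketched in a few lines of NumPy (our illustration; the paper's processing was done in MATLAB, and the example pixel values below are assumed, not taken from the paper):

```python
import numpy as np

def stokes(i0, i45, i90, i135):
    """First three Stokes parameters from the four polarizer channels."""
    s0 = i0 + i90          # total intensity (equivalently i45 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    return s0, s1, s2

def dolp(s0, s1, s2):
    """Degree of linear polarization."""
    return np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-12)

def aop(s1, s2):
    """Angle of linear polarization in radians."""
    return 0.5 * np.arctan2(s2, s1)

# fully 0-deg-polarized light of unit intensity:
# a 0-deg analyzer passes everything, 90 deg nothing, 45/135 deg half each
i0, i45, i90, i135 = (np.array([1.0]), np.array([0.5]),
                      np.array([0.0]), np.array([0.5]))
s0, s1, s2 = stokes(i0, i45, i90, i135)
d, a = dolp(s0, s1, s2), aop(s1, s2)   # d -> 1 (fully polarized), a -> 0 rad
```

Using `arctan2` rather than a plain arctangent keeps the AoP well defined over the full (−π/2, π/2] range even when S1 is zero.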
2. Residual interpolation
In this section, the bilinear interpolation method is first briefly reviewed, followed by an overview of the proposed residual interpolation method. We used bilinear interpolation due to its low computational complexity. The basic principle of bilinear interpolation is to estimate the pixel values in two dimensions. The distance weighted average of the four nearest pixel values is used to estimate a new pixel value.
Based on the four neighboring pixel points (see Fig. 2), f (i, j), f (i + 1, j), f (i, j + 1) and f (i + 1, j + 1), of the interpolating point f (x, y), the mathematical formula for bilinear interpolation can be written as follows:

$$ f(x, y) = (1-u)(1-v)\,f(i, j) + u(1-v)\,f(i+1, j) + (1-u)v\,f(i, j+1) + uv\,f(i+1, j+1), $$

where u = x − i and v = y − j are the fractional offsets of (x, y) from (i, j).
We used a guided filter for edge-preserving smoothing of the images taken at 0°, 45°, 90° and 135°. The guided filter assumes a local linear model between the high-resolution guide image I and the filter output q. The filter output q_i is a linear transform of the guide image in a window ω_k centered at pixel k, and this model is applied to all four images:

$$ q_i = a_k I_i + b_k, \quad \forall i \in \omega_k, $$

where a_k and b_k are linear coefficients assumed constant in ω_k. Similarly, this model can be applied to the other channels to obtain their filter outputs. The coefficients can be determined by minimizing the cost function in the window ω_k between the output and the input image p (here, the bilinearly interpolated 0° image):

$$ E_0(a_k, b_k) = \sum_{i \in \omega_k} \left[ (a_k I_i + b_k - p_i)^2 + \epsilon a_k^2 \right], $$

where ε is a regularization parameter penalizing large a_k. The minimizing coefficients are

$$ a_k = \frac{\frac{1}{|\omega|}\sum_{i \in \omega_k} I_i p_i - \mu_k \bar{p}_k}{\sigma_k^2 + \epsilon}, \qquad b_k = \bar{p}_k - a_k \mu_k, $$

where μ_k and σ_k² are the mean and variance of I in ω_k, |ω| is the number of pixels in the window, and p̄_k is the mean of p in ω_k. Cost functions E45, E90 and E135 are minimized in the same way for the 45°, 90° and 135° channels. The filter output for each of the four tentatively estimated polarizer images is then found as follows:

$$ q_i = \bar{a}_i I_i + \bar{b}_i, $$

where ā_i and b̄_i are the averages of a_k and b_k over all windows ω_k that contain pixel i.
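The guided filter can be sketched as follows in NumPy (our illustration of He et al.'s filter, not the authors' code; the window radius `r` and regularization `eps` are illustrative values). The box means are computed with an integral image, which is what makes the runtime independent of the window size:

```python
import numpy as np

def box_mean(img, r):
    """Mean over a (2r+1)x(2r+1) window, edges replicated,
    via an integral image (O(1) per pixel regardless of r)."""
    h, w = img.shape
    pad = np.pad(img, r, mode="edge")
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))       # zero row/col for clean indexing
    k = 2 * r + 1
    s = (c[k:k + h, k:k + w] - c[0:h, k:k + w]
         - c[k:k + h, 0:w] + c[0:h, 0:w])
    return s / (k * k)

def guided_filter(I, p, r=2, eps=1e-4):
    """q_i = a_bar_i * I_i + b_bar_i, with (a_k, b_k) minimizing the
    windowed cost E(a_k, b_k) given above."""
    mean_I, mean_p = box_mean(I, r), box_mean(p, r)
    cov_Ip = box_mean(I * p, r) - mean_I * mean_p
    var_I = box_mean(I * I, r) - mean_I ** 2
    a = cov_Ip / (var_I + eps)            # closed-form minimizer
    b = mean_p - a * mean_I
    return box_mean(a, r) * I + box_mean(b, r)

# a constant input image is reproduced exactly (a -> 0, b -> constant)
I = np.arange(36.0).reshape(6, 6)         # guide with strong gradients
p = np.full((6, 6), 5.0)
q = guided_filter(I, p, r=1, eps=1e-4)
```

The constant-input case is a useful sanity check: the covariance between guide and input vanishes, so a_k ≈ 0, b_k ≈ 5, and the output stays flat regardless of structure in the guide.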
The guided filter provides the tentatively estimated pixel values for each of the four polarizer channels (Ĩ0, Ĩ45, Ĩ90, Ĩ135). The residuals (∆) can be calculated from the original pixels and the guided filter output as follows:

$$ \Delta_{0^\circ} = I_{0^\circ} - \tilde{I}_{0^\circ}, \quad \Delta_{45^\circ} = I_{45^\circ} - \tilde{I}_{45^\circ}, \quad \Delta_{90^\circ} = I_{90^\circ} - \tilde{I}_{90^\circ}, \quad \Delta_{135^\circ} = I_{135^\circ} - \tilde{I}_{135^\circ}, $$

computed at the pixel locations where each channel is observed.
The residuals (∆) can be further interpolated and then added to the tentatively estimated pixel values. The ∆ for the 90° channel is shown in the residual interpolated difference block in Fig. 4(a). The missing ∆ values at the 90° polarization orientation can be calculated by bilinear interpolation as follows:

$$ \Delta(x, y) = (1-u)(1-v)\,\Delta(i, j) + u(1-v)\,\Delta(i+1, j) + (1-u)v\,\Delta(i, j+1) + uv\,\Delta(i+1, j+1). $$
The net residual interpolation adds the interpolated residuals (∆) pixel by pixel to the tentative estimates for each polarized image, as shown in Fig. 4(b). This can be represented as follows:

$$ I'_{0^\circ} = \tilde{I}_{0^\circ} + \Delta_{0^\circ}, \quad I'_{45^\circ} = \tilde{I}_{45^\circ} + \Delta_{45^\circ}, \quad I'_{90^\circ} = \tilde{I}_{90^\circ} + \Delta_{90^\circ}, \quad I'_{135^\circ} = \tilde{I}_{135^\circ} + \Delta_{135^\circ}. $$
The difference between residual and bilinear interpolation is that bilinear interpolation estimates a new pixel directly from its four nearest neighbors, whereas residual interpolation applies bilinear interpolation to the residuals between the tentatively estimated and observed pixel values. The interpolated residuals are then added to the tentatively estimated pixel values to get the net residual interpolation.
In Fig. 5, the flow chart of residual interpolation is presented. First, the low-resolution polarization images are up-sampled using bilinear interpolation to generate high-resolution images. With the guided filter, the proposed algorithm can up-sample the sparse data using the above-mentioned interpolated images and the high-resolution guide images, so the image structures of the interpolated images are preserved. We generated the tentative estimates of the 0°, 45°, 90° and 135° images and calculated the residuals, as presented in Eq. (15). The residuals were again interpolated using bilinear interpolation and added to the tentative estimates to get the residual interpolation, as shown in Eq. (19).
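The core residual step can be illustrated with a toy NumPy example (ours; the constant "tentative" image is a hand-made stand-in for the guided-filter output, and nearest-neighbour repetition stands in for the bilinear interpolation step):

```python
import numpy as np

# stand-in tentative estimate of one channel (in the paper this comes
# from the guided filter)
tentative = np.full((4, 4), 10.0)

# observed samples of this channel, one per 2x2 super-pixel
observed_lr = np.array([[12.0, 12.0],
                        [12.0, 12.0]])

# 1) residual at the observed pixel positions
residual_lr = observed_lr - tentative[0::2, 0::2]

# 2) interpolate the residual back to full resolution
#    (nearest-neighbour repeat stands in for the bilinear step)
residual_hr = np.repeat(np.repeat(residual_lr, 2, axis=0), 2, axis=1)

# 3) net residual interpolation: tentative estimate + interpolated residual
result = tentative + residual_hr
```

Because the residual is interpolated rather than the raw intensities, any systematic offset between the tentative estimate and the observations (here a constant +2) is corrected at every pixel, not only at the sampled ones.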
3. Modulation transfer function
The modulation transfer function (MTF) of an imaging system is a measure of the contrast transferred by the system. The MTF measures the magnitude response of an imaging system to sinusoidal patterns at different spatial frequencies. Simply put, it measures how well a camera can resolve fine detail. At each spatial frequency, the MTF can be calculated as the ratio of the contrast of the output sinusoidal pattern to that of the input.
The polarization image sensor captures the polarization information at each imaging frame. The input target image is defined as a sinusoidal pattern of varying spatial frequency. We generated such an artificial sinusoidal image in MATLAB for each frame, i.e., 0°, 45°, 90° and 135°, and measured the responses shown in Fig. 6. Figures 6(a) to 6(e) show the 3-D MTF charts of the bilinear, bicubic, spline, gradient-based and residual interpolation methods along the horizontal frequency fx and vertical frequency fy, each swept from 0 to 0.5 cycles per pixel. Figure 6(f) shows the MTF response along fx = fy of spline interpolation in cyan, bilinear in green, bicubic in blue, gradient in yellow and residual interpolation in red. The ideal MTF, shown by the dotted purple line, has unity gain from 0 to 0.5 cycles per pixel and zero gain at higher frequencies. All interpolation algorithms other than residual interpolation give low gain below 0.25 cycles per pixel and zero gain at higher frequencies. Residual interpolation has higher gain than the other methods at low frequencies and beyond 0.25 cycles per pixel. At higher frequencies, between 0.375 and 0.5 cycles per pixel, residual interpolation again provides increased gain compared to the other methods.
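A 1-D NumPy sketch of this measurement (ours; linear interpolation of every other pixel stands in for one mosaic channel's reconstruction, and the gain is read off the FFT bin of the test frequency):

```python
import numpy as np

n, f = 64, 0.125                      # image width in pixels, cycles per pixel
x = np.arange(n)
target = 0.5 + 0.5 * np.sin(2 * np.pi * f * x)   # non-negative sinusoid

# sample the sinusoid the way one mosaic channel would (every other pixel)
kept = np.arange(0, n, 2)
rebuilt = np.interp(x, kept, target[kept])       # linear reconstruction

# MTF gain at the test frequency: output amplitude over input amplitude,
# read from the corresponding FFT bin
bin_f = int(f * n)
gain = np.abs(np.fft.rfft(rebuilt)[bin_f]) / np.abs(np.fft.rfft(target)[bin_f])
```

At 0.125 cycles per pixel the reconstruction attenuates the fundamental noticeably (the gain comes out around 0.8 here); sweeping `f` toward 0.25 cycles per pixel, the Nyquist limit of the half-rate channel, drives the gain toward zero, which is the resolution loss the MTF curves in Fig. 6 quantify.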
4. Experimental setup
To assess the accuracy of the different interpolation methods, the “true” high-resolution polarization image must be known beforehand, while DoFP polarization imaging sensors can only generate low-resolution images. Four images at 0°, 45°, 90° and 135° orientations were therefore taken of a car in an outdoor environment with a CMLN-13S2M-CS CCD camera mounted with a linear polarization filter. These true high-resolution grayscale images were down-sampled following the sampling pattern of the DoFP polarization imaging sensor, yielding four low-resolution images at the 0°, 45°, 90° and 135° orientations, like those acquired from a DoFP sensor. After applying the interpolation algorithms, the final high-resolution interpolated images were compared against the true high-resolution images originally obtained. The intensity, DoLP and AoP images for the car are shown in Fig. 7. Potential error in the original high-resolution images due to optical misalignment is not an experimental concern: our aim is only to test the interpolation algorithms in terms of mean square error and visual evaluation, and since the algorithms are applied to the low-resolution images, any original error is present identically in both the low- and high-resolution images. Our setup therefore provides a fair comparison of the reconstruction error among the bilinear, bicubic, spline, gradient-based and residual interpolation methods.
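The down-sampling step can be sketched in NumPy (ours, not the authors' MATLAB code; the channel placement within the 2 × 2 super pixel is an assumed layout):

```python
import numpy as np

def mosaic_from_channels(i0, i45, i90, i135):
    """Down-sample four true high-resolution polarizer images into a
    single DoFP-style mosaic. Assumed 2x2 super-pixel layout:
    0 deg top-left, 45 deg top-right, 135 deg bottom-left,
    90 deg bottom-right."""
    m = np.empty_like(i0)
    m[0::2, 0::2] = i0[0::2, 0::2]      # keep only the pixels each
    m[0::2, 1::2] = i45[0::2, 1::2]     # filter orientation would see
    m[1::2, 0::2] = i135[1::2, 0::2]
    m[1::2, 1::2] = i90[1::2, 1::2]
    return m

# four constant 4x4 "high-resolution" frames make the pattern visible
ones = np.ones((4, 4))
m = mosaic_from_channels(1 * ones, 2 * ones, 3 * ones, 4 * ones)
```

Feeding this synthetic mosaic to each interpolation algorithm and comparing against the original four frames reproduces the evaluation protocol described above.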
5. Performance estimation
In this section, we adopt mean square error (MSE) and visual evaluation to compare the performance of the different interpolation algorithms. The interpolation methods are used to obtain high-resolution images from the low-resolution images. The interpolated images are compared with the true high-resolution images, and the polarization characteristics of the images are examined. Sections 5.1 and 5.2 give the visual image evaluation and the MSE of the test images, respectively.
5.1 Visual image evaluation
In Fig. 7, the intensity, DoLP and AoP images computed from the high-resolution car image are shown. These true high-resolution polarization images are used to visually compare the reconstruction accuracy of the different interpolation methods presented in Fig. 8 using small patches. In the first column of Fig. 8, the original intensity, DoLP and AoP are given, while the second to sixth columns show the bilinear, bicubic, spline, gradient and residual interpolation images, respectively.
In Fig. 7, the DoLP values are lower in the red areas and higher on the car’s glass windows, with the light blue spot on the glass marked with a white oval showing medium DoLP. The AoP value is low, medium, and high in the red, light blue and purple areas, respectively.
In Fig. 8, small patches of the car image are shown. The purple ovals on the original and residual intensity images highlight regions where artifacts have been effectively removed. The image artifacts and glitches are effectively reduced by residual interpolation, bringing the interpolated images close to the originals. The DoLP and AoP patches of the car show the accuracy of residual interpolation compared to the bilinear, bicubic, gradient and spline algorithms.
We used parallel programming to speed up the processing to real time. On our system (Intel Core i5-3470 CPU @ 3.20 GHz, 8 GB RAM), bilinear interpolation computes the AoP image (960 × 1280) in 40 ms, bicubic in 47 ms, gradient in 45 ms, spline in 57 ms and residual interpolation in 61 ms. Most importantly, in terms of polarization information recovery, mean square error, visual evaluation and MTF, residual interpolation performs significantly better than the other interpolation methods.
5.2. MSE comparison
The MSE for the different interpolation algorithms is found using the following equation:

$$ \mathrm{MSE} = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \left[ I_{\mathrm{true}}(i, j) - I_{\mathrm{interp}}(i, j) \right]^2, $$

where M × N is the image size. The results for the car image are listed in Table 1. The minimum MSE for the I(0°), I(45°), I(90°), I(135°), intensity, DoLP and AoP images is obtained with the residual interpolation method. The spline interpolation method introduces the largest error, while the bicubic and gradient interpolation methods show similar error performance, with the latter being computationally more efficient.
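The MSE computation above is a one-liner in NumPy (our sketch, with a toy input):

```python
import numpy as np

def mse(truth, estimate):
    """Mean square error between the true high-resolution image and an
    interpolated reconstruction, averaged over all M x N pixels."""
    diff = np.asarray(truth, dtype=float) - np.asarray(estimate, dtype=float)
    return np.mean(diff ** 2)

truth = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
estimate = truth + 0.5        # uniform 0.5 offset everywhere
err = mse(truth, estimate)    # (0.5)^2 averaged over all pixels = 0.25
```

The same function applies unchanged to the I(0°)–I(135°), intensity, DoLP and AoP images, since each is just a 2-D array of per-pixel values.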
In this paper, we proposed a residual interpolation algorithm for division of focal plane image sensors. We compared the gradient, bilinear, bicubic and spline interpolation algorithms with residual interpolation. Performance was evaluated visually, by the modulation transfer function (MTF) and by MSE, using images captured with a CCD camera and a linear polarization filter rotated in front of the sensor. The interpolation algorithms were applied to low-resolution images and compared statistically against the true high-resolution polarization images. We applied the algorithms to the intensity (S0), angle of linear polarization (AoP) and degree of linear polarization (DoLP) to assess the accuracy of edge recovery and polarization information. The improvements in reconstruction accuracy using the proposed residual interpolation method were shown both by MSE and visually in comparison with the bilinear, bicubic, spline and gradient-based algorithms. This demonstrates that residual interpolation can bring a large improvement in output quality, particularly regarding edge artifacts, for a real DoFP polarization image sensor. Most importantly, the residual method consistently outperforms the other leading methods.
The Qatar National Research Fund (NPRP9-421-2-170).
The authors would like to thank Neal Brock at 4D technology and Shengkui Gao at Apple, United States for their guidance about the polarization image sensors.
References and links
2. D. Miyazaki, T. Shigetomi, M. Baba, R. Furukawa, S. Hiura, and N. Asada, “Surface normal estimation of black specular objects from multiview polarization images,” Opt. Eng. 56(4), 041303 (2016). [CrossRef]
3. H. Zhan and D. G. Voelz, “Modified polarimetric bidirectional reflectance distribution function with diffuse scattering: surface parameter estimation,” Opt. Eng. 55(12), 123103 (2016). [CrossRef]
4. V. Thilak, D. G. Voelz, and C. D. Creusere, “Polarization-based index of refraction and reflection angle estimation for remote sensing applications,” Appl. Opt. 46(30), 7527–7536 (2007). [CrossRef] [PubMed]
5. B. Shen, P. Wang, R. Polson, and R. Menon, “Ultra-high-efficiency metamaterial polarizer,” Optica 1(5), 356–360 (2014). [CrossRef]
8. M. W. Hyde 4th, J. D. Schmidt, M. J. Havrilla, and S. C. Cain, “Enhanced material classification using turbulence-degraded polarimetric imagery,” Opt. Lett. 35(21), 3601–3603 (2010). [CrossRef] [PubMed]
10. T. York, S. B. Powell, S. Gao, L. Kahan, T. Charanya, D. Saha, N. W. Roberts, T. W. Cronin, J. Marshall, S. Achilefu, S. P. Lake, B. Raman, and V. Gruev, “Bioinspired polarization imaging sensors: from circuits and optics to signal processing algorithms and biomedical applications: analysis at the focal plane emulates nature’s method in sensors to image and diagnose with polarized light,” Proc IEEE Inst Electr Electron Eng 102(10), 1450–1469 (2014). [CrossRef] [PubMed]
11. N. W. Roberts, M. J. How, M. L. Porter, S. E. Temple, R. L. Caldwell, S. B. Powell, V. Gruev, N. J. Marshall, and T. W. Cronin, “Animal polarization imaging and implications for optical processing,” Proc. IEEE 102(10), 1427–1434 (2014). [CrossRef]
13. X. Zhao, A. Bermak, F. Boussaid, and V. G. Chigrinov, “Liquid-crystal micropolarimeter array for full Stokes polarization imaging in visible spectrum,” Opt. Express 18(17), 17776–17787 (2010). [CrossRef] [PubMed]
15. V. Gruev and R. E. Cummings, “Implementation of steerable spatiotemporal image filters on the focal plane,” IEEE Trans. Circuits Syst. 49(4), 233–244 (2002). [CrossRef]
16. X. Zhao, F. Boussaid, A. Bermak, and V. G. Chigrinov, “High-resolution thin “guest-host” micropolarizer arrays for visible imaging polarimetry,” Opt. Express 19(6), 5565–5573 (2011). [CrossRef] [PubMed]
20. V. Gruev and R. E. Cummings, “A pipelined temporal difference imager,” IEEE J. Solid-State Circuits 39(3), 538–543 (2004). [CrossRef]
21. Y. Liu, R. Njuguna, T. Matthews, W. J. Akers, G. P. Sudlow, S. Mondal, R. Tang, V. Gruev, and S. Achilefu, “Near-infrared fluorescence goggle system with complementary metal-oxide-semiconductor imaging sensor and see-through display,” J. Biomed. Opt. 18(10), 101303 (2013). [CrossRef] [PubMed]
25. E. Gilboa, J. P. Cunningham, A. Nehorai, and V. Gruev, “Image interpolation and denoising for division of focal plane sensors using Gaussian processes,” Opt. Express 22(12), 15277–15291 (2014). [CrossRef] [PubMed]
28. P. Thévenaz, T. Blu, and M. Unser, “Image Interpolation and Resampling,” in Handbook of Medical Imaging (SPIE Press, 2000), pp. 393–420.
29. D. H. Goldstein, Polarized Light, 3rd ed. (CRC Press, 2010).
30. M. W. Kudenov, L. J. Pezzaniti, and G. R. Gerhart, “Microbolometer-infrared imaging Stokes polarimeter,” Opt. Eng. 48(6), 063201 (2009). [CrossRef]
31. D. Kiku, Y. Monno, M. Tanaka, and M. Okutomi, “Residual interpolation for color image demosaicking,” in 2013 IEEE International Conference on Image Processing, Melbourne, (IEEE, 2013), pp. 2304–2308. [CrossRef]
32. Y. Monno, D. Kiku, S. Kikuchi, M. Tanaka, and M. Okutomi, “Multispectral demosaicking with novel guide image generation and residual interpolation,” in IEEE International Conference on Image Processing (IEEE, 2014), pp. 645–649. [CrossRef]
34. D. Kiku, Y. Monno, M. Tanaka, and M. Okutomi, “Beyond color difference: Residual interpolation for color image demosaicking,” IEEE Trans. Image Process. 25(3), 1288–1300 (2016). [PubMed]
36. G. D. Boreman, Modulation Transfer Function in Optical and Electro-Optical Systems (SPIE, 2001).