## Abstract

Division of focal plane (DoFP) polarimeters operate by integrating micro-polarizer elements with a focal plane. These polarization imaging sensors reduce the spatial resolution of the output, and each pixel has a varying instantaneous field of view (IFoV). These drawbacks can be mitigated by applying proper interpolation methods. In this paper, we present a new interpolation method for DoFP polarimeters that uses intensity correlation. We employ the correlation of intensity measurements in different orientations to detect edges and then implement interpolation along edges. The performance of the proposed method is compared with that of several previous methods by using root mean square error (RMSE) comparison and visual comparison. Experimental results show that our proposed method achieves better visual effects and a lower RMSE than the other methods.

© 2016 Optical Society of America

## 1. Introduction

The three fundamental properties of light are intensity, wavelength and polarization. Humans perceive light intensity and wavelength as brightness and color respectively; polarization, however, is invisible to the human eye. Even so, polarization conveys information about the orientation of the transverse electric field. This fact has been exploited for material classification [1,2], underwater imaging [3] and infrared imaging [4,5].

A common description of polarized light is the Stokes vector [6]. Two typical polarization properties of the optical field are the angle of polarization (AoP) [6] and the degree of linear polarization (DoLP) [6]. In order to obtain polarization images, we need to acquire intensity images at different orientations via polarization imaging sensors. Current polarization imaging sensors are mainly divided into division of time (DoT) [7], division of amplitude (DoAM) [8], division of aperture (DoAP) [9] and division of focal plane (DoFP) [10, 11] polarimeters. The obvious disadvantage of DoT is that the scene must be static; otherwise inter-frame motion will lead to false polarization information. A DoAM system is hard to align, and the incident intensity is attenuated after passing through the beam splitters. A DoAP system is also hard to align and is sensitive to polarization-dependent aberration effects [12]. DoFP integrates micro-optical polarization elements directly onto the focal plane array (FPA). The sensor is composed of a collection of 2-by-2 pixel blocks, or super-pixels. Each super-pixel comprises four different orientations: 0, 45, 90 and 135 degrees. DoFP polarimeters have rugged, aligned designs and are inherently temporally synchronized. These advantages make DoFP polarimeters the ideal choice for moving platforms. We focus on DoFP polarimeters because our polarization platform is moving.

However, each pixel records only one of the four necessary intensity measurements, leading to a loss of spatial resolution. Obtaining four intensity measurements for each pixel is necessary to reconstruct the original image, so a pixel must rely on its neighbors to estimate the other three intensity measurements. The estimation process inevitably introduces errors, because each pixel has a different IFoV; however, the error can be decreased distinctly if proper interpolation methods are used. The main interpolation methods for DoFP polarimetric images are the bilinear, bicubic, bicubic spline and gradient-based interpolation methods [13–16]. The first three are based on space-invariant non-adaptive linear filters [17] and are essentially low-pass filters, which smooth the detailed information. Serrated artifacts are generated at the edges of objects in the DoLP images due to the smoothing of the intensity images; these artifacts degrade imaging quality and should be eliminated. The gradient-based interpolation method is the state of the art and can reconstruct edges while maintaining smooth areas. However, it uses interleaved gradients, which introduce deviations in dynamic scenes due to the IFoV, so it is not the best choice for a moving platform.

In this paper, we propose a new interpolation method for DoFP polarization imaging sensors. First, we obtain the missing intensity measurement in the opposite polarization orientation by employing the bicubic spline interpolation method in the diagonal directions according to the intensity similarity and the polarization similarity. Second, the correlation of intensity measurements in different orientations is used to detect edges and compute their direction. Finally, the bicubic spline interpolation method is implemented along edges and in smooth regions. The performance of the proposed method is compared with the bilinear, bicubic, bicubic spline and gradient-based interpolation methods by using RMSE comparison and visual comparison. Experimental results show that our proposed method achieves better visual effects and a lower RMSE than the other methods.

## 2. Correlation-based interpolation method

The fundamental idea of the interpolation is the following: interpolation is performed along an edge, not across it. In order to implement this method, edge detection is performed first. In conventional image processing, an edge occurs at boundaries between regions of different color, intensity, or texture [18]. In a polarization image, however, an edge occurs at boundaries between regions with different intensity, DoLP or AoP. In either case, an edge has low correlation with its neighborhood of pixels.

#### 2.1. Edge detection for DoFP images

Because each pixel has a different polarization orientation within its neighborhood of 3 by 3 pixels, as shown in Fig. 1, traditional edge detection methods cannot detect accurate edges in DoFP images. Fortunately, we can detect edges according to the correlation. Before defining the correlation, we review the computation of the Stokes vector [19], which is described by Eq. (1):

$$I_{\theta} = \begin{pmatrix} 1 & 0 & 0 & 0 \end{pmatrix} \mathbf{M}_{lp}(\theta)\, {\overrightarrow{S}}_{in} \tag{1}$$

where *I _{θ}*, *θ* = 0°, 45°, 90°, 135°, is the intensity of the light recorded with a linear polarizer oriented at angle *θ* and ${\overrightarrow{S}}_{in}={({S}_{0},{S}_{1},{S}_{2},0)}^{T}$ is the incident Stokes vector. The 4th Stokes parameter is usually ignored for natural light, since the phase information between the orthogonal polarization components is not readily available for naturally occurring light [20], i.e. no circular polarization is measured. The Mueller matrix of the linear polarizer **M** _{lp} is formulated by Eq. (2):

$$\mathbf{M}_{lp}(\theta) = \frac{1}{2}\begin{pmatrix} 1 & \cos 2\theta & \sin 2\theta & 0 \\ \cos 2\theta & \cos^{2} 2\theta & \sin 2\theta \cos 2\theta & 0 \\ \sin 2\theta & \sin 2\theta \cos 2\theta & \sin^{2} 2\theta & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \tag{2}$$

Substituting the values of *θ* into Eq. (1), we obtain Eq. (3):

$$I_{0} = \frac{S_{0}+S_{1}}{2},\quad I_{45} = \frac{S_{0}+S_{2}}{2},\quad I_{90} = \frac{S_{0}-S_{1}}{2},\quad I_{135} = \frac{S_{0}-S_{2}}{2} \tag{3}$$
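Eq. (3) inverts directly: given the four intensity measurements, the linear Stokes parameters, DoLP and AoP follow immediately. A minimal NumPy sketch (the function names are ours):

```python
import numpy as np

def stokes_from_intensities(i0, i45, i90, i135):
    """Linear Stokes parameters obtained by inverting Eq. (3)."""
    s0 = i0 + i90              # total intensity; ideally also equals i45 + i135
    s1 = i0 - i90
    s2 = i45 - i135
    return s0, s1, s2

def dolp(s0, s1, s2):
    """Degree of linear polarization, DoLP = sqrt(S1^2 + S2^2) / S0."""
    return np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-12)

def aop(s1, s2):
    """Angle of polarization, AoP = 0.5 * arctan(S2 / S1), in radians."""
    return 0.5 * np.arctan2(s2, s1)
```

The functions accept scalars or whole image arrays, so they can be applied to the four interpolated channels at once.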

With the above relationship, we can estimate a given pixel's value from its neighbors in different orientations, as described by Eq. (4):

$$\hat{I}_{\theta} = {\overline{I}}_{\theta-45^{\circ}} + {\overline{I}}_{\theta+45^{\circ}} - {\overline{I}}_{\theta+90^{\circ}} \tag{4}$$

where $\hat{I}_{\theta}$, *θ* = 0°, 45°, 90°, 135°, is the estimated value of the intensity in the *θ* orientation (orientation angles are taken modulo 180°), and ${\overline{I}}_{\theta}$ is the average value of the intensity calculated by using the neighbors of the target pixel.

An edge has low correlation with a neighborhood of pixels, i.e. it has a high correlation error. According to Eq. (3) and Eq. (4), we can define the correlation error Δ*I*(*i*, *j*) of pixel (*i*, *j*) as the absolute error between the real value and the estimated value:

$$\Delta I(i,j) = \left| I(i,j) - \hat{I}(i,j) \right| \tag{5}$$

where the neighborhood window *ω* over which the averages ${\overline{I}}_{\theta}$ are computed is the 3-by-3 region centered on the target pixel shown in Fig. 1.

Figure 2 shows a smooth region and an edge in the horizontal direction, in the vertical direction and in the diagonal direction respectively. We choose the pixel in the 90° polarization orientation as an example to calculate the correlation error. In the smooth region the correlation error is zero; along the edge in the horizontal direction the correlation error is zero; along the edge in the vertical direction the correlation error is zero; and along the edge in the diagonal direction the correlation error is 0.75. Unfortunately, the correlation error of the smooth region is the same as the correlation error of an edge in the horizontal or vertical direction. In order to distinguish between smooth regions and edges, we modify Eq. (5) as follows:

$$\Delta I^{h}(i,j) = \left| I(i,j) - \hat{I}^{h}(i,j) \right|, \qquad \Delta I^{v}(i,j) = \left| I(i,j) - \hat{I}^{v}(i,j) \right| \tag{6}$$

where $\hat{I}^{h}(i,j)$ and $\hat{I}^{v}(i,j)$ are the estimates of Eq. (4) computed with only the horizontal neighbors and only the vertical neighbors of the target pixel, respectively.

We recalculate the correlation errors of the four situations in Fig. 2 and the results are as follows: in the smooth region the correlation errors Δ*I ^{h}* and Δ*I ^{v}* are zero; along the edge in the horizontal direction the correlation error Δ*I ^{v}* is one and Δ*I ^{h}* is zero; along the edge in the vertical direction the correlation error Δ*I ^{h}* is one and Δ*I ^{v}* is zero; and along the edge in the diagonal direction both Δ*I ^{h}* and Δ*I ^{v}* are 0.5. In other words, for an edge in the horizontal direction the correlation error Δ*I ^{h}* is zero, and for an edge in the vertical direction the correlation error Δ*I ^{v}* is zero. Thus, we can judge whether the direction of an edge is horizontal or vertical in terms of the correlation errors. Correlation errors over one should be normalized.
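As a concrete illustration of the correlation error of Eq. (4) and Eq. (5), here is a minimal NumPy sketch for an interior mosaic pixel; the neighbor layout is our reading of Fig. 1, and the function name is ours:

```python
import numpy as np

def correlation_error(mosaic, i, j):
    """Correlation error Delta I(i, j) of Eq. (5) for an interior mosaic pixel.

    Assumed neighbor layout (our reading of Fig. 1): the left/right and
    up/down neighbors carry the two orientations 45 degrees away from the
    target pixel, and the four diagonal neighbors carry the opposite
    orientation, so Eq. (4) estimates the pixel as
        I_hat = mean(horizontal) + mean(vertical) - mean(diagonal).
    """
    h = 0.5 * (mosaic[i, j - 1] + mosaic[i, j + 1])            # theta - 45 deg
    v = 0.5 * (mosaic[i - 1, j] + mosaic[i + 1, j])            # theta + 45 deg
    d = 0.25 * (mosaic[i - 1, j - 1] + mosaic[i - 1, j + 1]
                + mosaic[i + 1, j - 1] + mosaic[i + 1, j + 1])  # theta + 90 deg
    return abs(mosaic[i, j] - (h + v - d))
```

For uniformly polarized light the estimate is exact and the error vanishes; near an edge the error grows, which is exactly what the detection step exploits.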

#### 2.2. Interpolation in the diagonal directions

Each pixel has four adjacent pixels in the diagonal directions, and these four pixels share the same polarization orientation, which is opposite to the orientation of the pixel itself. For example, as shown in Fig. 1, the pixel in the 0° orientation has four adjacent pixels in the diagonal directions that are in the 90° orientation. We can therefore obtain the missing intensity measurement in the opposite polarization orientation via interpolation in the diagonal directions for each pixel. For example, we can get the intensity measurement in the 135° orientation for the pixel in the 45° orientation. In order to obtain a higher precision interpolation result, we employ the intensity similarity and the polarization similarity as the gradient. The gradient is calculated by Eq. (7):

where *A* = |2*I*(*i*, *j*) − *I*(*i* − 2, *j* + 2) − *I*(*i* + 2, *j* − 2)| and *B* = |2*I*(*i*, *j*) − *I*(*i* − 2, *j* − 2) − *I*(*i* + 2, *j* + 2)|. The interpolation is implemented according to the gradient values in the 45° and 135° directions. If the value in the 45° direction is equal to the value in the 135° direction, the intensity value of the target pixel is the average of the intensity values in the diagonal directions. Otherwise, interpolation is implemented along a single direction, either the 45° direction or the 135° direction. For example, if the gradient value in the 45° direction of a pixel in the 90° orientation is larger, we can get the intensity measurement in the 0° orientation by implementing the bicubic spline interpolation method with four adjacent pixels along the 135° direction.
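The diagonal decision can be sketched as follows; this is a minimal sketch in which the mapping of *A* and *B* onto the 45° and 135° diagonals follows from the image-axis convention we assume, and the function name is ours:

```python
import numpy as np

def diagonal_direction(img, i, j):
    """Pick the diagonal interpolation direction for mosaic pixel (i, j).

    A and B are second differences with a stride of 2, which reaches the
    nearest pixels of the same polarization orientation; we interpolate
    along the diagonal with the smaller gradient.  Which diagonal A and B
    map to (45 vs. 135 degrees) depends on the assumed axis convention.
    """
    a = abs(2 * img[i, j] - img[i - 2, j + 2] - img[i + 2, j - 2])   # "45 deg"
    b = abs(2 * img[i, j] - img[i - 2, j - 2] - img[i + 2, j + 2])   # "135 deg"
    if a == b:
        return "average"             # use all four diagonal neighbors
    return "45" if a < b else "135"  # follow the smoother diagonal
```

In the full method, the returned direction selects the four same-diagonal neighbors fed to the bicubic spline interpolation.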

#### 2.3. Interpolation in the horizontal and vertical directions

After completing the interpolation in the diagonal directions, we employ the correlation error defined in section 2.1 as the criterion to distinguish horizontal edges, vertical edges and smooth regions. In this section we do not use a gradient as the criterion, which is the major difference from the gradient-based interpolation method [16]. The interpolation algorithm is shown in Table 1.
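One plausible reading of this decision step, sketched in Python; the threshold `tau`, the function name, and the exact branch structure are our assumptions, not a verbatim transcription of Table 1:

```python
def choose_direction(delta_h, delta_v, tau):
    """Sketch of a direction decision driven by the correlation errors of
    section 2.1 (tau and this structure are assumptions, not Table 1)."""
    if delta_h <= tau and delta_v <= tau:
        return "smooth"        # low error both ways: interpolate with all neighbors
    # A horizontal edge makes the vertical error large, and vice versa,
    # so interpolation follows the direction with the smaller error.
    return "horizontal" if delta_h < delta_v else "vertical"
```

The key point matches section 2.1: along a horizontal edge Δ*I ^{h}* is near zero while Δ*I ^{v}* is large, so interpolation runs horizontally, along the edge.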

## 3. Experimental setup

In order to evaluate the performance of different interpolation methods, four true images in different polarization orientations were recorded by using a DoT system. Each scene was sampled 100 times to reduce the effects of random noise on the interpolation results. The experimental apparatus is shown in Fig. 3. The light passes through a Newport 10LP-VIS-B linear polarizer before entering the gray-scale camera (DMK 22AUC03). The linear polarizer is mounted on a rotation stage (Thorlabs PRM1Z8E) to generate the four polarization orientations, distributed every 45° from 0° to 135°. Next, we generated the simulated DoFP image from these four true images according to the sampling pattern shown in Fig. 1. Then, we obtained four interpolated images by applying the interpolation methods. Finally, we evaluated the interpolation performance by comparing the interpolated images with the true images. This section aims to provide a quantitative comparison of the reconstruction errors as well as a visual comparison.
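The mosaic-generation step can be sketched as follows, assuming a 2-by-2 super-pixel layout of [[0°, 45°], [135°, 90°]] (the exact layout of Fig. 1 may differ, in which case only the index offsets change):

```python
import numpy as np

def simulate_dofp(i0, i45, i90, i135):
    """Build a simulated DoFP mosaic from four co-registered true images.

    Assumed super-pixel layout: [[0, 45], [135, 90]] degrees.
    """
    mosaic = np.empty(i0.shape, dtype=float)
    mosaic[0::2, 0::2] = i0[0::2, 0::2]      # 0 deg samples
    mosaic[0::2, 1::2] = i45[0::2, 1::2]     # 45 deg samples
    mosaic[1::2, 0::2] = i135[1::2, 0::2]    # 135 deg samples
    mosaic[1::2, 1::2] = i90[1::2, 1::2]     # 90 deg samples
    return mosaic
```

Because each mosaic pixel is copied from the corresponding true image, every interpolated value has a ground-truth counterpart for the error evaluation that follows.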

## 4. Experimental results

To evaluate the performance of different interpolation methods, image visual comparison and quantitative comparison are applied to the test images recorded by DoT polarization system. Besides, images recorded by DoFP polarization system are also used to visually evaluate reconstruction results.

#### 4.1. Image visual comparison

As shown in Fig. 4, the first row presents intensity images; the second row presents DoLP images; and the red rectangle regions of the DoLP images are expanded and presented in the last row in false color scale. The first column presents the true polarization images, which are used to compare the reconstruction accuracy of the different interpolation methods. Images presented in the second column are reconstructed via bilinear interpolation. Edges are smoothed in the intensity image and serrated artifacts appear in the DoLP image due to the large error introduced by bilinear interpolation. The third and fourth columns present similar results: serrated artifacts are slightly mitigated compared with the bilinear method, but they are still strong. Images presented in Fig. 4(e) are obtained via the gradient-based method; the threshold is set to 60% of the CDF as described in Ref. [16]. This method outperforms the first three methods: it reconstructs edges and mitigates serrated artifacts. However, slight serrated artifacts still exist. The last column shows images obtained via our method, which minimizes serrated artifacts and closely resembles the true polarization images presented in the first column.

#### 4.2. Quantitative comparison

We employ the root mean square error (RMSE) to evaluate the accuracy of the different interpolation methods. The RMSE is described by Eq. (8):

$$\mathrm{RMSE} = \sqrt{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left( I_{true}(i,j) - I_{interpolation}(i,j) \right)^{2}} \tag{8}$$

where *I _{true}*(*i*, *j*) is the value of the true image, *I _{interpolation}*(*i*, *j*) is the value of the interpolated image, and *M* and *N* represent the height and width of the image respectively. The RMSE comparison results are shown in Table 2. The bold number is the best result in each row.
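Eq. (8) translates directly to NumPy (the function name is ours):

```python
import numpy as np

def rmse(i_true, i_interp):
    """Root mean square error of Eq. (8) between true and interpolated images."""
    diff = np.asarray(i_true, dtype=float) - np.asarray(i_interp, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))
```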

Our proposed method shows the best results in Table 2, while the bilinear interpolation method introduces the largest errors. One reason for these results is that the first three methods are essentially low-pass filters that smooth the detailed information, whereas our method interpolates along edges. Our method is better than the gradient-based method because we use the intensity similarity and the polarization similarity in the diagonal interpolations and the polarization intensity correlation in the horizontal and vertical interpolations. The RMSE comparison results show that our method reconstructs polarization images with the highest accuracy.

The interpolation accuracy for different illuminations should be evaluated because DoFP polarimeters work over various dynamic ranges. We simulated different illuminations for the images shown in Fig. 4; the ratios of the original illumination were distributed every 0.05 from 0.05 to 1.2. The RMSE results of intensity and DoLP for the different illuminations are presented in Fig. 5. Figure 5(a) shows that the RMSE of intensity increases linearly with increasing illumination: the interpolation errors are determined by the intensity values, which are higher for high illumination than for low illumination. Figure 5(b) shows that the RMSE of DoLP is essentially independent of illumination and is approximately constant, since the definition of DoLP makes it independent of the absolute intensity level. Figure 5 also shows that our method has the lowest RMSE compared with the other methods.

#### 4.3. DoFP polarization image comparison

Different interpolation methods are applied to reconstruct polarization images recorded by a DoFP polarization system, so that their reconstruction performance can be evaluated visually. Real-life images of cars on a snowy day were recorded, as shown in Fig. 6. The first column presents intensity images and the second column presents DoLP images.

The first row presents reconstructed results produced by using the bilinear interpolation method; strong serrated artifacts occur at edges (red and white rectangle regions). Images presented in the second and third rows are reconstructed by using the bicubic and bicubic spline interpolation methods respectively; serrated artifacts are slightly suppressed compared with the first row. Both the gradient-based method and our method can eliminate serrated artifacts, as shown in the last two rows (the threshold of the gradient-based method is set to 60% of the CDF). However, the gradient-based method introduces slight edge distortion, as shown in the green rectangle region. Besides, the blue circle region presents the blind pixels; compared with the other methods in this section, our method minimizes the region of blind pixels. According to the interpolation results, our proposed interpolation method achieves the best visual effects.

## 5. Conclusion

In this paper, we presented a new interpolation method for division of focal plane polarization imaging sensors based on intensity correlation, together with a theoretical framework for the method. The interpolation methods were evaluated on a set of images recorded by a DoT polarization system and a DoFP polarization system. According to the RMSE comparison and visual comparison results, our proposed interpolation method outperforms the bilinear, bicubic, bicubic spline and gradient-based interpolation methods.

## References and links

**1. **T. Krishna, C. Creusere, and D. Voelz, “Passive polarimetric imagery-based material classification robust to illumination source position and viewpoint,” IEEE Trans. Image Process. **20**(1), 288–292 (2011). [CrossRef]

**2. **M. Sarkar, D. San SegundoBello, C. van Hoof, and A. Theuwissen, “Integrated polarization analyzing CMOS image sensor for material classification,” IEEE Sens. J. **11**(8), 1692–1703 (2011). [CrossRef]

**3. **Y. Y. Schechner and N. Karpel, “Recovery of underwater visibility and structure by polarization analysis,” IEEE J. Oceanic Eng. **30**(3), 570–587 (2005). [CrossRef]

**4. **F. A. Sadjadi and C. S. Chun, “Passive Polarimetric IR Target Classification,” IEEE Trans. Aerospace Electron. Syst. **37**, 740–751 (2001). [CrossRef]

**5. **B. N. Tiwari, P. J. Fay, G. H. Bernstein, A. O. Orlov, and W. Porod, “Effect of Read-Out Interconnects on the Polarization Characteristics of Nanoantennas for the Long-Wave Infrared Regime,” IEEE Trans. Nanotechnol. **12**, 270–275 (2013). [CrossRef]

**6. **G. G. Stokes, “On the composition and resolution of streams of polarized light from different sources,” Trans. Cambridge Philos. Soc. **9**, 399–416 (1852).

**7. **C. K. Harnett and H. G. Craighead, “Liquid-crystal micropolarizer array for polarization-difference imaging,” Appl. Opt. **41**(7), 1291–1296 (2002). [CrossRef] [PubMed]

**8. **C. A. Farlow, D. B. Chenault, K. D. Spradley, M. G. Gulley, M. W. Jones, and C. M. Persons, “Imaging polarimeter development and applications,” Proc. SPIE **4481**, 118 (2002). [CrossRef]

**9. **J. S. Tyo, “Hybrid division of aperture/division of a focal-plane polarimeter for real-time polarization imagery without an instantaneous field-of-view error,” Opt. Lett. **31**(20), 2984–2986 (2006). [CrossRef] [PubMed]

**10. **R. Perkins and V. Gruev, “Signal-to-noise analysis of Stokes parameters in division of focal plane polarimeters,” Opt. Express **18**, 25815–25824 (2010). [CrossRef] [PubMed]

**11. **J. S. Tyo, D. L. Goldstein, D. B. Chenault, and J. A. Shaw, “Review of passive imaging polarimetry for remote sensing applications,” Appl. Opt. **45**(22), 5453–5469 (2006). [CrossRef] [PubMed]

**12. **R. A. Chipman, “Polarization analysis of optical systems,” Opt. Eng. **28**, 90–99 (1989).

**13. **B. Ratliff, C. LaCasse, and S. Tyo, “Interpolation strategies for reducing IFoV artifacts in microgrid polarimeter imagery,” Opt. Express **17**, 9112–9125 (2009). [CrossRef] [PubMed]

**14. **S. Gao and V. Gruev, “Bilinear and bicubic interpolation methods for division of focal plane polarimeters,” Opt. Express **19**, 26161–26173 (2011). [CrossRef]

**15. **S. Gao and V. Gruev, “Image interpolation methods evaluation for division of focal plane polarimeters,” Proc. SPIE **8012**, 80120N (2011). [CrossRef]

**16. **S. Gao and V. Gruev, “Gradient-based interpolation method for division-of-focal-plane polarimeters,” Opt. Express **21**, 1137–1151 (2013). [CrossRef] [PubMed]

**17. **E. Gilboa, J. P. Cunningham, A. Nehorai, and V. Gruev, “Image interpolation and denoising for division of focal plane sensors using Gaussian processes,” Opt. Express **22**(12), 15277–15291 (2014). [CrossRef]

**18. **C. M. Xiao, Z. L. Shi, R. B. Xia, and W. Wei, “Edge-Detection Algorithm Based on Visual Saliency,” Information & Control **43**(1), 9–13 (2014).

**19. **B. M. Ratliff, T. J. Scott, J. K. Boger, W. T. Black, D. L. Bowers, and M. P. Fetrow, “Dead pixel replacement in LWIR microgrid polarimeters,” Opt. Express **15**(12), 7596–7609 (2007). [CrossRef] [PubMed]

**20. **V. Gruev, J. Van der Spiegel, and N. Engheta, “Advances in integrated polarization image sensors,” in Proceedings of IEEE Conference on Life Science Systems and Applications Workshop (IEEE, 2009), pp. 62–65.