Optica Publishing Group

Texture characterization and classification of polarized images based on multi-angle orthogonal difference

Open Access

Abstract

The Local Binary Pattern (LBP) and its variants can extract image texture and have been applied successfully to classification. However, LBP has not been used to extract and describe the texture of polarized images, and the simple LBP cannot characterize polarized texture information from different polarization angles. To solve these problems, we propose a new multi-angle orthogonal difference polarization image texture descriptor (MODP_ITD) by analyzing the relationship between the differences of orthogonal difference polarization images from different angles and the pixel intensity distribution in the local neighborhood of images from different angles. The MODP_ITD consists of three patterns: the multi-angle orthogonal difference polarization local binary pattern (MODP_LBP), the multi-angle orthogonal difference polarization local sampling point principal component sequence pattern (MODP_LPCSP), and the multi-angle orthogonal difference polarization local difference binary pattern (MODP_LDBP). The MODP_LBP extracts local texture characteristics of polarized orthogonal difference images from multiple angles. The MODP_LPCSP sorts the principal component order of the local sampling points of the orthogonal difference image at each angle. The MODP_LDBP extracts the local difference characteristics between different angles by constructing a new polarized image. The frequency histograms of MODP_LBP, MODP_LPCSP, and MODP_LDBP are then cascaded to generate the MODP_ITD, so as to distinguish local neighborhoods. Using vertically polarized, parallel polarized, and unpolarized active illumination, combined with measurements at three different detection zenith angles, we constructed a polarization texture image database. Extensive experimental results on this self-built database show that the proposed MODP_ITD can represent the detailed texture information of polarization images. In addition, compared with existing LBP methods, the MODP_ITD has a competitive advantage in classification accuracy.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Texture is a fundamental characteristic of an object’s surface, which plays a crucial role in the human perception of the world. In the field of computer vision and pattern recognition, the extraction of texture characteristics is of great significance. However, due to the complexity of texture patterns, such as roughness, smoothness, and unevenness, and the interference of uncontrollable factors such as illumination, scale, and visual angle changes, extracting discriminative and robust texture characteristics has become a challenging task. Therefore, researchers are committed to developing effective methods and algorithms to overcome these challenges and acquire dependable texture characteristics [1,2,3].

In the past few decades, methods for extracting texture characteristics from images have been widely studied; they can be roughly divided into statistics-based, model-based, and filtering-based methods [4]. Statistics-based methods extract indicators of texture characteristics and compute statistics over them. For instance, Haralick et al. [5] calculated the gray level co-occurrence matrix (GLCM) of pixels at a specific distance and direction and extracted statistical characteristics from it. Model-based approaches estimate the parameters of an assumed image model and treat these parameters as texture characteristics; typical work is that of Antosik et al. [6], who proposed and compared two new texture classification methods based on stochastic modeling with Markov random fields. Filtering-based methods compute local energy from the filtered image as the texture characteristic. For example, Unser et al. [7] employed the wavelet transform to classify and segment textures, and Clausi et al. [8] designed a Gabor filter with exceptional texture separation capability. All the above methods can express texture well, but they suffer from high characteristic dimension. Recently, the local binary pattern (LBP) has garnered significant attention for obtaining descriptors with a high recognition rate while maintaining a low feature dimension. Ojala et al. [9] combined the statistical and filtering approaches and put forward the LBP coding method, in which the intensity differences between the central pixel and its adjacent sampling points are coded and the coding histogram is used as the texture representation. Subsequently, Tan et al. [10] introduced the local ternary pattern (LTP), an enhancement of LBP exhibiting heightened texture discrimination and reduced noise sensitivity. Meanwhile, Akhloufi et al. [11] proposed the local adaptive ternary pattern (LATP) based on LTP, which automatically determines the local mode threshold using local statistical information and is more robust to illumination than LTP. Ren et al. [12] put forward the noise-resistant LBP (NRLBP), which uses an error correction mechanism to restore image modes distorted by noise. The complete LBP (CLBP) proposed by Guo et al. [13] combines three complementary descriptors: the difference sign (CLBP_S), the difference magnitude (CLBP_M), and the central pixel intensity (CLBP_C). Drawing inspiration from CLBP, Zhao et al. [14] proposed the completed local binary count (CLBC), realized by directly counting the number of "1"s in local binary sequences. Song et al. [15] showed that the traditional LBP method is not robust to illumination and rotation changes, and their LGONBP descriptor shows great advantages in this respect. Inspired by LGONBP, Xin Shu et al. [16] put forward a new global refined local binary pattern (GRLBP), which has competitive advantages in classification accuracy and characteristic dimension. Luo et al. [17] proposed an improved LBP descriptor, ${\rm LB}{P^{mr}}/GNP$, based on the local binary pattern and a global threshold segmentation pattern, which achieves excellent texture classification accuracy. Research on LBP and its variants shows that they exhibit high computational efficiency and good descriptive ability, so they have been widely applied in texture classification [18], target detection [19], and other fields.

To sum up, texture characteristics have been widely used in various fields. Prior research has made noteworthy contributions to the extraction of texture characteristics, achieving more detailed and accurate depiction of image texture attributes and providing valuable texture information. However, the impact of light polarization on texture images has been disregarded in the aforementioned research. Polarization is a property of light, and the polarization state of light can be characterized by many parameters, including the angle of polarization (AOP) and the degree of polarization (DOP) [20]. These parameters are typically calculated from the Stokes parameters, a set of values describing the polarization state of electromagnetic radiation [21]. Based on Fresnel's theory, the polarization of light indicates the relative direction of the electromagnetic wave during its propagation through a medium. In contrast to unpolarized light, polarized light has a specific propagation direction and vibration direction, so it has advantages in target enhancement, material detection, and target recognition [22].

Few studies have focused on polarization texture, although initial strides have been made in recent years. Pirard et al. [23] conducted texture analysis on granular ore materials under polarized illumination. Yuan et al. [24] proposed a novel algorithm for extracting image texture characteristics based on the Gabor filter and the CS-LBP operator; the extracted frequency components are transformed into the time domain to realize texture extraction, and the result is better than that of traditional image texture extraction algorithms. Liu et al. [25] provide more microstructure information for auxiliary diagnosis by quantifying the correlation between multiple texture features of H&E images and the polarization parameter set of Mueller matrix images of the same sample; they found a certain correlation between texture characteristics and polarization parameters via the Pearson coefficient. Serban et al. [26] proposed a novel texture characterization technique based on polarized light characteristics. They obtained texture changes in image areas from various angles by rotating the polarizer and demonstrated the method's applicability to color image segmentation of natural outdoor scenes.

Since light reflected from an object's surface becomes polarized [27], polarization information is an indispensable part of the texture representation of an image. Moreover, the polarization information carried at different polarization angles differs, so the texture information reflected at different polarization angles also differs. To extract and characterize polarization texture characteristics from diverse angles, we propose a multi-angle orthogonal difference polarization image texture descriptor (MODP_ITD). The MODP_ITD includes three patterns: the multi-angle orthogonal difference polarization local binary pattern (MODP_LBP), the multi-angle orthogonal difference polarization local principal component sequence pattern (MODP_LPCSP), and the multi-angle orthogonal difference polarization local difference binary pattern (MODP_LDBP). The MODP_LBP computes several orthogonal difference polarization images at equal-interval angles within 0$^{\circ }$-90$^{\circ }$ and extracts the texture characteristics of the images at the different orthogonal difference angles. The MODP_LPCSP obtains the relationship between local textures at different angles by sorting the principal components of all local sampling points of the orthogonal difference polarization images at different angles. The MODP_LDBP constructs a polarization difference image from the orthogonal difference polarization images at different angles and extracts its texture characteristics to reflect the differences in texture among the angles. The frequency histograms of MODP_LBP, MODP_LPCSP, and MODP_LDBP are cascaded to generate the MODP_ITD, so as to distinguish local neighborhoods. Additionally, we employ an extended coding scheme to subdivide the non-uniform modes. Our main contributions are as follows:

  • (1). Considering the characteristic description of multiple orthogonal difference polarized images, we propose a texture descriptor for polarized images, MODP_ITD, whose computation requires no training.
  • (2). We propose three patterns: MODP_LBP, MODP_LPCSP and MODP_LDBP. MODP_LBP encodes the relationship between corresponding sampling points of orthogonal difference polarization images with different angles, MODP_LPCSP encodes the principal component sequence relationship of corresponding sampling points of orthogonal difference polarization images with different angles, and the MODP_LDBP encodes the local differences of corresponding sampling points by constructing a new polarization difference image.
  • (3). Experiments conducted on our self-built dataset of polarized texture images demonstrate that the MODP_ITD algorithm outperforms the existing LBP algorithm in texture classification across various types of polarized and unpolarized incident light.

2. Related work

2.1 Polarization parameter representation method

The state of polarized light can be represented by the Stokes vector components ${S_0}$, ${S_1}$, ${S_2}$ and ${S_3}$ [28]. Because there are very few circular polarization components in nature, ${S_3}$ can be ignored [29]. The Stokes vector is defined as:

$$S = \left[ {\begin{array}{c} {{S_0}}\\ {{S_1}}\\ {{S_2}}\\ {{S_3}} \end{array}} \right] = \left[ {\begin{array}{c} {E_X^2 + E_Y^2}\\ {E_X^2 - E_Y^2}\\ {2{E_X}{E_Y}\cos \delta }\\ {2{E_X}{E_Y}\sin \delta } \end{array}} \right] = \left[ {\begin{array}{c} {{I_0} + {I_{90}}}\\ {{I_0} - {I_{90}}}\\ {{I_{45}} - {I_{135}}}\\ {{I_R} - {I_L}} \end{array}} \right]$$

On the basis of the Stokes vector, the DOLP and AOP are defined as [30]:

$$DOLP = \frac{{\sqrt {{S_1}^2 + {S_2}^2} }}{{{S_0}}}.$$
$$AOP = \frac{1}{2}\arctan \left( {\frac{{{S_2}}}{{{S_1}}}} \right).$$
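These Stokes-based quantities can be computed directly from four polarizer-angle intensity images. The following is a minimal NumPy sketch (the function names are illustrative, and `arctan2` is used instead of the plain arctan above so the AOP sign follows the quadrant of $(S_1, S_2)$):

```python
import numpy as np

def stokes_from_intensities(I0, I45, I90, I135):
    """Linear Stokes components from four polarizer-angle images."""
    S0 = I0 + I90       # total intensity
    S1 = I0 - I90       # 0 / 90 degree difference
    S2 = I45 - I135     # 45 / 135 degree difference
    return S0, S1, S2

def dolp(S0, S1, S2, eps=1e-12):
    """Degree of linear polarization; eps guards against division by zero."""
    return np.sqrt(S1 ** 2 + S2 ** 2) / (S0 + eps)

def aop(S1, S2):
    """Angle of polarization in radians."""
    return 0.5 * np.arctan2(S2, S1)
```

For a pixel that is fully linearly polarized at 0$^{\circ }$ (e.g. $I_0 = 1$, $I_{90} = 0$, $I_{45} = I_{135} = 0.5$), this yields DOLP $\approx 1$ and AOP $= 0$.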

After establishing the zero reference direction for the polarizer’s rotation angle, the transmitted light intensity $I(\theta )$ of the polarizer can be expressed for any polarization direction as follows:

$$I(\theta ) = \frac{{{S_0} + {S_1}\cos \left( {2\theta } \right) + {S_2}{\rm{sin}}\left( {2\theta } \right)}}{2}.$$

The light entering the sensor is partially polarized light, which can be decomposed into a natural light component and a fully polarized light component [31]. Compared with the natural light component, the fully polarized component provides more valuable information for target recognition and detection.

$$I(\theta ) = {I_{upol}}(\theta ) + {I_{pol}}(\theta ) = \frac{1}{2}{S_0} + \frac{{{S_1}\cos \left( {2\theta } \right) + {S_2}{\rm{sin}}\left( {2\theta } \right)}}{2}.$$
$${I_{pol}}(\theta ) = \frac{{{S_1}\cos \left( {2\theta } \right) + {S_2}{\rm{sin}}\left( {2\theta } \right)}}{2}.$$
where $I(\theta )$ is the partially polarized light obtained when the polarization detection angle is $\theta$, ${I_{upol}}(\theta )$ is the natural light component, and ${I_{pol}}(\theta )$ is the fully polarized light component.

By applying orthogonal differencing to two components whose polarization detection angles differ by $\frac {\pi }{2}$, the natural light component can be entirely eliminated, and the fully polarized component generated by reflections from the object’s surface can be acquired. The orthogonal polarization component ${I_ \bot }(\theta )$ is given as follows:

$$\begin{array}{c} {I_ \bot }(\theta ) = I(\theta ) - I(\theta + \frac{\pi }{2}) = \frac{1}{2}\left[ {\left( {{S_1}\cos \left( {2\theta } \right) - {S_1}\cos \left( {2\theta + \pi } \right)} \right) + \left( {{S_2}\sin \left( {2\theta } \right) - {S_2}\sin \left( {2\theta + \pi } \right)} \right)} \right]\\ = {S_1}\cos \left( {2\theta } \right) + {S_2}\sin \left( {2\theta } \right) \end{array}$$
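Since the orthogonal difference reduces to $S_1\cos(2\theta) + S_2\sin(2\theta)$, the $N$ equal-interval orthogonal difference images used later can be synthesized from the Stokes components without extra acquisitions. A sketch, assuming a sampling grid of $\theta_n = n \cdot 90^{\circ}/N$ (the exact grid is not fixed by the text):

```python
import numpy as np

def orthogonal_difference_images(S1, S2, N):
    """N equal-interval orthogonal difference images over 0-90 degrees,
    using I_perp(theta) = S1*cos(2*theta) + S2*sin(2*theta) (Eq. (7)).
    The sampling grid theta_n = n * 90/N degrees is an assumption."""
    thetas = np.deg2rad(np.arange(N) * 90.0 / N)
    return [S1 * np.cos(2 * t) + S2 * np.sin(2 * t) for t in thetas]
```

For $S_1 = 1$, $S_2 = 0$, the image at $\theta = 0^{\circ}$ equals $S_1$ and the image at $\theta = 45^{\circ}$ vanishes, matching the extinction behaviour described below.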

Figure 1 shows the difference in polarization imaging when the polarizer is at different angles. The polarization detection angle is 45$^{\circ }$ in Fig. 1(a) and 135$^{\circ }$ in Fig. 1(b). It is evident that when the polarization detection angle aligns with that of the polarizer, the central region of the polarizer transmits light. Conversely, when there is a 90$^{\circ }$ orthogonal difference between them, the central portion of the polarizer undergoes extinction, and the linearly polarized light at 135$^{\circ }$ cannot pass the 45$^{\circ }$ polarization detection angle. Fig. 1(c) depicts the image $I({45^ \circ }) - I({135^ \circ })$, which we define as the orthogonal difference image ${I_ \bot }({45^ \circ })$; similarly, Fig. 1(d) presents the orthogonal difference image ${I_ \bot }({135^ \circ })$.


Fig. 1. Orthogonal difference diagram of different angles.


2.2 LBP coding

For any central pixel in an image, its basic LBP [9] descriptor is defined as:

$$LB{P_{R,P}}(m) = \sum_{p = 0}^{P - 1} {s({g_p} - {g_c}){2^p}} .$$
$$s(x) = \left\{ {\begin{array}{cc} 1 & {x \ge 0}\\ 0 & {x < 0} \end{array}} \right.$$
where ${g_c}$ represents the value of the central pixel, ${g_p}\,(p = 0,1, \cdots,P - 1)$ corresponds to the values of the neighboring pixels on the circle of radius $R$, $P$ is the number of sampling neighbors, and $m$ is the serial number of the central pixel, with range ${m = 0,1, \cdots,M - 1}$. When a sampling point does not fall exactly on a pixel location, its value can be estimated by bilinear interpolation. Ojala et al. [9] found that certain patterns occur with a higher proportion in LBP, so they proposed the uniform coding $LB{P^{U2}}$. They defined a uniformity measure $U$ that counts the number of transitions between "0" and "1" in the cyclic binary sequence, defined as follows:
$$U(LB{P_{R,P}}) = |s({g_{P - 1}} - {g_c}) - s({g_0} - {g_c})| + \sum_{p = 1}^{P - 1} {|s({g_p} - {g_c}) - s({g_{p - 1}} - {g_c})|}.$$

When $U \le 2$, the LBP pattern is uniform; otherwise, it is non-uniform. Uniform coding reduces the characteristic dimension of LBP from ${2^P}$ to ${P(P - 1) + 3}$.

To enhance robustness to image rotation and further reduce the characteristic dimension, $LB{P^{riu2}}$ is defined as:

$$LBP_{R,P}^{riu2}(l) = \left\{ {\begin{array}{cc} {\sum_{p = 0}^{P - 1} {s({g_p} - {g_c})} } & {U(LB{P_{R,P}}) \le 2}\\ {P + 1} & {otherwise} \end{array}} \right.$$

The characteristic dimension of $LB{P^{riu2}}$ is only ${P+2}$.
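For reference, the $LB{P^{riu2}}$ coding above can be sketched for the simplest configuration ($R = 1$, $P = 8$, where no interpolation is needed); this minimal illustration is not the descriptor proposed in this paper:

```python
import numpy as np

def lbp_riu2(patch):
    """Rotation-invariant uniform LBP code for the centre pixel of a
    3x3 patch (R=1, P=8)."""
    gc = patch[1, 1]
    # the 8 neighbours in circular order around the centre
    gp = np.array([patch[1, 2], patch[0, 2], patch[0, 1], patch[0, 0],
                   patch[1, 0], patch[2, 0], patch[2, 1], patch[2, 2]])
    s = (gp >= gc).astype(int)           # s(x) = 1 if x >= 0, else 0
    U = int(np.sum(s != np.roll(s, 1)))  # circular 0/1 transition count
    return int(s.sum()) if U <= 2 else len(gp) + 1  # P+1 for non-uniform
```

A flat patch is uniform ($U = 0$) and codes to $P = 8$, while a checkerboard-like neighbourhood has $U = 8$ and falls into the single non-uniform bin $P + 1 = 9$.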

3. Polarization texture extraction method based on multi-angle orthogonal difference

3.1 MODP_LBP

The Stokes vector and DOLP can be used as polarization characteristics of light and yield good results in distinguishing objects, as they reflect the differences and invariances among polarization components in various directions. For objects with smooth surfaces, the disparities between different polarization detection angles are evident, but for objects with rough surfaces, the differences between polarization detection angles are small. Polarization imaging distinguishes and identifies objects by exploiting these directional differences in the polarization components.

When collecting polarized images, it is usually difficult to capture all the valuable information in one direction because of the random and irregular distribution of surface orientations on the target object, and the result of the orthogonal polarization difference between adjacent polarization detection angles is very small [32]. Moreover, acquiring or calculating orthogonal difference images for all angles is time-consuming and computationally intensive, and the processed images hardly reflect the polarization texture of the object. To approximate the polarization components in every direction, multiple orthogonal difference polarization images can be computed at polarization detection angles a fixed interval apart.

When acquiring polarized images at different angles, it is crucial to consider the range of orthogonal difference angles. As shown in Fig. 2, we designated the region within the red frame of Fig. 2(a) as the target position and computed the average pixel value of the orthogonal difference for angles ranging from 0$^{\circ }$-180$^{\circ }$. Figures 2(b) and 2(c) demonstrate that the orthogonal difference angles spanning 0$^{\circ }$-90$^{\circ }$ and 90$^{\circ }$-180$^{\circ }$ exhibit opposite trends in pixel value. Referring to Fig. 3, encoding the orthogonal difference images at 45$^{\circ }$ and 135$^{\circ }$ yields different difference vectors and sign codes, yet their final encodings are identical. Consequently, this hinders effective differentiation of the characteristics of orthogonal difference images with different polarizations in subsequent calculations. To address this, we consider that, for the purpose of texture characteristic extraction, the polarization texture attributes of orthogonal difference images within 0$^{\circ }$-90$^{\circ }$ and within 90$^{\circ }$-180$^{\circ }$ are consistent. Therefore, to avoid redundant characteristic extraction, we extract texture only from orthogonal difference images within the range of 0$^{\circ }$-90$^{\circ }$.


Fig. 2. Relationship between polarization angle and pixel value.



Fig. 3. Local coding process of polarization orthogonal difference image.


We obtain $N$ orthogonal difference images at equal intervals within the orthogonal difference of 0$^{\circ }$-90$^{\circ }$, and extract the texture characteristics of the $N$ orthogonal difference images to obtain the texture coding of multi-angle polarized orthogonal difference images. The specific process is as follows:

  • a. The polarized DOFP image is decomposed by Newton polynomial interpolation [33], and orthogonal difference images $I_ \bot ^n$ with equal angular intervals are obtained by Eq. (7), where the superscript $n$ denotes the $n$th orthogonal difference image, $n = \theta /({90^ \circ }/N)$, $(N \in {{\rm N}^ + };{0^ \circ } < \theta < {90^ \circ })$.
  • b. To reduce the influence of noise on the texture description, when evaluating the central pixel and its adjacent sampling points we compute the average gray value of the $\omega \times \omega$ pixel block around each point in the $N$ orthogonal difference angle images.
    $${\overline o ^n}(x,y) = \frac{1}{{{\omega ^2}}}\sum_{i = x - \frac{\omega }{2}}^{x + \frac{\omega }{2}} {\sum_{j = y - \frac{\omega }{2}}^{y + \frac{\omega }{2}} {I_ \bot ^n(i,j)} }.$$
    where ${\overline o ^n}(x,y)$ is the average value of the $\omega \times \omega$ pixel block and $I_ \bot ^n(i,j)$ is the pixel value of the orthogonal difference image.
  • c. Obtain the local amplitude sequence of the orthogonal difference polarization texture image at each angle by calculating the difference between the average value of the central pixel and the average values of its adjacent sampling points:
    $$(g_{r,0}^n, \ldots ,g_{r,P - 1}^n) = (\overline o _{r,0}^n, \ldots ,\overline o _{r,P - 1}^n) - \overline o _c^n.$$
    where $\overline o _c^n$ represents the average value of the central pixel, $\overline o _{r,p}^n$ represents the average value of a neighboring sampling point, and $g_{r,p}^n$ represents the difference between the two; the subscript $c$ denotes the central point, $r$ the radius of the pixel block, and $p$ the index of the neighboring sampling point.
  • d. To encode the local amplitude sequence of the orthogonal difference polarization texture at each angle, we employ the extended "eriu2" coding, obtaining the orthogonal difference polarization local binary pattern (ODP_LBP) at each angle:
    $$ODP\_LBP_{r,P}^{n,eriu2} = \left\{ {\begin{array}{cc} {\sum\limits_{p = 0}^{P - 1} {s(g_{r,p}^n)} } & {U \le 2}\\ {P + 1} & {U = 4}\\ {P + 2} & {U = 6}\\ {P + 3} & {U = 8}\\ {P + 4} & {Others} \end{array}} \right.$$
    $$U(ODP\_LBP_{r,P}^{n,eriu2}) = |s(g_{r,P - 1}^n) - s(g_{r,0}^n)| + \sum\limits_{p = 1}^{P - 1} {|s(g_{r,p}^n) - s(g_{r,p - 1}^n)|}.$$
  • e. MODP_LBP is obtained by cascading the ODP_LBP results from the $N$ angles.
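The extended "eriu2" coding of step d can be sketched as follows, given a circular binary sign sequence $s$ already computed from the local amplitude sequence (the function name is illustrative):

```python
import numpy as np

def eriu2_code(s, P):
    """Extended 'eriu2' coding: uniform patterns (U <= 2) keep their bit
    count; non-uniform patterns are subdivided by the circular transition
    count U into P+1 (U=4), P+2 (U=6), P+3 (U=8) and P+4 (all others)."""
    s = np.asarray(s)
    U = int(np.sum(s != np.roll(s, 1)))  # circular 0/1 transition count
    if U <= 2:
        return int(s.sum())
    return {4: P + 1, 6: P + 2, 8: P + 3}.get(U, P + 4)
```

Unlike "riu2", which collapses every non-uniform pattern into one bin, this coding keeps four separate non-uniform bins, which is what makes the statistics of Fig. 4 possible.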

In Eq. (15), $ODP\_LB{P^{eriu2}}$ encodes the traditional non-uniform patterns (${U> 2}$) into distinct values. These non-uniform patterns, which correspond to complex texture structures, occupy a considerable proportion of $ODP\_LB{P^{eriu2}}$. To verify this, we set ${r = 5}$ and ${P = 24}$ and conducted experiments on the self-constructed database. As shown in Fig. 4, for the three cases where the incident light is vertically polarized, parallel polarized, and unpolarized, we computed statistics over the image textures at the different zenith angles in each case; the modes with $U$ values of 4-8 account for a large proportion (63.2%-85.6%). Therefore, although texture rotation is not discussed in this paper, the statistics show that the adopted "eriu2" extended coding captures richer texture characteristics than the traditional "riu2".


Fig. 4. $ODP\_LBP_{r,P}^{n,eriu2}$ extended coding proportion diagram.


3.2 MODP_LPCSP

Although the textures of orthogonal difference polarization images vary across angles, certain areas exhibit similar or identical characteristics, so the degree of variation differs among the images from different angles, and the order of these variations can itself differentiate textures. Accordingly, we design a multi-angle orthogonal difference polarization local sampling point principal component sequence pattern (MODP_LPCSP). Taking the absolute value of the difference between the neighborhood and the central pixel of each angle's image yields several absolute-difference sequences; the sequences of corresponding sampling points across angles are analyzed by principal component analysis, and the eigenvector corresponding to the largest eigenvalue is sorted and coded to obtain the multi-angle orthogonal difference polarization texture characteristic. This approach encodes the proportion of principal components in the multi-angle textures while preserving the spatial relationships within each angle-specific image.

As shown in Fig. 5, the construction process of the MODP_LPCSP is as follows:

  • a. Interpolate and decompose the polarized DOFP image, and obtain multiple orthogonal difference images $I_ \bot ^n$ with equal angular intervals by Eq. (7).
  • b. To reduce the influence of noise on texture description, the average gray value of pixel blocks of size $\omega \times \omega$ around the central pixel and its adjacent sampling points is calculated.
  • c. Calculate the difference between the average value of the central pixel and the average values of its neighboring sampling points to obtain the local absolute-amplitude sequence of the orthogonal difference polarization texture image at the $n$th angle.
    $$L_{r,P}^n = (l_{r,0}^n, \ldots ,l_{r,P - 1}^n) = abs((\overline o _{r,0}^n, \ldots ,\overline o _{r,P - 1}^n) - \overline o _c^n).$$
  • d. The principal component analysis method is used to analyze the local amplitude sequences of the $N$ orthogonal difference polarization texture images, and the eigenvector corresponding to the largest eigenvalue is obtained. Firstly, the covariance of the local amplitude sequences corresponding to the $N$ directions is calculated:
    $$C = {\mathop{\rm cov}} (L_{r,P}^0, \ldots ,L_{r,P}^{N - 1}).$$

    Next, the eigenvalues and eigenvectors are computed: eigenvalue decomposition of the covariance matrix $C$ yields the eigenvalues ${\lambda _0},{\lambda _1}, \ldots,{\lambda _{N - 1}}$ and the corresponding eigenvectors ${v_0},{v_1}, \ldots,{v_{N - 1}}$. The eigenvector corresponding to the largest eigenvalue is selected, and its components, one per angle, are ranked to obtain the sequential order:

    $$(D_r^0, \ldots ,D_r^{N - 1}) = sort({v_0}, \ldots ,{v_{N - 1}}).$$
    where ${D^n}$ represents the rank assigned to the local amplitude sequence of the $n$th orthogonal difference image after sorting.

  • e. We encode the sequence number of the local amplitude sequence to obtain the encoding of the local amplitude sequence of the $N$ angles orthogonal difference polarization texture image:
    $$MODP\_LPCS{P_{r,P,N}} = f({D^0}, \ldots ,{D^{N - 1}}).$$
    where $f( \cdot )$ is a mapping function that converts each ranked sequence into a unique integer; it can be realized via a lookup table [34].
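Steps c-e can be sketched as below. The covariance and eigen-decomposition follow the equations above; the permutation-to-integer mapping $f$ is implemented here with a Lehmer code as a stand-in assumption for the lookup table of [34], and the sign ambiguity inherent to PCA eigenvectors is not resolved in this sketch:

```python
import numpy as np

def lpcsp_code(L):
    """MODP_LPCSP code for one pixel. L is an (N, P) array whose n-th row
    is the local absolute-amplitude sequence of the n-th orthogonal
    difference image."""
    C = np.atleast_2d(np.cov(L))   # (N, N) covariance across the N angles
    w, V = np.linalg.eigh(C)       # eigenvalues and eigenvectors
    v = V[:, np.argmax(w)]         # eigenvector of the largest eigenvalue
    order = np.argsort(v)          # rank its N components (one per angle)
    # Map the resulting permutation to a unique integer in [0, N!) via a
    # Lehmer code -- an assumed stand-in for the lookup-table mapping f.
    code, perm = 0, list(order)
    for i, x in enumerate(perm):
        smaller = sum(1 for y in perm[i + 1:] if y < x)
        code = code * (len(perm) - i) + smaller
    return int(code)
```

Because the code is a permutation index, it is compact: for $N$ angles it occupies at most $N!$ histogram bins regardless of $P$.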

The MODP_LPCSP proposed in this section has the following properties. Firstly, compact coding with high correlation is obtained across the $N$ polarized orthogonal difference texture images, which supports a low-dimensional histogram representation. Secondly, the MODP_LPCSP coding improves noise resistance by averaging over surrounding pixels. Thirdly, the MODP_LPCSP coding has rich local characteristics, including the local difference amplitude and the principal component order of the sampling points corresponding to the orthogonal difference textures at all angles, none of which have been explored in LBP.


Fig. 5. Coding process of MODP_LPCSP.


3.3 MODP_LDBP

The texture details of polarized orthogonal difference images differ across angles. To highlight the texture difference at each angle, we use the $N$-angle orthogonal difference images to calculate a polarization difference image. Compared with $DOLP$, $S_0$, and the orthogonal difference images of the individual angles, this image has richer texture detail and higher texture energy, and it highlights the differences among the orthogonal difference polarized images at different angles. We propose to compute the maximum difference at the same pixel position across the multiple orthogonal difference images to obtain the polarization difference image. The image reflects the degree of polarization generated by the reflection of natural light on the object's surface within the same scene: a larger polarization difference indicates that a region is more prone to polarization, while a smaller difference suggests it is less likely to be polarized. The orthogonal difference polarization difference image ${I_{ODPD}}$ is defined as:

$${I_{ODPD}}(i,j) = \max (I_ \bot ^0(i,j), \cdots ,I_ \bot ^{N - 1}(i,j)) - \min (I_ \bot ^0(i,j), \cdots ,I_ \bot ^{N - 1}(i,j)).$$
where $N$ is the number of polarized orthogonal difference images, $\max (I_ \bot ^0(i,j), \cdots ,I_ \bot ^{N - 1}(i,j))$ represents the maximum pixel value at position $(i,j)$ over the $N$ polarized orthogonal difference images, and similarly $\min (I_ \bot ^0(i,j), \cdots ,I_ \bot ^{N - 1}(i,j))$ represents the minimum pixel value at position $(i,j)$.
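The definition above is a per-pixel range over the stack of orthogonal difference images, which in NumPy is a two-line sketch (the function name is illustrative):

```python
import numpy as np

def odpd_image(images):
    """Orthogonal difference polarization difference image I_ODPD:
    per-pixel range (max - min) over the N orthogonal difference images."""
    stack = np.stack(images, axis=0)   # shape (N, H, W)
    return stack.max(axis=0) - stack.min(axis=0)
```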

By calculating the energy, variance, angular second moment, and entropy of the gray level co-occurrence matrix for the three types of incident light in the self-built data set, we obtained the average data shown in Fig. 6. Figures 6(a), (b) and (c) represent the four evaluation indexes of the gray level co-occurrence matrix (GLCM) for images acquired under vertically polarized, parallel polarized, and unpolarized incident light, respectively, where vertical and parallel refer to the relationship between the polarization direction of the light and the plane of incidence. Figures 6(d), (e) and (f) compare the sums of the energy, variance, angular second moment, and entropy at detection zenith angles of 30$^{\circ }$, 45$^{\circ }$, and 60$^{\circ }$ for the three incident light conditions. The image ${I_{ODPD}}$ has obvious advantages in variance, angular second moment, and entropy; in energy, although $S_0$ scores better than ${I_{ODPD}}$, the numerical difference between the two is very small. In summary, this demonstrates that the proposed maximum-polarization-difference image ${I_{ODPD}}$ has clear advantages in reflecting texture details.

Fig. 6. Comparison of evaluation indexes of polarized texture images in self-built data sets.

By extracting the texture characteristics of image ${I_{ODPD}}$, its coding can be obtained. Because its texture coding process is the same as that of a single orthogonal difference polarization image, the texture characteristic MODP_LDBP of image ${I_{ODPD}}$ can be obtained through the steps in Section 3.1.

3.4 Combined polarization texture characteristics

For each image texture characteristic MODP_LBP, MODP_LPCSP, and MODP_LDBP, a frequency histogram is constructed, denoted ${H_{MODP\_LBP}}$, ${H_{MODP\_LPCSP}}$, and ${H_{MODP\_LDBP}}$, respectively. As shown in Fig. 7, these histograms are cascaded to establish the polarization orthogonal difference texture characteristic representation. The specific process is as follows:

$${H_{MODP\_ITD}} = [{H_{MODP\_LBP}},{H_{MODP\_LPCSP}},{H_{MODP\_LDBP}}].$$
$${H_{MODP\_LBP}} = [{H_{ODP\_LB{P^0}}},{H_{ODP\_LB{P^1}}}, \ldots ,{H_{ODP\_LB{P^{N - 1}}}}].$$
$${H_{ODP\_LBP}}({k_1}) = \sum\limits_{m = 0}^{M-1} {h(ODP\_LBP_{r,P}^N(m),{k_1})}.$$
$${H_{MODP\_LPCSP}}({k_2}) = \sum\limits_{m = 0}^{M-1} {h(MODP\_LPCS{P_{r,P}}(m),{k_2})}.$$
$${H_{MODP\_LDBP}}({k_3}) = \sum\limits_{m = 0}^{M-1} {h(MODP\_LDB{P_{r,P}}(m),{k_3})}.$$
$$h(x,y) = \left\{ {\begin{array}{cc} 1 & {x = y}\\ 0 & {others} \end{array}} \right.$$

Fig. 7. Process of MODP_ITD cascade.

Here, $0 \le {k_1} \le P + 4$, $0 \le {k_2} \le P$, and $0 \le {k_3} \le P + 4$. Therefore, the dimension of the frequency histogram ${H_{MODP\_ITD}}$ is $P(N + 2) + 8$. ${H_{MODP\_ITD}}$ extracts the texture characteristics of the orthogonal difference polarization image from multiple angles, and can therefore capture structures whose texture changes strongly across angles. At the same time, the difference of the multi-angle orthogonal difference images is computed, and the proportions of the components contained at different angles are sorted. ${H_{MODP\_ITD}}$ thus extracts polarization texture characteristics from several complementary aspects, preserving the texture information of the polarization image to the greatest extent.
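The cascade defined above is a plain concatenation of the sub-histograms; a minimal sketch, with function and argument names of our choosing and the sub-histogram lengths left to the caller:

```python
import numpy as np

def cascade_histograms(h_lbp_per_angle, h_lpcsp, h_ldbp):
    """H_MODP_ITD = [H_MODP_LBP, H_MODP_LPCSP, H_MODP_LDBP].

    The per-angle ODP_LBP histograms are concatenated first to form
    H_MODP_LBP, then the two remaining sub-histograms are appended.
    """
    h_modp_lbp = np.concatenate(h_lbp_per_angle)
    return np.concatenate([h_modp_lbp, h_lpcsp, h_ldbp])
```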

4. Experiment and analysis

To verify the validity of the proposed method, we first investigate the effects of the parameters on classification performance and fine-tune them to achieve a trade-off between computational efficiency and classification accuracy. We then compare the performance of ${H_{MODP\_ITD}}$ with other advanced LBP-variant texture descriptors on the self-built polarization texture datasets.

In the experiment, a simple nearest neighbor classifier (NNC) is used because the emphasis is on image characteristic representation. The distance between two frequency histograms is measured using the chi-square statistic:

$${d_{{x^2}}}({H_1},{H_2}) = \sum\limits_k {\frac{{{{[{H_1}(k) - {H_2}(k)]}^2}}}{{{H_1}(k) + {H_2}(k)}}}.$$
where ${k}$ is the histogram bin index, and ${{H_1}}$ and ${{H_2}}$ denote the test and training histograms, respectively.
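The chi-square distance and the NNC can be sketched as follows; the small `eps` guard against empty bins is our addition, not part of the formula above:

```python
import numpy as np

def chi2_distance(h1, h2, eps=1e-12):
    """Chi-square statistic between two frequency histograms; eps (our
    addition) avoids division by zero where h1 + h2 = 0."""
    h1, h2 = np.asarray(h1, dtype=float), np.asarray(h2, dtype=float)
    return float((((h1 - h2) ** 2) / (h1 + h2 + eps)).sum())

def nnc_predict(test_hist, train_hists, train_labels):
    """Nearest neighbor classifier: return the label of the training
    histogram closest to the test histogram under the chi-square distance."""
    d = [chi2_distance(test_hist, h) for h in train_hists]
    return train_labels[int(np.argmin(d))]
```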

4.1 Polarization texture image dataset

Since the existing texture image datasets TC10, TC12_000, and TC12_001 contain only intensity images and no polarization images, we constructed our own polarization texture image dataset.

As shown in Fig. 8, the dataset covers 4 materials: fabric, plexiglass, mica sheet, and slate. Each material has 3 different textures, giving a total of 12 distinct textures. As shown in Fig. 9, to reduce the influence of external factors during the experiment, we employed a BRDF measuring instrument to capture polarization texture images of these 12 textures under active lighting. During acquisition, the exposure time and gain of the DOFP camera were fixed, and vertically polarized light, parallel polarized light, and unpolarized light emitted by an LED were used as the incident light. The incident light was fixed at a zenith angle ${Z_i}$ of 40$^{\circ }$ and an azimuth angle $\varphi$ of 180$^{\circ }$, and texture polarization images were captured from different positions by adjusting the detection zenith angle ${Z_r}$. Although the detection zenith angle can in principle range from 0$^{\circ }$ to 90$^{\circ }$, we found experimentally that too small a zenith angle leaves insufficient reflected light at the detector, while too large a zenith angle causes severe texture distortion and too shallow a depth of field. We therefore chose 30$^{\circ }$, 45$^{\circ }$, and 60$^{\circ }$ as the detection zenith angles.

Fig. 8. Polarization texture image.

Fig. 9. Schematic diagram of polarization texture acquisition device.

A division of focal plane (DOFP) polarization camera is used as the polarization texture collector. It adopts a SONY IMX-250-MZR sensor with a resolution of 2448$\times$2048 and a pixel size of 3.45$\times$3.45$\mathrm{\mu}$m, and it acquires intensity images at four polarization detection angles: 0$^{\circ }$, 45$^{\circ }$, 90$^{\circ }$, and 135$^{\circ }$. However, the DOFP sensor "sacrifices space for time": four adjacent pixels image the four angles simultaneously. This causes two problems: (1) instantaneous field-of-view error; (2) the resolution of each single-angle image is reduced by half in each dimension compared with the original DOFP image. To solve these problems, we use the Newton polynomial interpolation method to interpolate the 0$^{\circ }$, 45$^{\circ }$, 90$^{\circ }$, and 135$^{\circ }$ images, restoring the resolution to 2448$\times$2048 and reducing the impact of instantaneous field-of-view errors.
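The DOFP decomposition can be illustrated with a short sketch. The 2$\times$2 superpixel layout [[90$^{\circ }$, 45$^{\circ }$], [135$^{\circ }$, 0$^{\circ }$]] is the commonly documented IMX250MZR arrangement (an assumption on our part), and the nearest-neighbor upsampling is only a simplified stand-in for the Newton polynomial interpolation actually used:

```python
import numpy as np

def split_dofp(mosaic):
    """Separate a DOFP mosaic into its four polarization channels.

    Assumes the 2x2 superpixel layout [[90, 45], [135, 0]] (degrees);
    each channel comes out at half resolution in each dimension, which
    is why the channels are interpolated afterwards.
    """
    return {
        90:  mosaic[0::2, 0::2],
        45:  mosaic[0::2, 1::2],
        135: mosaic[1::2, 0::2],
        0:   mosaic[1::2, 1::2],
    }

def upsample_nn(channel, shape):
    """Nearest-neighbor stand-in for the Newton polynomial interpolation:
    restores a channel to the full mosaic resolution."""
    r = np.repeat(np.repeat(channel, 2, axis=0), 2, axis=1)
    return r[:shape[0], :shape[1]]
```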

Observing the collected texture images, we found that the material texture differs slightly from location to location, so the imaged textures also differ. To increase the amount of sample data and reduce the per-image texture computation time, we evenly split each DOFP image of resolution 2048$\times$2448 into 64 sub-images of resolution 256$\times$306, verifying that the texture details remain rich at this resolution.
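The even split amounts to slicing an 8$\times$8 grid; a minimal sketch (function name is ours):

```python
import numpy as np

def split_into_subimages(img, rows=8, cols=8):
    """Evenly split an image into a rows x cols grid of sub-images.

    For a 2048x2448 DOFP image and the default 8x8 grid this yields
    the 64 sub-images of 256x306 used to build the dataset.
    """
    h, w = img.shape[:2]
    sh, sw = h // rows, w // cols
    return [img[r * sh:(r + 1) * sh, c * sw:(c + 1) * sw]
            for r in range(rows) for c in range(cols)]
```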

For each DOFP image, 20 sub-images were randomly selected as the training set, while the remaining 44 sub-images were designated as the test set. Considering the three incident light conditions and the three detection zenith angles, this results in nine distinct data categories. Each category comprises 12 different textures, leading to 768 DOFP-decomposed sub-images per category: 240 in the training set and 528 in the test set. Consequently, the complete dataset comprises 6912 sub-images, with 2160 in the training set and 4752 in the test set.
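The sample counts above can be checked with a few lines of arithmetic (variable names are ours):

```python
textures = 12                       # distinct textures (4 materials x 3 each)
subs_per_dofp = 64                  # sub-images per DOFP image
train_per_dofp, test_per_dofp = 20, 44
categories = 3 * 3                  # incident light types x detection zenith angles

per_category = textures * subs_per_dofp                 # sub-images per category
train_total = categories * textures * train_per_dofp    # training sub-images
test_total = categories * textures * test_per_dofp      # test sub-images
total = train_total + test_total                        # complete dataset
```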

4.2 Parameter analysis

The performance of the proposed MODP_ITD depends on three parameters: the number of orthogonal difference images $N$, the sampling radius $r$, and the number of sampling points $P$. Appropriate parameter settings allow our method to achieve higher classification accuracy and remain applicable to general texture image classification. We first estimate the effect of $N$ on the classification performance of the descriptor; after determining the optimal $N$, we evaluate $r$ and $P$. To minimize the impact of noise on texture extraction, we select ${r = 3}$ and ${P = 24}$ when evaluating $N$.

As shown in Fig. 10, we conducted texture characteristic extraction and classification experiments for ${N=3}$, ${N=4}$, and ${N=5}$. With ${N=4}$, the classification accuracy on the nine texture datasets was generally better than with ${N=3}$ or ${N=5}$. Analysis showed that with ${N=3}$ the dimension of the ${H_{MODP\_ITD}}$ frequency histogram was only 128, and in particular the frequency histogram of the MODP_LPCSP subdescriptor had a dimension of only 6. This is inadequate for extracting and describing the textures of the orthogonal difference polarization images at different angles and the texture differences among those angles; consequently, the classification accuracy for ${N=3}$ was worse than for ${N=4}$. With ${N=5}$, although the dimension of the ${H_{MODP\_ITD}}$ frequency histogram reached 248, the additional angles introduced more noise into the extracted texture characteristics and blurred the texture differences among angles, thereby reducing classification accuracy and significantly increasing the time required for texture extraction and coding. Based on this analysis and a careful trade-off between classification accuracy and dimensionality, we set the number of orthogonal difference polarization images to ${N=4}$ in the subsequent experiments.

Fig. 10. Comparison of classification accuracy of parameter ${N}$ under different incident light and different detection zenith angles.

Next, we evaluate the sampling radius ${r}$ and the number of sampling points ${P}$. Since ${r}$ and ${P}$ are strongly correlated (increasing ${r}$ generally requires increasing ${P}$), we set up five $(r, P)$ combinations: $(1, 8)$, $(2, 16)$, $(3, 24)$, $(5, 24)$, and $(7, 24)$. The results are shown in Fig. 11, where Figs. 11(a), (b), and (c) show the classification accuracies of the five combinations at detection zenith angles of 30$^{\circ }$, 45$^{\circ }$, and 60$^{\circ }$ under vertically polarized, parallel polarized, and unpolarized incident light, and Fig. 11(d) shows the averages over the three illumination conditions. For detection zenith angles of 30$^{\circ }$ and 45$^{\circ }$, the $(3, 24)$ combination was generally better than $(1, 8)$ and $(2, 16)$ across all three types of incident light. However, at a detection zenith angle of 60$^{\circ }$, the $(3, 24)$ combination was worse than $(5, 24)$ and $(7, 24)$. The reason is that although the sub-images have rich texture details, the points and lines in the texture are scattered; at a detection zenith angle of 60$^{\circ }$ the collected image is distorted, which further widens the distance between points and lines, so a larger sampling radius describes the texture better than a smaller one.

Fig. 11. Comparison of classification accuracy of parameters ${r}$ and ${P}$ under different incident light and different detection zenith angles.

4.3 Comparison experiments of the LBP variant

To comprehensively evaluate the performance of MODP_ITD, we compared it with various LBP variants on the self-constructed polarization texture dataset. In our experiments, we used the publicly available implementations of LBP [9], LTP [10], CLBP [14], LGONBP [15], and ${\rm LB}{P^{mr}}/GNP$ [17] to validate the effectiveness of these characteristic descriptors. This verification is necessary because LBP, LTP, CLBP, LGONBP, and ${\rm LB}{P^{mr}}/GNP$ were designed as texture descriptors for unpolarized images. We used the S0 images as input for the six methods, setting $r$ of LBP, LTP, CLBP, and LGONBP to 3, 5, and 7 in turn while keeping ${P = 24}$. The parameters of ${\rm LB}{P^{mr}}(K = 0.8)/GN{P_{{N^{'}} = 2}}$ differ slightly from the other methods: $K$ and $N'$ are related to the dimension of the texture extraction process, consistent with the original paper, and the method selects $(r,P)$ via multi-scale combinations of the form $(r,P) = (1,8) + \cdots + ({n^{'}},8)$; we set ${{n^{'}}=5}$ by comparison. To ensure fair comparisons, the classification accuracies of all methods in this subsection were computed with the nearest neighbor classifier.

4.3.1 Comparison of LBP variants for samples with vertical polarized incident light

From Table 1, it is evident that when the incident light is vertically polarized, MODP_ITD outperforms the other LBP variants at all ($r$, $P$) values and all three zenith angles, with average classification accuracies of 81.06%, 79.87%, and 78.87%, respectively. Moreover, the standard deviation of its classification accuracies across the three zenith angles is minimal, indicating strong adaptability to the polarization changes, brightness changes, and texture distortion caused by varying zenith angles. CLBP is second only to MODP_ITD in both average classification accuracy and standard deviation. Although LGONBP outperforms CLBP at a zenith angle of 60$^{\circ }$, it is inferior to CLBP at 30$^{\circ }$ and 45$^{\circ }$ and in standard deviation. ${\rm LB}{P^{mr}}(K = 0.8)/GN{P_{{N^{'}}= 2}}$ is moderate compared with the other five methods in average classification accuracy under the different ($r$, $P$) conditions. In summary, MODP_ITD is superior for classifying texture images under vertically polarized incident light.

Table 1. Comparison of accuracy of classification of vertical polarized incident light.

4.3.2 Comparison of LBP variants for samples with parallel polarized incident light

As shown in Table 2, under parallel polarized incident light, MODP_ITD achieves average classification accuracies of 66.36%, 65.26%, and 65.04% at the three zenith angles, respectively, superior to the other five methods. Although the standard deviation of its average classification accuracies is slightly worse than that of $LB{P^{riu2}}$, its classification accuracies are higher. We also observed that classification accuracy is notably lower under parallel light than under vertically polarized light. This can be attributed to the reduction in the degree of polarization caused by diffuse reflection at the object's surface, which lowers image brightness and quality and in turn reduces classification accuracy. In summary, the feasibility of MODP_ITD for classifying texture images under parallel polarized incident light is demonstrated.

Table 2. Comparison of accuracy of classification of parallel polarized incident light.

4.3.3 Comparison of LBP variants for samples with unpolarized incident light

As can be seen from Table 3, when the incident light is unpolarized, the illuminating intensity on the object is twice that of the vertically or parallel polarized cases. Here, the average classification accuracies of our method over the different ($r$, $P$) values at the three zenith angles are 78.57%, 79.86%, and 79.76%, respectively. Compared with LGONBP, the average classification accuracy of our method is slightly lower, placing it in the middle of the six methods; however, its standard deviation of average classification accuracy is the best. At a zenith angle of 30$^{\circ }$, our method shows clear advantages: the configurations $(r,P)=(3,24)$ and $(r,P)=(5,24)$ achieve the highest classification accuracy, while $(r,P)=(7,24)$ is second only to CLBP. In summary, our method sustains high classification accuracy on texture images captured under the higher-intensity unpolarized light, and its accuracy does not vary significantly across detection zenith angles.

Table 3. Comparison of accuracy of classification of unpolarized incident light.

5. Conclusion

In this work, we analyze the differences between multi-angle orthogonal difference polarization images and explore their local neighborhoods, and we propose a new method, MODP_ITD, for extracting texture characteristics from polarization images. MODP_ITD comprises three patterns: MODP_LBP, MODP_LPCSP, and MODP_LDBP. MODP_LBP describes and relates the local neighborhoods of multi-angle orthogonal difference polarization images; MODP_LPCSP sorts the order of the principal components corresponding to the sampling points of the orthogonal difference images across angles; and MODP_LDBP describes the local texture of the polarization difference image. Experiments on self-built datasets covering three types of incident light and three detection zenith angles demonstrate that our method captures polarization texture details and is suitable for both polarized and unpolarized illumination. Compared with existing LBP algorithms, our algorithm effectively extracts polarized texture characteristics while offering low characteristic dimension and high classification performance. However, our method, like traditional texture characteristics in general, is better suited than deep learning characteristics to texture classification on small-scale databases under controlled parameter settings. In future work, we will explore combining traditional texture characteristics with deep models to improve discrimination and reduce the cost of deep characteristic learning.

Funding

Science and Technology Development Plan Project of Jilin Province, China (20220508152RC); Natural Science Foundation of Chongqing (cstc2021jcyj-msxmX0145); Project of Industrial Technology Research and Development in Jilin Province (2023C031-3).

Disclosures

The authors declare that there are no conflicts of interest related to this paper.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. J. Lu, V. E. Liong, and J. Zhou, "Simultaneous local binary feature learning and encoding for homogeneous and heterogeneous face recognition," IEEE Trans. Pattern Anal. Mach. Intell. 40(8), 1979–1993 (2018).

2. T. Song, H. Li, F. Meng, Q. Wu, and J. Cai, "LETRIST: locally encoded transform feature histogram for rotation-invariant texture classification," IEEE Trans. Circuits Syst. Video Technol. 28(7), 1565–1579 (2018).

3. X. Qi, R. Xiao, C.-G. Li, Y. Qiao, J. Guo, and X. Tang, "Pairwise rotation invariant co-occurrence local binary pattern," IEEE Trans. Pattern Anal. Mach. Intell. 36(11), 2199–2213 (2014).

4. T. Randen and J. Husoy, "Filtering for texture classification: a comparative study," IEEE Trans. Pattern Anal. Mach. Intell. 21(4), 291–310 (1999).

5. R. M. Haralick, "Statistical and structural approaches to texture," Proc. IEEE 67(5), 786–804 (1979).

6. R. Antosik, D. R. Scott, and G. M. Flachs, "Markov random field texture models for classification," Proc. SPIE (1990).

7. M. Unser, "Texture classification and segmentation using wavelet frames," IEEE Trans. Image Process. 4(11), 1549–1560 (1995).

8. D. A. Clausi and M. Ed Jernigan, "Designing Gabor filters for optimal texture separability," Pattern Recognit. 33(11), 1835–1849 (2000).

9. T. Ojala, M. Pietikainen, and T. Maenpaa, "Multiresolution gray-scale and rotation invariant texture classification with local binary patterns," IEEE Trans. Pattern Anal. Mach. Intell. 24(7), 971–987 (2002).

10. X. Tan and B. Triggs, "Enhanced local texture feature sets for face recognition under difficult lighting conditions," IEEE Trans. Image Process. 19(2), 374–383 (2010).

11. M. A. Akhloufi and A. Bendada, "Locally adaptive texture features for multispectral face recognition," in 2010 IEEE International Conference on Systems, Man and Cybernetics (2010), pp. 3308–3314.

12. J. Ren, X. Jiang, and J. Yuan, "Noise-resistant local binary pattern with an embedded error-correction mechanism," IEEE Trans. Image Process. 22(10), 4049–4060 (2013).

13. Z. Guo, L. Zhang, and D. Zhang, "A completed modeling of local binary pattern operator for texture classification," IEEE Trans. Image Process. 19(6), 1657–1663 (2010).

14. Y. Zhao, D.-S. Huang, and W. Jia, "Completed local binary count for rotation invariant texture classification," IEEE Trans. Image Process. 21(10), 4492–4497 (2012).

15. T. Song, J. Feng, L. Luo, C. Gao, and H. Li, "Robust texture description using local grouped order pattern and non-local binary pattern," IEEE Trans. Circuits Syst. Video Technol. 31(1), 189–202 (2021).

16. X. Shu, H. Pan, J. Shi, X. Song, and X.-J. Wu, "Using global information to refine local patterns for texture representation and classification," Pattern Recognit. 131, 108843 (2022).

17. Y. Luo, J. Sa, Y. Song, H. Jiang, C. Zhang, and Z. Zhang, "Texture classification combining improved local binary pattern and threshold segmentation," Multimed. Tools Appl. 82(17), 25899–25916 (2023).

18. T. Song and H. Li, "WaveLBP based hierarchical features for image classification," Pattern Recognit. Lett. 34(12), 1323–1328 (2013).

19. Y. Ma, L. Deng, X. Chen, and N. Guo, "Integrating orientation cue with EOH-OLBP-based multilevel features for human detection," IEEE Trans. Circuits Syst. Video Technol. 23(10), 1755–1766 (2013).

20. A. Kalra, V. Taamazyan, S. K. Rao, K. Venkataraman, R. Raskar, and A. Kadambi, "Deep polarization cues for transparent object segmentation," in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2020), pp. 8599–8608.

21. K. P. Gurton, A. J. Yuffa, and G. W. Videen, "Enhanced facial recognition for thermal imagery using polarimetric imaging," Opt. Lett. 39(13), 3857–3859 (2014).

22. J.-A. Liang, X. Wang, Y.-J. Fang, J.-J. Zhou, S. He, and W.-Q. Jin, "Water surface-clutter suppression method based on infrared polarization information," Appl. Opt. 57(16), 4649–4658 (2018).

23. E. Pirard, S. Lebichot, and W. Krier, "Particle texture analysis using polarized light imaging and grey level intercepts," Int. J. Miner. Process. 84(1-4), 299–309 (2007).

24. B. Yuan, B. Xia, and D. Zhang, "Polarization image texture feature extraction algorithm based on CS-LBP operator," Procedia Comput. Sci. 131, 295–301 (2018).

25. Y. Liu, Y. Dong, L. Si, R. Meng, Y. Dong, and H. Ma, "Comparison between image texture and polarization features in histopathology," Biomed. Opt. Express 12(3), 1593–1608 (2021).

26. S. Oprisescu, R.-M. Coliban, and M. Ivanovici, "Polarization-based optical characterization for color texture analysis and segmentation," Pattern Recognit. Lett. 163, 74–81 (2022).

27. F. Wang, S. Ainouz, C. Lian, and A. Bensrhair, "Multimodality semantic segmentation based on polarization and color images," Neurocomputing 253, 193–200 (2017).

28. C. S. L. Chun, "Remote sensing using passive infrared Stokes parameters," Opt. Eng. 43(10), 2283–2291 (2004).

29. W. Zhi-she, Y. Feng-bao, P. Zhi-hao, C. Lei, and J. Li-e, "Multi-sensor image enhanced fusion algorithm based on NSST and top-hat transformation," Optik 126(23), 4184–4190 (2015).

30. K. P. Gurton and R. Dahmani, "Effect of surface roughness and complex indices of refraction on polarized thermal emission," Appl. Opt. 44(26), 5361–5367 (2005).

31. J.-H. Zhang, Y. Zhang, and Z. Shi, "Long-wave infrared polarization feature extraction and image fusion based on the orthogonality difference method," J. Electron. Imaging 27(02), 023021 (2018).

32. S. Mo, J. Duan, W. Zhang, X. Wang, J. Liu, and X. Jiang, "Multi-angle orthogonal differential polarization characteristics and application in polarization image fusion," Appl. Opt. 61(32), 9737–9748 (2022).

33. N. Li, Y. Zhao, Q. Pan, and S. G. Kong, "Demosaicking DoFP images using Newton's polynomial interpolation and polarization difference model," Opt. Express 27(2), 1376–1391 (2019).

34. Z. Wang, B. Fan, and F. Wu, "Local intensity order pattern for feature description," in 2011 International Conference on Computer Vision (2011), pp. 603–610.





