
Definition of an error map for DoFP polarimetric images and its application to retardance calibration

Open Access

Abstract

With the recent development of division of focal plane (DoFP) polarization sensors, it is possible to perform polarimetric analysis of a scene with a reduced number of acquisitions. One drawback of these sensors is that polarization estimation can be perturbed by the spatial variations of the scene. We thus propose a method to compute a map that indicates where polarization estimation can be trusted in the image. It is based on two criteria: the consistency between the intensity measurements inside a super-pixel and the detection of spatial intensity variations. We design both criteria so that a constant false alarm rate can be set. We demonstrate the benefit of this method to improve the precision of dynamic retardance calibration of DoFP-based full Stokes imaging systems.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Polarimetric imaging systems can reveal contrasts that remain invisible in conventional intensity imaging [1–4]. Among the many existing architectures of polarimetric imagers, division of focal plane (DoFP) sensors make it possible to measure the Stokes vector with a reduced number of acquisitions. In particular, DoFP sensors based on linear polarizers engraved directly on the pixels have recently been developed at an industrial scale [5]. With such sensors, a single acquisition is needed to estimate the linear Stokes vector [6–10]. In order to measure the full Stokes vector, one can use a retarder in front of the camera and perform at least two acquisitions [11–14]. If at least three acquisitions are performed, it is also possible to dynamically calibrate the retarder during acquisition [12,15–17].

With this sensor architecture, each pixel provides a specific analysis of the polarization state at each spatial localization. Thus, it is necessary to use the information delivered by four neighboring pixels in order to estimate the linear Stokes vector. Since these pieces of information are obtained at different spatial localizations, this may lead to errors. These errors are not distributed homogeneously over the image. They are particularly present in areas of the scene where there are fast spatial variations of the polarization state.

A lot of work has been done to correct these errors by designing efficient interpolation methods [18–25], often supported by image processing algorithms [26,27]. However, in some applications, rather than correcting the errors, it is important to determine where they occur in the image, that is, the locations where polarization estimation is unreliable. For that purpose, we propose a method to compute an error map of the super-pixels at which polarization estimation can be considered inaccurate. We first introduce a criterion based on information redundancy that enables one to detect incompatibilities between the measurements within a super-pixel. We then supplement this criterion with edge detection on the intensity measurements. We demonstrate the error assessment ability of this approach and illustrate its performance for dynamic calibration of retardance in a full Stokes polarization imaging setup.

2. Modeling of polarization measurements with a DoFP sensor

DoFP sensors use a grid of “super-pixels”, each one being composed of four pixels with four different polarizers engraved on them with orientations at $0^\circ$, $45^\circ$, $90^\circ$ and $135^\circ$. One super-pixel corresponds to $N_{\text {sp}} = 4$ measurements per acquisition. This number is sufficient to estimate the linear Stokes vector. In order to measure the full Stokes vector, one has to perform extra acquisitions with a linear retarder in front of the sensor. If $N_{\text {acq}}$ such acquisitions are performed, one has $N_{\text {mes}} = N_{\text {sp}} \times N_{\text {acq}}$ measurements for a super-pixel. If one groups these measurements in a $N_{\text {mes}}$-dimensional vector $\mathbf {I}$, the acquisition process can be modeled as follows:

$$\mathbf{I} = \mathbb{W}\mathbf{S} ,$$
where $\mathbf {S} = (S_0, S_1, S_2, S_3)^T$ is the incident Stokes vector on the super-pixel. Only its first $K = 3$ elements are considered if one measures the linear Stokes vector, and all $K = 4$ elements if one measures the full Stokes vector. The measurement matrix $\mathbb {W}$ describes the acquisition process of the super-pixel, possibly for the $N_{\text {acq}}$ positions of the retarder. Its dimension is $N_{\text {mes}} \times K$. Note that if a retarder is used, the measurement matrix depends on its retardance $\delta$ and should be written $\mathbb {W}(\delta )$. We will use this explicit writing of the parameter $\delta$ only in Section 5, where calibration of the retardance $\delta$ is performed. The incident Stokes vector is estimated from the measurements as:
$$\mathbf{\hat{S}} = \mathbb{W}^+ \mathbf{I} ,$$
where $\mathbf {\hat {S}}$ is the estimated Stokes vector and the superscript $^+$ denotes the Moore-Penrose pseudo-inverse. This model assumes that the polarization state is uniform over the super-pixel. However, most of the time, this is not the case and each pixel within the super-pixel receives a different incident Stokes vector, which induces a bias in the estimation of the polarization state. The objective of the present article is to define a criterion to decide, in each super-pixel of the image, whether this bias is significant with respect to measurement noise.
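
The forward model of Eq. (1) and the pseudo-inverse estimator of Eq. (2) can be sketched in a few lines. The sketch below assumes an ideal super-pixel with polarizers at $0^\circ$, $45^\circ$, $90^\circ$ and $135^\circ$; the numerical values of $\mathbb{W}$ and $\mathbf{S}$ are illustrative, not taken from the article:

```python
import numpy as np

# Minimal sketch (not the authors' code): linear Stokes estimation with
# an ideal DoFP super-pixel.  Each row of W analyzes a polarizer at
# 0°, 45°, 90°, 135°; S = (S0, S1, S2) is the linear Stokes vector.
W = 0.5 * np.array([
    [1.0,  1.0,  0.0],   # 0°
    [1.0,  0.0,  1.0],   # 45°
    [1.0, -1.0,  0.0],   # 90°
    [1.0,  0.0, -1.0],   # 135°
])

S_true = np.array([1.0, 0.3, -0.2])   # assumed incident Stokes vector
I = W @ S_true                        # noiseless intensities, Eq. (1)

S_hat = np.linalg.pinv(W) @ I         # Moore-Penrose estimate, Eq. (2)
print(np.allclose(S_hat, S_true))     # True in the noiseless, homogeneous case
```

In the noiseless, homogeneous case the pseudo-inverse recovers the incident Stokes vector exactly, since $\mathbb{W}^+\mathbb{W}$ is the identity when $\mathbb{W}$ has full column rank.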

3. General redundancy criterion

One way to determine if the measurements within a super-pixel are consistent is to use the redundancy in the information contained in those measurements [13,25]. For example, if the sensor is used without a retarder, one has four measurements to estimate the linear Stokes vector, which is composed of three parameters. Hence, these measurements are redundant. Let us denote by $I_i$ the intensity measured by a pixel, with $i \in \{0, 45, 90, 135\}$ corresponding to the orientation of the polarizer engraved on the pixel. Due to the redundancy, there are different ways to estimate the light intensity $S_0$ on the super-pixel: $S_0 = I_{0} + I_{90} = I_{45} + I_{135}$. If the measurements on the super-pixel are not consistent, we may have $R_{\text {lin}} = I_{0} + I_{90} - (I_{45} + I_{135}) \neq 0$. This gives us a criterion which evaluates the validity of the estimation of the linear Stokes vector in each super-pixel. It has been used to assess the quality of polarization measurements from DoFP images [25] and to improve demosaicking algorithms [19,24,28]. We propose to generalize this criterion to take into account imperfections of the DoFP sensor and the estimation of the full Stokes vector using a retarder and several acquisitions. The DoFP sensor can be calibrated through the estimation of its measurement matrix $\mathbb {W}$, as described in [9].

3.1 Definition of $\mathbf {R}$

The first step is to consider the singular value decomposition (SVD) of $\mathbb {W}$:

$$\mathbb{W} = \mathbb{UDV}^T ,$$
where $\mathbb {U}$ and $\mathbb {V}$ are unitary matrices of respective dimensions $N_{\text {mes}} \times N_{\text {mes}}$ and $K \times K$. The matrix $\mathbb {D}$ has dimensions $N_{\text {mes}} \times K$; its first $K \times K$ block is diagonal and contains the singular values, and its last $N_{\text {mes}} - K$ rows contain only zeros. The superscript $^T$ denotes transposition. Substituting Eq. (3) into Eq. (1) yields:
$$\mathbb{U}^T \mathbf{I} = \mathbb{DV}^T \mathbf{S} .$$

Due to the structure of the matrix $\mathbb {D}$, the last $N_{\text {mes}}-K$ elements of the vector $\mathbb {U}^T \mathbf {I}$ are null in the ideal case, that is, when the incident Stokes vector is homogeneous within the super-pixel and there is no noise. Let us define $\mathbb {U}_R$ as the matrix composed of the last $N_{\text {mes}}-K$ columns of the matrix $\mathbb {U}$, and:

$$\mathbf{R} = \mathbb{U}_R^T\mathbf{I} .$$

In the ideal case, all the elements of the vector $\mathbf {R}$ are null. As a particular case, when measuring the linear Stokes vector, $\mathbf {R}$ is a scalar. If the sensor is assumed to have its nominal polarization properties, the SVD of $\mathbb {W}$ yields the matrix $\mathbb {U}_R$, which in that case is a vector: $\mathbb {U}_R = \frac {1}{2}(-1, 1, -1, 1)^T$. Then Eq. (5) leads to $\mathbf {R} = \frac {1}{2}( -I_{0} + I_{45} - I_{90} + I_{135}) = -\frac {1}{2} R_{\text{lin}}$, which is the standard expression of the redundancy parameter [25]. Thus, the parameter ${\mathbf R}$ defined in Eq. (5) is a generalization of this redundancy criterion: it takes into account imperfections of the measurement matrix $\mathbb {W}$ and can be applied to full Stokes imaging systems.
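
A minimal sketch of this construction, assuming the same ideal measurement matrix as above (a calibrated $\mathbb{W}$ could be substituted; the intensity values are arbitrary):

```python
import numpy as np

# Sketch of the redundancy vector R of Eq. (5), assuming an ideal
# measurement matrix W (rows: polarizers at 0°, 45°, 90°, 135°).
W = 0.5 * np.array([[1, 1, 0], [1, 0, 1], [1, -1, 0], [1, 0, -1]], float)
N_mes, K = W.shape

U, _, _ = np.linalg.svd(W, full_matrices=True)   # SVD of Eq. (3)
U_R = U[:, K:]                                   # last N_mes - K columns

# U_R spans the left null space of W: U_R^T W = 0, hence <R> = 0 (Eq. 7).
print(np.allclose(U_R.T @ W, 0.0))

I = np.array([0.60, 0.45, 0.25, 0.35])           # arbitrary intensities
R = U_R.T @ I                                    # redundancy criterion, Eq. (5)

# For this ideal W, R reduces (up to the sign of the singular vector)
# to (-I0 + I45 - I90 + I135) / 2.
R_lin_half = 0.5 * (-I[0] + I[1] - I[2] + I[3])
print(np.isclose(abs(R[0]), abs(R_lin_half)))
```

Note that the SVD fixes singular vectors only up to sign, so the comparison is made on absolute values.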

3.2 Statistics of $\mathbf {R}$

In practice, the intensity measurements $\mathbf {I}$ are perturbed by photon noise due to the input light and by additive noise mainly due to the sensor. In normal acquisition conditions, photon noise is highly dominant, and we will consider only this noise source. Moreover, its mean value is almost always high enough to consider it equivalent to Gaussian noise with variance equal to its mean. Hence, each element $I_n$, $n \in [1,N_{\text {mes}}]$, of the measurement vector $\mathbf {I}$ will be modeled as a Gaussian random variable of mean $\langle I_n \rangle = \mathbf {w}_n^T {\mathbf S}$ and variance $\langle I_n \rangle$, where $\langle \cdot \rangle$ denotes the ensemble mean and the vector $\mathbf {w}_n^T$ is the $n^{\text {th}}$ row of the matrix $\mathbb {W}$. Since the measurements are statistically independent, the covariance matrix $\Gamma ^{\mathbf {I}}=\langle (\mathbf {I} - \langle \mathbf {I}\rangle )(\mathbf {I} - \langle \mathbf {I}\rangle )^T \rangle$ of the vector ${\mathbf I}$ is diagonal, with diagonal elements equal to:

$$\Gamma^{\mathbf{I}}(n,n) = \langle I_n \rangle = \mathbf{w}_n^T \mathbf{S} .$$

From Eq. (5), we deduce that $\mathbf {R}$ is also a vector of Gaussian random variables. Since, by definition, $\mathbb {U}_R^T \mathbb {W} = 0$, its mean value is equal to:

$$\langle\mathbf{R}\rangle = \mathbb{U}_R^T \langle\mathbf{I}\rangle = \mathbb{U}_R^T \mathbb{W} \mathbf{S} = 0 ,$$
and its covariance matrix has the following expression:
$$\Gamma^{\mathbf{R}} = \langle (\mathbf{R} - \langle\mathbf{R}\rangle)(\mathbf{R} - \langle\mathbf{R}\rangle)^T \rangle = \mathbb{U}_R^T \Gamma^{\mathbf{I}} \mathbb{U}_R ,$$
where $\Gamma ^{\mathbf {I}}$ is defined in Eq. (6). The variances of the elements of the vector $\mathbf {R}$ are the diagonal elements of its covariance matrix $\Gamma ^{\mathbf {R}}$. They can be written explicitly using Eq. (8) and Eq. (6):
$$\Gamma^{\mathbf{R}}(i,i) = \sum_{n=1}^{N_{\text{mes}}} [\mathbb{U}_R(n,i)]^2 \Gamma^{\mathbf{I}}(n,n) = \sum_{n=1}^{N_{\text{mes}}} [\mathbb{U}_R(n,i)]^2 \mathbf{w}_n^T \mathbf{S} .$$

One can express the vector of the variances of $\mathbf {R}$ from Eq. (9) as:

$$\mbox{VAR}[\mathbf{R}] = \mathbb{Q}^T \mathbf{S} ,$$
with $\mathbb {Q} = \mathbb {W}^T (\mathbb {U}_R \odot \mathbb {U}_R)$, where $\odot$ denotes element-wise multiplication. Thus, in the absence of any perturbation other than measurement noise, each element $R_i$ of the vector $\mathbf {R}$ can be considered a Gaussian random variable of mean 0 and variance $\mathbf {q}_i^T \mathbf {S}$, where the vector $\mathbf {q}_i$ is the $i^{\text {th}}$ column of the matrix $\mathbb {Q}$.
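
This noise propagation can be checked numerically. The sketch below, assuming an ideal $\mathbb{W}$ and an arbitrary bright Stokes vector, compares the variance of $\mathbf{R}$ predicted through $\mathbb{Q}$ with the empirical variance under simulated Poisson (photon) noise:

```python
import numpy as np

# Monte-Carlo check of Eq. (10): the variance of R predicted from Q
# matches the empirical variance of R under Poisson noise.
rng = np.random.default_rng(0)

W = 0.5 * np.array([[1, 1, 0], [1, 0, 1], [1, -1, 0], [1, 0, -1]], float)
N_mes, K = W.shape
U, _, _ = np.linalg.svd(W, full_matrices=True)
U_R = U[:, K:]

Q = W.T @ (U_R * U_R)              # chosen so that VAR[R] = Q^T S
S = np.array([1e4, 3e3, -2e3])     # bright, partially polarized state
mean_I = W @ S                     # noiseless mean intensities (all > 0)

# Photon noise: each I_n is Poisson with variance equal to its mean.
I_samples = rng.poisson(mean_I, size=(200_000, N_mes))
R_samples = I_samples @ U_R        # R = U_R^T I for each noisy draw

var_pred = Q.T @ S                 # predicted variance, Eq. (10)
var_emp = R_samples.var(axis=0)    # empirical variance
print(np.allclose(var_emp, var_pred, rtol=0.05))
```

For this ideal $\mathbb{W}$, the predicted variance reduces to one quarter of the total mean intensity on the super-pixel, as expected from $\mathbb{U}_R = \pm\frac{1}{2}(-1,1,-1,1)^T$.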

3.3 Comparison with the standard redundancy criterion

Even in the linear polarization estimation scenario, the criterion $\mathbf {R}$ (a scalar rather than a vector in this case) is more general than the difference of intensities $R_{\text {lin}} = I_0 + I_{90} - I_{45} - I_{135}$, since it takes into account the actual response of the sensor through the measurement matrix $\mathbb {W}$. In order to illustrate this benefit, we shall compare these two criteria on several polarimetric images acquired with a DoFP camera manufactured by LUCID Vision Labs and equipped with the SONY IMX250MZR CMOS sensor. For the computation of the parameter $\mathbf {R}$, we used a calibrated measurement matrix $\mathbb {W}$ obtained with the method described in [9]. The measurement matrices of each super-pixel of the sensor have been estimated, but since they are very similar, we globally characterize the sensor by a single average matrix:

$$\mathbb{W} = \frac{1}{2} \begin{bmatrix} 1.000 & 0.988 & 0.005 \\ 1.015 & {-}0.001 & 1.004 \\ 0.979 & {-}0.966 & 0.004 \\ 1.006 & 0.016 & {-}0.987 \end{bmatrix} .$$

We can derive from this matrix the average parameters of the polarimetric sensor. We have gathered them in Table 1, where $\mathbf {\phi }$ denotes the orientations of the polarizers, $\mathbf {t}$ their relative transmissions and $\mathbf {d}$ their diattenuations. We observe that the relative transmissions of the polarizers oriented at $0^\circ$ and $90^\circ$ are less than or equal to one, and those of the polarizers oriented at $45^\circ$ and $135^\circ$ are greater than one. Thus we expect a negative mean value of $R_{\text {lin}}$, as the intensities $I_{0}$ and $I_{90}$ will be underestimated while $I_{45}$ and $I_{135}$ will be overestimated.


Table 1. Mean parameters for the four different pixels constituting the sensor. $\mathbf {\phi }$ denotes the orientations of the polarizers, $\mathbf {t}$ their relative transmissions and $\mathbf {d}$ their diattenuations.

Figure 1 represents the intensity components $S_0$ of 5 images acquired with this DoFP camera, and Fig. 2 shows the histograms of the values of $R_{\text {lin}}$ and $\mathbf {R}$ computed for all the super-pixels of these 5 images. We observe that the histograms of $R_{\text {lin}}$ are not centered on zero, are strongly asymmetric, and, as expected, have negative mean values. In sharp contrast, the histograms of $\mathbf {R}$ are centered on zero and symmetric. Furthermore, we have reported in Table 2 the ratios of the mean over the standard deviation of both criteria in each image. These ratios are always non-negligible in the case of $R_{\text {lin}}$, whereas they are significantly lower for $\mathbf {R}$. These results show that $\mathbf {R}$ follows the statistics expected from the theoretical analysis in Section 3.2: it is centered on zero and symmetric. This is not the case for $R_{\text {lin}}$, which is not centered and significantly asymmetric. This means that $R_{\text {lin}}$, which assumes that the sensor is ideal, is significantly biased by the actual imperfections of the sensor. Consequently, the criterion $\mathbf {R}$ is more reliable for characterizing the quality of polarization estimation.


Fig. 1. Intensities $S_0$ of the 5 images used to study the statistics of $R_{\text {lin}}$ and $\mathbf {R}$.


Fig. 2. Histograms of $R_{\text {lin}}$ (left) and $\mathbf {R}$ (right) computed from the 5 images of Fig. 1.


Table 2. Ratio of the mean over the standard deviation of $R_{\text {lin}}$ (left column) and $\mathbf {R}$ (right column) for each of the 5 images displayed in Fig. 1.

3.4 Normalization and thresholding of $\mathbf {R}$

Our objective is to use the vector $\mathbf {R}$ as a criterion to decide whether the estimation of the polarization state at a given location in a polarization image is reliable. For that purpose, we compute the value of $\mathbf {R}$ at each super-pixel and compare its norm to a threshold. If the threshold is exceeded, the estimate of the polarization state at this super-pixel is declared unreliable. However, as the variances $\text {VAR}[\mathbf {R}]$ depend on $\mathbf {S}$, it is not easy to determine a threshold common to all the super-pixels of the image. Thus we propose to derive from $\mathbf {R}$ a criterion which does not depend on $\mathbf {S}$, in order to define a Constant False Alarm Rate (CFAR) detector. To this end, we define the vector $\mathbf {T}$ with elements:

$$\mathbf{T}(i) = \frac{\mathbf{R}(i)}{\sqrt{\text{VAR}[\mathbf{R}(i)]}} = \frac{\mathbf{R}(i)}{\sqrt{\mathbf{q}_i^T \mathbf{S}}} .$$

These elements are Gaussian random variables with zero mean and unit variance. Moreover, it is easily shown that they are statistically independent. Hence, the squared norm $\|{\mathbf T}\|^2 = \sum _{i=1}^{N_{\text {mes}}-K}\mathbf {T}^2(i)$ follows a $\chi ^2$ law of order $N_{\text {mes}}-K$. It is then possible to threshold $\|{\mathbf T}\|^2$ at a constant false alarm rate to determine whether a super-pixel is perturbed by something other than measurement noise.

In practice, however, we do not have access to the true value of the incident Stokes vector $\mathbf {S}$ and thus cannot compute ${\mathbf T}$. We do have the estimate $\mathbf {\hat {S}}$ given by Eq. (2), which is unbiased ($\langle \mathbf {\hat {S}}\rangle = \mathbf {S}$) and has variance proportional to the intensity $S_0$. We can use it in place of ${\mathbf S}$ to compute $\text {VAR}[\mathbf {R}]$ in Eq. (9), and consider:

$$\mathbb{Q}^T \mathbf{\hat{S}} = \mathbb{Q}^T \mathbf{S} + \mathbf{a} ,$$
where $\mathbf {a}$ is a random vector whose elements have zero mean and variance proportional to the intensity $S_0$. The elements of the vector $\mathbf {T}$ can then be estimated as:
$$\mathbf{\hat{T}}(i) = \frac{\mathbf{R}(i)}{\sqrt{\mathbf{q}_i^T \mathbf{\hat{S}}}} = \frac{\mathbf{R}(i)}{\sqrt{\mathbf{q}_i^T \mathbf{S} + {a}(i)}} = \frac{\mathbf{R}(i)}{\sqrt{\mathbf{q}_i^T \mathbf{S}\left(1 + \frac{\mathbf{a}(i)}{\mathbf{q}_i^T \mathbf{S}}\right)}} \simeq \mathbf{T}(i)\left(1 - \frac{1}{2}\frac{{a}(i)}{\mathbf{q}_i^T \mathbf{S}}\right) ,$$
where ${a}(i) \sim \text {STD}[\mathbf {q}_i^T \mathbf {\hat {S}}] \sim \sqrt {S_0}$ and $\mathbf {q}_i^T \mathbf {S} \sim S_0$ thus $\frac {{a}(i)}{\mathbf {q}_i^T \mathbf {S}} \sim \frac {1}{\sqrt {S_0}}$. In practice, in standard illumination conditions, $S_0$ is almost always high enough to consider that $\frac {{a}(i)}{\mathbf {q}_i^T \mathbf {S}} \ll 1$. In conclusion, we can assume the random variable $\|\hat {{\mathbf T}}\|^2$ to follow a $\chi ^2(N_{\text {mes}}-K)$ law. As a consequence, the threshold $th(P_{\text {fa}})$ that makes it possible to reach a false alarm rate equal to $P_{\text {fa}}$ is equal to:
$$th(P_{\text{fa}}) = \arg\underset{x}{\min}\left[|1 - \text{CDF}_{\chi^2(N_{\text{mes}}-K)}(x) - P_{\text{fa}}|\right] ,$$
where $\text {CDF}_{\chi ^2(N_{\text {mes}}-K)}$ denotes the cumulative distribution function of the $\chi ^2(N_{\text {mes}}-K)$ law.
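
Since the CDF of a $\chi^2$ law is monotonically increasing, the arg-min of Eq. (14) reduces to the inverse CDF (percent-point function) evaluated at $1 - P_{\text{fa}}$. A sketch using SciPy, with a Monte-Carlo check of the false alarm rate under $H_0$ (the sample size and $P_{\text{fa}}$ value are illustrative):

```python
import numpy as np
from scipy.stats import chi2

# Sketch of the CFAR threshold of Eq. (14): the arg-min over x of
# |1 - CDF(x) - P_fa| is the inverse CDF at 1 - P_fa.
def cfar_threshold(P_fa, N_mes, K):
    return chi2.ppf(1.0 - P_fa, df=N_mes - K)

# Single-acquisition linear Stokes case: N_mes = 4, K = 3, so df = 1.
P_fa = 1e-3
th = cfar_threshold(P_fa, 4, 3)

# Monte-Carlo sanity check: under H0 each T(i) is standard normal, so
# ||T||^2 is chi2(1) and the empirical alarm rate matches P_fa.
rng = np.random.default_rng(1)
T2 = rng.standard_normal(1_000_000) ** 2
print(np.isclose(np.mean(T2 > th), P_fa, atol=2e-4))
```

The same function gives the threshold for the full Stokes configuration by changing $N_{\text{mes}}$ and $K$ (e.g. `cfar_threshold(1e-6, 12, 4)` for three acquisitions).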

3.5 Difficulties at intensity edges

If the parameter ${\mathbf R}$ is far from zero, this means that there is an inconsistency within the super-pixel. On the other hand, if it is close to zero, it does not necessarily mean that the Stokes vector is homogeneous within the super-pixel: there may be inhomogeneous regions that are not detected by the criterion defined in the previous section. Indeed, we have noticed that this criterion does not detect well pure intensity variations within a super-pixel, in particular in the case of estimation of the linear Stokes vector with a single acquisition. This fact is illustrated in Fig. 3, which shows a variation of intensity, represented in gray level, within a super-pixel. The notations in the pixels are the orientations of the polarizers, which are considered ideal. In this example, we consider unpolarized incident light with a horizontal intensity edge. The measured intensities are such that $I_{0} = I_{135}$ and $I_{90} = I_{45}$. As a consequence, the criterion $\mathbf {R}$ (proportional to $R_{\text {lin}}$ in the ideal case) is null. However, using these measurements in Eq. (2) leads to a significantly polarized estimated Stokes vector, which does not correspond to the actual unpolarized one. This scenario can be reproduced for any orientation of the intensity edge and thus highlights that the criterion ${\mathbf R}$ is not able to detect intensity variations when using a single DoFP acquisition.


Fig. 3. An example of intensity variation for which it is easily seen that the criterion $\mathbf {R}$ is null.


The global behaviour of this criterion is illustrated in Fig. 4 on the polarimetric images of two scenes illuminated with unpolarized light. The first one displays a plastic lamp (upper row) and the second one is composed of white cardboard with large letters printed in black (bottom row). The intensity image $S_0$ is represented on the left, and we have superimposed on it, in red, the error map obtained by thresholding $\|\hat {{\mathbf T}}\|^2$ with a false alarm rate of $P_{\text {fa}} = 10^{-3}$. We have also represented the images of the estimated degree of linear polarization $\text {DoLP} = \sqrt {\hat {S}_1^2 + \hat {S}_2^2} / \hat {S}_0$ (middle) and angle of polarization $\text {AoP} = \frac {1}{2} \arctan (\frac {\hat {S}_1}{\hat {S}_2})$ (right). We can observe on the left side of the lamp a reflection with high DoLP, where many pixels are declared unreliable (they appear in red on the $S_0$ image). A zoom on this area is displayed on the second row of Fig. 4. It is seen that the part of this polarized reflection that is detected as problematic on the error map contains complex spatial fluctuations of the polarization state. As this area has a high intensity and a high DoLP, which characterize a strong signal, these fluctuations are significant with respect to the noise. This induces inhomogeneity within the super-pixels, which is well detected by the redundancy criterion, as we can see on the error map. This illustrates the ability of this criterion to detect spatial variations of the polarization state.


Fig. 4. Images of the intensity coefficient $S_0$ with the error map in red (left), of the degree of linear polarization ’DoLP’ (middle) and of the angle of polarization ’AoP’ (right). The second row shows a zoom of the images of the first row. This zoom focuses on the polarized reflection on the lamp highlighted by the green rectangle.


Concerning the second scene (last row of Fig. 4), we can tell from the images of DoLP and AoP that there are errors in the estimation of the Stokes vector at the edges of the black letters. Indeed, at these edges, the estimated DoLP is significantly higher than in the rest of the image, without any physical reason, and the values of AoP have no physical meaning. These estimation errors are due to intensity variations within super-pixels, but they are not detected by the redundancy criterion. Thus, even if this criterion can detect many estimation errors, it may fail to detect the errors caused by intensity variations. As these variations are very common in images, an additional criterion is necessary to help detect them.

4. Detection of intensity variations

In order to detect estimation errors due to intensity variations within super-pixels, we use an intensity variation detector. As for the redundancy criterion, we want this detector to be CFAR, that is, independent of the polarization state actually present in the super-pixel, provided this polarization state is homogeneous.

4.1 Chosen model

To be independent of the polarization state, one can use the estimations of the intensity $S_0$ at super-pixels surrounding the super-pixel of interest, as illustrated in Fig. 5. By comparing these intensity values, we will decide whether there is a significant intensity variation within the super-pixel of interest. When the sensor is considered ideal, the estimation of $S_0$ is $\hat {S}_0 = \frac {1}{2 N_{\text {acq}}} \sum _{n=1}^{N_{\text {mes}}} I_n$, where the measurements $I_n$ follow Poisson statistics. Since the variable $2 N_{\text {acq}} \hat {S}_0$ also follows Poisson statistics, it is easier to consider it instead of $\hat {S}_0$. Thus we have a two-dimensional discrete signal $\mathbf {n}$ taking the values $(n_{i,j})$, $(i,j) \in [1,N]^2$, where each $n_{i,j}$ corresponds to an estimation of $2 N_{\text {acq}} \hat {S}_0$ and follows Poisson statistics. This signal forms an image of size $N \times N$ centered on the super-pixel of interest. A way to determine whether there is an intensity variation inside this image is to compare the likelihoods of the two hypotheses illustrated in Fig. 6:

  • $H_0$ : there is no variation of intensity: the signal follows a Poisson law of constant parameter $\lambda _0$;
  • $H_1$ : there is an intensity variation: the signal can be separated into four regions as shown in Fig. 5, and follows four different Poisson laws of parameters $\lambda _a$, $\lambda _b$, $\lambda _c$ and $\lambda _d$ in these four regions.

Detection of intensity variation within the considered set of super-pixels will be performed by thresholding the log-likelihood ratio of these two hypotheses. This log-likelihood ratio depends on the parameters $\lambda _k, k\in \{0,a,b,c,d\}$, which are nuisance parameters. Their maximum likelihood estimates $\hat {\lambda }_k$ are the empirical means over the corresponding regions of the signal shown in Fig. 5. Since each region covers one quarter of the window, they satisfy the relation $\hat {\lambda }_0 = \frac{1}{4}(\hat {\lambda }_a + \hat {\lambda }_b + \hat {\lambda }_c + \hat {\lambda }_d)$. Substituting the parameter $\lambda _0$ with its estimate $\hat {\lambda }_0$ in the expression of the log-likelihood of the hypothesis $H_0$, one obtains the pseudo log-likelihood:

$$\ell (\mathbf{n}|H_0) ={-}N^2 \hat{\lambda}_0 + N^2 \hat{\lambda}_0\ln(\hat{\lambda}_0) - \sum_{i=1}^{N}\sum_{j=1}^{N}\ln(n_{i,j}!) .$$

In the same way, we can express the pseudo log-likelihood of the hypothesis $H_1$:

$$\ell (\mathbf{n}|H_1) ={-}N^2\hat{\lambda}_0 + \frac{N^2}{4}\sum_{k=a,b,c,d} \hat{\lambda}_{k} \ln(\hat{\lambda}_k) - \sum_{i=1}^{N}\sum_{j=1}^{N}\ln(n_{i,j}!) .$$


Fig. 5. The super-pixel of interest is at the center of a $4 \times 4$ pixel square (left). We describe the four regions of the signal where we consider the parameters $\lambda _a$, $\lambda _b$, $\lambda _c$ and $\lambda _d$ (right). The parameter $\lambda _0$ is the mean value of the whole signal.


Fig. 6. Hypotheses tested for intensity variation detection: under hypothesis $H_0$, there is no intensity variation (left); under hypothesis $H_1$, there is a variation of intensity (right). These data are synthetic and shown for illustration purposes only.


4.2 Likelihood ratio

We then compute the generalized log-likelihood ratio $\ln (\mathcal {R})$ by using Eqs. (15) and (16):

$$\ln(\mathcal{R}) = N^2 \left[ \frac{1}{4} \sum_k (\hat{\lambda}_k \ln(\hat{\lambda}_k)) - \frac{\sum_k \hat{\lambda}_k}{4} \ln\left(\frac{\sum_k \hat{\lambda}_k}{4}\right) \right] .$$

In order to set a threshold corresponding to a prescribed false alarm rate, we need to know the statistics of $\ln (\mathcal {R})$ under hypothesis $H_0$. Since it is difficult to obtain a closed-form expression of these statistics, we will determine an approximate form by assuming that under hypothesis $H_0$, the fluctuations are small. We thus consider that $\hat {\lambda }_k = \lambda _0 + d_k$, where $d_k$ is a zero-mean Gaussian random variable of variance $\lambda _0$ with $d_k \ll \lambda _0$, and $k \in \{a, b, c, d\}$ indicates the corresponding region of the signal. By substituting these expressions in Eq. (17) and using a second-order Taylor expansion (see Appendix A), we obtain:

$$\ln(\mathcal{R}) \simeq \frac{N^2}{8}\underbrace{\left[\sum_{k_1}\left( \frac{d_{k_1}}{\sqrt{\lambda_0}} - \frac{1}{4} \displaystyle\sum_{k_2} \frac{d_{k_2}}{\sqrt{\lambda_0}} \right)^2\right]}_{\mathcal{C}} .$$

As $\frac {d_{k}}{\sqrt {\lambda _0}}$ is a standard normal random variable, Cochran's theorem tells us that the criterion $\mathcal {C} = \frac {8}{N^2}\ln (\mathcal {R})$ follows a $\chi ^2$ law of order 3. It is then possible to obtain CFAR detection of intensity variation with prescribed $P_{\text {fa}}$ by applying to $\mathcal {C}$ a threshold determined by replacing $N_{\text {mes}}-K$ with $3$ in Eq. (14). Our tests showed no benefit in taking $N > 2$, since detection is then influenced by more signal coming from outside the super-pixel of interest. This method enables us to obtain a map of the super-pixels for which the estimation can be assumed to be perturbed by an intensity variation.
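
The detector can be sketched as follows. The function below takes the four empirical region means of Fig. 5 (the numerical values used here are hypothetical) and thresholds the criterion $\mathcal{C}$ at a prescribed $P_{\text{fa}}$:

```python
import numpy as np
from scipy.stats import chi2

# Sketch of the intensity-variation detector of Section 4.  lam_hat
# holds the empirical region means (lambda_a .. lambda_d); N is the
# side of the window in super-pixels (N = 2 here).
def glr_criterion(lam_hat, N=2):
    lam0 = lam_hat.mean()                          # lambda_0 = mean of the four
    lnR = N**2 * (np.sum(lam_hat * np.log(lam_hat)) / 4.0
                  - lam0 * np.log(lam0))           # Eq. (17)
    return 8.0 / N**2 * lnR                        # C, chi2(3) under H0

th = chi2.ppf(1.0 - 1e-3, df=3)                    # CFAR threshold, Eq. (14)

flat = np.array([100.0, 101.0, 99.0, 100.0])       # H0-like region means
edge = np.array([100.0, 100.0, 10.0, 10.0])        # strong intensity edge
print(glr_criterion(flat) < th, glr_criterion(edge) > th)
```

Homogeneous regions yield a criterion close to zero, while a strong intensity edge drives $\mathcal{C}$ far above the threshold.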

4.3 Combined error map

We can build a combined error map by using jointly the two error maps described above. The thresholds are set to yield the same false alarm rate $P_{\text {fa}}$ for both criteria. The false alarm rate to be chosen depends on the application; we set it here to $P_{\text {fa}} = 10^{-6}$. Because our camera has a resolution of $1224 \times 1024$ super-pixels, we expect on the order of one false alarm over the whole image. In Figs. 7 and 8, we display the combined error map superimposed on the $S_0$ image. The pixels detected only by the redundancy criterion appear in green and those detected only by the intensity variation detector in red. The pixels detected by both criteria are displayed in yellow.


Fig. 7. The polarimetric images used are the same as in Fig. 4. The two criteria are used to create the error maps. The redundancy criterion detection appears in green and the intensity variation detection in red. In yellow are the pixels where both criteria lead to an error detection.


Fig. 8. Scene composed of a plastic turtle ($S_0$ image) and the combined error map. The redundancy criterion detection appears in green and the intensity variation detection in red. In yellow are the pixels where both criteria lead to error detection.


Figure 7 shows the combined error map obtained on the images used in Fig. 4, where a single DoFP acquisition was performed for the estimation of the linear Stokes vector. In Section 3.5, we highlighted that in this case, the redundancy criterion does not detect well the estimation errors at intensity edges. Here, we see that the pixels in green or yellow are mainly located in the reflection on the lamp (left), where there is a varying polarization state. The edge regions appear in red on both images, which means that they are well detected by the intensity variation detector, particularly the edges around the black letters (right), where important estimation errors are localized. In conclusion, intensity variation detection compensates for the flaws of the redundancy criterion in those images, where most of the estimation errors are due to intensity variations.

Let us now consider full Stokes measurements using an achromatic quarter wave-plate (Thorlabs AQWP10M-580). We placed it in front of the camera and rotated it to three different orientations $[0^\circ, 60^\circ, 120^\circ ]$, so that $N_{\text {acq}}=3$ and $N_{\text {mes}}=12$. The scene is composed of a 3D printed plastic turtle lying on a plastic cube, on a background made of blank paper. Figure 8 displays the combined error map on the $S_0$ image. We keep the same color code for the error map as in Fig. 7. We observe on this image that the edges are detected by the intensity variation detector, as expected. The redundancy criterion mostly detects errors due to the texture of the material, such as on the blank paper, where the roughness of the paper leads to estimation errors at some locations.

5. Application to retardance autocalibration

Let us now demonstrate an application of the combined error map to help retardance autocalibration of the wave-plate used for full Stokes estimation with a linear DoFP camera. This autocalibration is possible if at least three acquisitions for three different angular positions of the wave-plate are used [12,15–17].

5.1 Retardance autocalibration

The autocalibration method consists in jointly estimating the retardance $\delta$ of the wave-plate and the Stokes vector $\mathbf {S}$ from the measurements of a super-pixel, exploiting the redundancy of the intensity measurements it contains. This is done by minimizing the criterion $\mathcal {F}(\delta )$:

$$\hat{\delta} = \arg\underset{\delta}{\min}[\mathcal{F}(\delta)] ,$$
with
$$\mathcal{F}(\delta) = \| [\mathbb{I}_d - \mathbb{W}(\delta)\mathbb{W}(\delta)^+]\mathbf{I} \|^2 ,$$
where $\mathbf {I}$ is the vector of measured intensities and $\mathbb {I}_d$ is the identity matrix. Then the Stokes vector is estimated by introducing the estimated retardance $\hat {\delta }$ into the measurement matrix $\mathbb {W}(\delta )$ of Eq. (2):
$$\hat{\mathbf{S}}=\mathbb{W}(\hat{\delta})^+ \mathbf{I} .$$

A single super-pixel is enough to perform this autocalibration. However, since the retardance $\delta$ can be assumed constant over the image, we propose to use the measurements from several super-pixels in order to obtain a more reliable estimation. To do so, we minimize the sum of the criteria obtained from $N$ different super-pixels:

$$\mathcal{F}_{\Sigma}(\delta) = \sum_{n=1}^{N} \| [\mathbb{I}_d - \mathbb{W}(\delta)\mathbb{W}(\delta)^+]\mathbf{I}_n \|^2 .$$

This leads to a more accurate estimation of the retardance $\delta$ thanks to averaging. We call this process "cumulative estimation".
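Cumulative estimation can be sketched as follows on simulated super-pixel intensities. This is a toy model: the rotating-retarder measurement matrix, the true retardance of $88^\circ$, the noise level, and the random Stokes vectors are all arbitrary choices of ours, and a simple grid search stands in for whatever 1D minimizer one prefers:

```python
import numpy as np

def retarder_mueller(theta, delta):
    # Mueller matrix of a linear retarder, fast axis at theta, retardance delta
    c2, s2 = np.cos(2 * theta), np.sin(2 * theta)
    cd, sd = np.cos(delta), np.sin(delta)
    return np.array([
        [1, 0, 0, 0],
        [0, c2**2 + s2**2 * cd, c2 * s2 * (1 - cd), -s2 * sd],
        [0, c2 * s2 * (1 - cd), s2**2 + c2**2 * cd, c2 * sd],
        [0, s2 * sd, -c2 * sd, cd],
    ])

def measurement_matrix(delta, thetas, phis):
    # analysis rows a(phi)^T M_ret(theta, delta), one per measurement
    rows = [0.5 * np.array([1, np.cos(2 * p), np.sin(2 * p), 0])
            @ retarder_mueller(t, delta)
            for t in thetas for p in phis]
    return np.array(rows)

def F_sum(delta, I_list, thetas, phis):
    # sum over super-pixels of the squared residual left after
    # projecting the intensities onto the range of W(delta)
    W = measurement_matrix(delta, thetas, phis)
    P = np.eye(W.shape[0]) - W @ np.linalg.pinv(W)
    return sum(np.linalg.norm(P @ I) ** 2 for I in I_list)

thetas = np.deg2rad([0, 60, 120])
phis = np.deg2rad([0, 45, 90, 135])
rng = np.random.default_rng(1)
delta_true = np.deg2rad(88.0)          # arbitrary "miscalibrated" retardance
# 20 super-pixels with random partially polarized states and additive noise
I_list = [measurement_matrix(delta_true, thetas, phis)
          @ np.array([1.0, *rng.uniform(-0.3, 0.3, 3)])
          + 1e-3 * rng.normal(size=12) for _ in range(20)]

# grid search for the retardance minimizing the cumulative criterion
grid = np.deg2rad(np.linspace(80, 96, 1601))
delta_hat = grid[np.argmin([F_sum(d, I_list, thetas, phis) for d in grid])]
# Stokes estimate for the first super-pixel, using the estimated retardance
S_hat = np.linalg.pinv(measurement_matrix(delta_hat, thetas, phis)) @ I_list[0]
print(round(np.rad2deg(delta_hat), 2))  # close to the true 88 deg
```

At this high simulated SNR, the recovered retardance lands within a small fraction of a degree of the true value; adding more super-pixels tightens the estimate further, which is the point of the cumulative criterion.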

It has been shown in [17] that, in the presence of additive noise of variance $\sigma ^2$, the precision of estimation of $\delta$ and $S_3$ depends on the reduced signal-to-noise ratio $\text {SNR}_{\delta } = \text {SNR} \times \text {DoLP}$, where $\text {SNR}=S_0/\sigma$. The greater $\text {SNR}_{\delta }$, the better the precision; for $\text {SNR}_{\delta } < 8$, autocalibration is not possible. Thus, to apply autocalibration, it is preferable to choose the $N$ super-pixels that maximize the estimated $\hat {\text {SNR}}_{\delta }$ while respecting the condition $\hat {\text {SNR}}_{\delta } > 8$. However, this criterion alone is not sufficient since, most of the time, the highest values of $\hat {\text {SNR}}_{\delta }$ occur at intensity edges, where polarization state estimation is inaccurate. We therefore use the error mapping strategy presented in the previous sections to discard these pixels and retain only those where estimation is reliable.
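This selection rule can be sketched as follows. The helper and its inputs are hypothetical: we assume per-super-pixel maps of $\hat S_0$ and of the estimated DoLP, a known noise level $\sigma$, and a boolean reliability mask derived from the combined error map:

```python
import numpy as np

def select_superpixels(S0_map, dolp_map, sigma, reliable_mask, N=1000):
    # Keep the N super-pixels with the largest reduced SNR_delta
    # = (S0 / sigma) * DoLP, restricted to those above the feasibility
    # threshold (> 8) and flagged reliable by the combined error map.
    snr_delta = (S0_map / sigma) * dolp_map
    valid = reliable_mask & (snr_delta > 8)
    flat = np.flatnonzero(valid)
    order = np.argsort(snr_delta.ravel()[flat])[::-1]  # best SNR first
    keep = flat[order[:N]]
    return np.unravel_index(keep, S0_map.shape)

# toy example: a bright polarized patch, half of it flagged unreliable
S0 = np.full((8, 8), 100.0)
dolp = np.zeros((8, 8)); dolp[2:6, 2:6] = 0.5
mask = np.ones((8, 8), bool); mask[:, :4] = False
rows, cols = select_superpixels(S0, dolp, sigma=1.0, reliable_mask=mask, N=10)
print(len(rows))   # 8: only the reliable half of the polarized patch qualifies
```

In the toy example, the unpolarized background fails the $\text{SNR}_\delta > 8$ test and the masked columns are excluded even though their SNR is high, mirroring the rejection of edge super-pixels by the error map.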

5.2 Application of the error map to retardance autocalibration

Figure 9 shows the result of this pixel selection method on the image of the 3D printed turtle shown in Fig. 8. In Fig. 9 (left), we have marked in red the $N=1000$ super-pixels in which $\hat {\text {SNR}}_{\delta }$ is maximal, and in green the 1000 super-pixels where estimation is reliable according to the combined error map and where $\hat {\text {SNR}}_{\delta }$ is maximal. One can see that these two sets of super-pixels are quite different. A significant part of the red super-pixels, chosen without using the error map, are located on edges, while the green super-pixels, chosen with the error map, are located on smooth areas with little spatial variation.


Fig. 9. A scene composed of a 3D printed plastic turtle. The super-pixels selected without taking the error map into account appear in red, and those selected using the error map appear in green. The locations of these super-pixels are shown on the intensity image (left). The results of the autocalibration using the two sets of super-pixels are displayed as histograms (right).


In Fig. 9 (right), we have displayed the histograms of the estimated retardances $\delta$ (one estimation per super-pixel) for the two sets, with the same color code. The legend gives the mean and standard deviation of both histograms, together with the cumulative estimations of $\delta$. It is clearly seen that when using the error map, autocalibration is more precise: the standard deviation is five times smaller and the cumulative estimation is closer to the nominal value $\delta _0 = 90^\circ$. Another interesting observation is that cumulative estimation is more robust to perturbations than simply computing the mean of the $N$ individual estimations. When the error map is not used, there is a significant difference between the two estimations: one obtains $83.15^\circ$ with the mean value and $85.79^\circ$ with cumulative estimation. When the error map is used, the data set is much more reliable and the two estimation methods give the same value, $87.68^\circ$.

6. Conclusion

We have defined a generalized criterion $\mathbf {R}$ to measure the inconsistency between the redundant measurements within a super-pixel of a DoFP polarimetric camera. Contrary to the standard definition of the redundancy criterion for ideal linear DoFP images, this new definition takes into account the actual measurement matrices of the pixels and can be used in full Stokes imaging strategies where several acquisitions are performed. However, this criterion may fail to detect inconsistencies due to intensity edges. We thus supplemented it with a second criterion based on an intensity variation detector. These two criteria have been normalized so as to provide CFAR detection of inconsistent super-pixels with a prescribed false alarm rate. Used jointly, they provide a combined error map that flags the super-pixels where estimation of the polarization state is not reliable. This map provides important information on the local distribution of quality in a DoFP polarization image. In particular, it is of great help for selecting the super-pixels used to perform retardance autocalibration in full Stokes imaging setups based on a DoFP camera and a rotating retarder. We showed that using this error map significantly improves the estimation precision of the retardance.

This work has many perspectives. The main one is to leverage the new combined error map to improve algorithms for demosaicking linear and full Stokes DoFP images [25].

Appendix A. Development of $\ln (\mathcal {R})$

In order to determine the statistics of the criterion $\ln (\mathcal {R})$, one can develop its expression from Eq. (17):

$$\begin{aligned} \ln(\mathcal{R}) &= N^2 \left[ \frac{1}{4} \sum_k \hat{\lambda}_k \ln(\hat{\lambda}_k) - \frac{\sum_k \hat{\lambda}_k}{4} \ln\left(\frac{\sum_k \hat{\lambda}_k}{4}\right) \right]\\ &= N^2 \left[ \frac{1}{4} \sum_k f(\hat{\lambda}_k) - f\left(\frac{\sum_k \hat{\lambda}_k}{4}\right) \right] , \end{aligned}$$
where $f(x) = x \ln (x)$. Using the expression $\hat {\lambda }_k = \lambda _0 - d_k$, one obtains:
$$\ln(\mathcal{R}) = N^2 \left[ \frac{1}{4} \sum_k f(\lambda_0 - d_k) - f\left(\lambda_0 - \frac{\sum_k d_k}{4}\right) \right] .$$

Considering $d_k \ll \lambda _0$ and $\frac {\sum _k d_k}{4} \ll \lambda _0$, a second order Taylor expansion of $f$ gives us:

$$f(\lambda_0 - d_k) = f(\lambda_0) - d_k f'(\lambda_0) + \frac{d_k^2}{2}f''(\lambda_0)$$
and
$$f\left(\lambda_0 - \frac{\sum_k d_k}{4}\right) = f(\lambda_0) - \frac{\sum_k d_k}{4} f'(\lambda_0) + \frac{1}{2} \left(\frac{\sum_k d_k}{4}\right)^2 f''(\lambda_0) .$$

Then, substituting Eqs. (25) and (26) into Eq. (24) and using the fact that $f''(x) = \frac {1}{x}$, one obtains:

$$\begin{aligned} \ln(\mathcal{R}) &\simeq \frac{N^2}{2 \lambda_0} \left[ \sum_k \frac{d_k^2}{4} - \left(\frac{1}{4}\sum_k d_k\right)^2 \right]\\ &= \frac{N^2}{8} \sum_{k_1}\left(\frac{d_{k_1}}{\sqrt{\lambda_0}} - \frac{1}{4}\sum_{k_2} \frac{d_{k_2}}{\sqrt{\lambda_0}} \right)^2 . \end{aligned}$$
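This quadratic approximation is easy to check numerically. The sketch below uses arbitrary values of $\hat\lambda_k$ close to their mean and takes $\lambda_0$ as that mean, so that $\sum_k d_k = 0$ and the inner mean term of the approximation vanishes:

```python
import numpy as np

N = 4                                            # super-pixel neighborhood size
lam_hat = np.array([100.0, 101.2, 99.1, 100.5])  # arbitrary region means
lam0 = lam_hat.mean()                            # with this choice, sum(d) = 0
d = lam0 - lam_hat

# exact value of ln(R) from the development above
exact = N**2 * (0.25 * np.sum(lam_hat * np.log(lam_hat)) - lam0 * np.log(lam0))
# second-order Taylor approximation
x = d / np.sqrt(lam0)
approx = (N**2 / 8) * np.sum((x - 0.25 * np.sum(x)) ** 2)
print(exact, approx)   # the two values agree to a fraction of a percent
```

The residual discrepancy is the third-order term, of relative size $\sim d_k/\lambda_0$, which justifies the approximation as long as the $d_k$ are small compared to $\lambda_0$.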

Funding

Agence de l'innovation de Défense.

Acknowledgments

The work reported in this study is supported in part by the Agence de l’Innovation de Défense (AID), which provides half of a PhD fellowship to Benjamin Le Teurnier.

Disclosures

The authors declare no conflicts of interest.

Data Availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. J. S. Tyo, D. L. Goldstein, D. B. Chenault, and J. A. Shaw, “Review of passive imaging polarimetry for remote sensing applications,” Appl. Opt. 45(22), 5453 (2006).

2. V. Thilak, D. G. Voelz, and C. D. Creusere, “Polarization-based index of refraction and reflection angle estimation for remote sensing applications,” Appl. Opt. 46(30), 7527 (2007).

3. M. Foldyna, A. D. Martino, R. Ossikovski, E. Garcia-Caurel, and C. Licitra, “Characterization of grating structures by Mueller polarimetry in presence of strong depolarization due to finite spot size,” Opt. Commun. 282(5), 735–741 (2009).

4. T. Mu, D. Bao, F. Han, Y. Sun, Z. Chen, Q. Tang, and C. Zhang, “Optimized design, calibration, and validation of an achromatic snapshot full-Stokes imaging polarimeter,” Opt. Express 27(16), 23009 (2019).

5. Sony Semiconductor Solutions Corporation, “Polarization image sensor with four-directional on-chip polarizer and global shutter function,” (2021), https://www.sony-semicon.co.jp/e/products/IS/industry/product/polarization.html.

6. S. B. Powell and V. Gruev, “Calibration methods for division-of-focal-plane polarimeters,” Opt. Express 21(18), 21040 (2013).

7. Z. Chen, X. Wang, and R. Liang, “Calibration method of microgrid polarimeters with image interpolation,” Appl. Opt. 54(5), 995 (2015).

8. B. Feng, Z. Shi, H. Liu, L. Liu, Y. Zhao, and J. Zhang, “Polarized-pixel performance model for DoFP polarimeter,” J. Opt. 20(6), 065703 (2018).

9. S. Roussel, M. Boffety, and F. Goudail, “Polarimetric precision of micropolarizer grid-based camera in the presence of additive and Poisson shot noise,” Opt. Express 26(23), 29968 (2018).

10. C. Lane, D. Rode, and T. Rösgen, “Calibration of a polarization image sensor and investigation of influencing factors,” Appl. Opt. 61(6), C37 (2022).

11. J. Qi, C. He, and D. S. Elson, “Real time complete Stokes polarimetric imager based on a linear polarizer array camera for tissue polarimetric imaging,” Biomed. Opt. Express 8(11), 4933 (2017).

12. S. Shibata, N. Hagen, and Y. Otani, “Robust full Stokes imaging polarimeter with dynamic calibration,” Opt. Lett. 44(4), 891 (2019).

13. S. Roussel, M. Boffety, and F. Goudail, “On the optimal ways to perform full Stokes measurements with a linear division-of-focal-plane polarimetric imager and a retarder,” Opt. Lett. 44(11), 2927 (2019).

14. X. Li, H. Hu, F. Goudail, and T. Liu, “Fundamental precision limits of full Stokes polarimeters based on DoFP polarization cameras for an arbitrary number of acquisitions,” Opt. Express 27(22), 31261 (2019).

15. F. Goudail, X. Li, M. Boffety, S. Roussel, T. Liu, and H. Hu, “Precision of retardance autocalibration in full-Stokes division-of-focal-plane imaging polarimeters,” Opt. Lett. 44(22), 5410 (2019).

16. X. Li, B. Le Teurnier, M. Boffety, T. Liu, H. Hu, and F. Goudail, “Theory of autocalibration feasibility and precision in full Stokes polarization imagers,” Opt. Express 28(10), 15268 (2020).

17. B. Le Teurnier, X. Li, M. Boffety, H. Hu, and F. Goudail, “When is retardance autocalibration of microgrid-based full Stokes imagers possible and useful?” Opt. Lett. 45(13), 3474 (2020).

18. J. S. Tyo, C. F. LaCasse, and B. M. Ratliff, “Total elimination of sampling errors in polarization imagery obtained with integrated microgrid polarimeters,” Opt. Lett. 34(20), 3187–3189 (2009).

19. B. M. Ratliff, C. F. LaCasse, and J. S. Tyo, “Interpolation strategies for reducing IFOV artifacts in microgrid polarimeter imagery,” Opt. Express 17(11), 9112 (2009).

20. S. Gao and V. Gruev, “Bilinear and bicubic interpolation methods for division of focal plane polarimeters,” Opt. Express 19(27), 26161 (2011).

21. S. Gao and V. Gruev, “Gradient-based interpolation method for division-of-focal-plane polarimeters,” Opt. Express 21(1), 1137 (2013).

22. J. Zhang, H. Luo, B. Hui, and Z. Chang, “Image interpolation for division of focal plane polarimeters with intensity correlation,” Opt. Express 24(18), 20799 (2016).

23. A. Ahmed, X. Zhao, V. Gruev, J. Zhang, and A. Bermak, “Residual interpolation for division of focal plane polarization image sensors,” Opt. Express 25(9), 10651 (2017).

24. N. Li, Y. Zhao, Q. Pan, and S. G. Kong, “Demosaicking DoFP images using Newton's polynomial interpolation and polarization difference model,” Opt. Express 27(2), 1376 (2019).

25. N. Li, B. Le Teurnier, M. Boffety, F. Goudail, Y. Zhao, and Q. Pan, “No-reference physics-based quality assessment of polarization images and its application to demosaicking,” IEEE Trans. on Image Process. 30, 8983–8998 (2021).

26. K. He, J. Sun, and X. Tang, “Guided image filtering,” IEEE Trans. Pattern Anal. Mach. Intell. 35(6), 1397–1409 (2013).

27. A. Abubakar, X. Zhao, S. Li, M. Takruri, E. Bastaki, and A. Bermak, “A block-matching and 3-D filtering algorithm for Gaussian noise in DoFP polarization images,” IEEE Sens. J. 18(18), 7429–7435 (2018).

28. B. M. Ratliff, J. S. Tyo, J. K. Boger, W. T. Black, D. L. Bowers, and M. P. Fetrow, “Dead pixel replacement in LWIR microgrid polarimeters,” Opt. Express 15(12), 7596 (2007).


