Abstract

This article addresses the estimation of polarization signatures in the Mueller imaging framework by non-local means filtering. This is an extension of previous work dealing with Stokes signatures. The extension is not straightforward because of the gap in complexity between the Mueller framework and the Stokes framework. The estimation procedure relies on the Cholesky decomposition of the coherency matrix, thereby ensuring the physical admissibility of the estimate. We propose an original parameterization of the boundary of the set of Mueller matrices, which makes our approach possible. The proposed method is fully unsupervised. It allows noise removal and the preservation of edges. Applications to synthetic as well as real data are presented.

© 2013 OSA

1. Introduction

Mueller imaging is attracting ever-growing scientific attention owing to the vast amount of information that it can reveal. In this framework, the Stokes-Mueller formalism linearly links the polarization parameters to the raw radiance acquisitions. This observation model is defined pixelwise.

Pixelwise data reduction (inversion) approaches are classically used, the system being optimized so that the observation matrix (the polarization measurement matrix, PMM) reduces the impact of noise and systematic errors. The limitations of classical inversion procedures are known [1]: even for a well-calibrated imaging polarimeter, errors and noise may be amplified, yielding unphysical Mueller matrices. Moreover, pixelwise inversion does not account for the spatial distribution of information: the two-dimensional structure of the data has not been exploited systematically in the Mueller imaging framework. We note, however, that it was partially exploited in [2, 3] and fully considered in [4], but only in the particular case of classwise constant signatures.

A joint filtering-estimation procedure has been introduced in [5] to tackle the aforementioned shortcomings in the Stokes imaging modality. This procedure makes it possible to estimate polarization signatures for Stokes images while preserving sharp transitions. It is based on non-local means (NLM) filtering, an efficient denoising algorithm that outperforms popular denoising methods regarding the preservation of sharp edges and fine texture details [6]. It is here extended to the Mueller imaging context: the noise is filtered while physically admissible Mueller matrices are obtained at each pixel location. The proposed joint filtering-estimation procedure is expressed as a constrained optimization problem. Interestingly, we show that it can equivalently be seen as a two-step method: a filtering stage based on the NLM approach, followed by an estimation step ensuring physical admissibility. Our algorithm takes advantage of a nontrivial and original parameterization of the boundary of the set of Mueller matrices.

The article is organized as follows. Section 2 recalls the physical admissibility criteria related to Mueller matrices as well as the main lines of the approach developed in [5] for the Stokes imaging case. Section 3 extends the NLM-based polarimetric data reduction method to the Mueller imaging case. Section 4 deals with the application of the approach to synthetic and real data. Conclusions are drawn in section 5.

2. Related work

In this section, we briefly recall the physical criteria that must be met by any valid Mueller matrix. We summarize the approach that has been proposed in the Stokes imaging framework [5].

2.1. Physical admissibility of Mueller matrices

Whenever Mueller matrices are obtained from noisy polarized raw radiances, it is important to ensure that the estimation procedure provides physically admissible entities. A physical admissibility criterion for Mueller matrices therefore has to be defined. This issue has been widely addressed in the literature, where several authors have dealt with the physical admissibility of experimentally measured Mueller matrices [7-11]. Many criteria have been proposed to assess the validity of such matrices.

Amongst them, the Jones criterion (also called the criterion for “physical” Mueller matrices) requires that a Mueller matrix be a linear combination, with non-negative coefficients, of at most four pure Mueller matrices (a pure Mueller matrix is obtained from a Jones matrix; see [11] for the complete definition). As stated in [11], the Jones criterion makes it possible to represent a random assembly of Jones filters, thus covering perhaps every possible case of interest in polarization optics. Leaving aside any controversy about such a criterion, we adopt the Jones criterion in the following: a Mueller matrix is defined hereafter as a matrix that verifies this criterion. With this definition, it has been established that a Mueller matrix is admissible if and only if the eigenvalues of the related coherency matrix are non-negative [12]. Moreover, this criterion allows an interesting parameterization of the Mueller matrix in terms of the Cholesky decomposition. This parameterization was first used in [13] in a probabilistic context, and was efficiently transferred to the Mueller imaging framework in [4].

2.2. Joint filtering-estimation of Stokes vectors

In the context of the non-local means filtering of a one-channel noisy image I, the estimate of the denoised image Inlm at pixel x is a weighted average of all pixels in the image:

$$\forall x \in \Omega,\quad I_{\mathrm{nlm}}(x) = \sum_{y \in \Omega} w(x,y)\, I(y) = \operatorname*{arg\,min}_{a} \sum_{y \in \Omega} w(x,y)\, \bigl(I(y) - a\bigr)^2 , \tag{1}$$
where Ω is the support of the digital image (Ω ⊂ ℤ²), and where w(x, y) represents the similarity between pixels x and y, with 0 ≤ w(x, y) ≤ 1 and ∑_y w(x, y) = 1. The similarity between two pixels derives from the Euclidean distance between patches, a patch being a square window centered on x or y. More precisely, the similarity w(x, y) is proportional to exp(−‖P_x − P_y‖²/β²), where P_x (resp. P_y) is the vectorized set of gray-level intensities of the x-patch (resp. y-patch).
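
For concreteness, a minimal NumPy sketch of Eq. (1) is given below. It computes, for every pixel, the weights w(x, y) from the patch distances and then averages the whole image accordingly; this is the brute-force form of Eq. (1) (practical implementations restrict the sum to a search window), and the patch size and β are illustrative values, not those of the paper.

```python
# Minimal brute-force NLM filter for a single-channel image (Eq. (1)).
# half_patch and beta are illustrative values, not those used in the paper.
import numpy as np

def nlm_denoise(image, half_patch=3, beta=0.1):
    h, w = image.shape
    pad = np.pad(image, half_patch, mode="reflect")
    side = 2 * half_patch + 1
    # Vectorized patch P_x around every pixel x.
    patches = np.empty((h * w, side * side))
    for i in range(h):
        for j in range(w):
            patches[i * w + j] = pad[i:i + side, j:j + side].ravel()
    flat = image.ravel().astype(float)
    out = np.empty(h * w)
    for idx in range(h * w):
        d2 = np.sum((patches - patches[idx]) ** 2, axis=1)   # ||P_x - P_y||^2
        wgt = np.exp(-d2 / beta ** 2)                         # similarity w(x, y)
        wgt /= wgt.sum()                                      # sum_y w(x, y) = 1
        out[idx] = wgt @ flat                                 # weighted average, Eq. (1)
    return out.reshape(h, w)
```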

For Stokes imaging, K (K ≥ 4) measured intensities are available at each pixel location. They can be grouped in a K-component image I = (I(1), I(2), I(3),...,I(K)). Denoising I can be performed for each channel independently as follows:

$$\forall x \in \Omega,\quad I_{\mathrm{nlm}}(x) = \sum_{y \in \Omega} D_x(y)\, I(y) = \operatorname*{arg\,min}_{a} \sum_{y \in \Omega} \bigl(I(y) - a\bigr)^t D_x(y) \bigl(I(y) - a\bigr) , \tag{2}$$
where Dx(y) is a K × K diagonal matrix. The diagonal element dii (i = 1 ... K) is the similarity (or weight) between pixel x and pixel y for the ith channel, ∑yDx(y) being the identity matrix. At each pixel location x, the K intensity measurements I(x) are related to the Stokes vector S(x) by the linear equation I(x) ≃ P· S(x), where P is the K × 4 PMM. Equation (2) becomes:
$$\forall x \in \Omega,\quad \hat{S}(x) = \operatorname*{arg\,min}_{S} \sum_{y \in \Omega} \bigl(I(y) - P \cdot S\bigr)^t D_x(y) \bigl(I(y) - P \cdot S\bigr) . \tag{3}$$

The Stokes vector at pixel x is thereby estimated using the whole set of pixels in the image, in a joint filtering-estimation procedure. Since the gradient of the criterion of Eq. (3) is equal to the gradient of ‖Inlm(x) − P · S‖² (see [5]), the estimate can be written as:

$$\forall x \in \Omega,\quad \hat{S}(x) = \operatorname*{arg\,min}_{S} \bigl\| I_{\mathrm{nlm}}(x) - P \cdot S \bigr\|^2 . \tag{4}$$
The estimate of Eq. (4) amounts to denoising each channel using an NLM approach, and then estimating the Stokes vector at each pixel separately from the denoised image. The criterion of Eq. (4) has a unique minimum, which is the pseudo-inverse solution Ŝ(x) = (Pᵗ · P)⁻¹ · Pᵗ · Inlm(x). However, this solution may not verify the admissibility constraints: Ŝ(x) may not be a Stokes vector. The following constrained criterion has therefore been proposed:
$$\forall x \in \Omega,\quad \hat{S}(x) = \operatorname*{arg\,min}_{S \in B} \bigl\| I_{\mathrm{nlm}}(x) - P \cdot S \bigr\|^2 , \tag{5}$$
where B is the set of admissible Stokes vectors. Note that optimizing the criterion of Eq. (5) is equivalent to optimizing the one of Eq. (3) under the physical admissibility constraint since both criteria are equal up to a constant.

Since the criterion of Eq. (5) is strictly convex, and since B is a convex set, the criterion of Eq. (5) has a unique minimum. Instead of using a standard constrained optimization procedure, we use a much simpler procedure, which was defined in [5] and whose convergence to the global minimum is ensured. It is made possible by the fact that the boundary ∂B of B can be easily parameterized.
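
As an illustration, the unconstrained estimate of Eq. (4) and the standard Stokes admissibility test can be written in a few lines. The sketch below assumes a known K × 4 PMM P and the NLM-filtered intensities at one pixel; it does not reproduce the constrained optimization of Eq. (5).

```python
# Unconstrained Stokes estimate of Eq. (4) at one pixel and the standard
# admissibility test S0 >= sqrt(S1^2 + S2^2 + S3^2) defining the set B.
import numpy as np

def stokes_pseudo_inverse(P, i_nlm_x):
    """Least-squares solution (P^t P)^-1 P^t I_nlm(x); P is K x 4, i_nlm_x has K entries."""
    S, *_ = np.linalg.lstsq(P, i_nlm_x, rcond=None)
    return S

def is_admissible_stokes(S, tol=0.0):
    """True if S is a physically admissible Stokes vector (up to a tolerance)."""
    return S[0] >= np.linalg.norm(S[1:]) - tol
```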

3. Joint filtering-estimation of Mueller matrices

The filtering-estimation procedure which is suitable for Stokes vectors can be extended to the case of Mueller matrices provided (i) the criterion related to the estimation of the Mueller matrix can be written in the form of a weighted least squares problem (see Eq. (3) for the Stokes vector case), (ii) the Mueller matrix set is a convex set, and (iii) the boundary of the Mueller matrix set can be parameterized.

3.1. Definition of the criterion

A Mueller imaging polarimeter allows the indirect measurement of the Mueller matrix at each pixel of the image. The scene is successively illuminated by q (q ≥ 4) independent polarization states through a polarization state generator (PSG). Each of these states interacts with the object, yielding an outgoing Stokes vector that is sensed through the polarization state analyzer (PSA) with p (p ≥ 4) independent probing states. For each pixel location x of the image, the p × q intensity measurements I are related to the Mueller matrix M(x) at pixel x by:

$$I(x) = A \cdot M(x) \cdot W \;(+\,\mathrm{noise}) , \tag{6}$$
where A and W are the p × 4 PSA matrix, and the 4 × q PSG matrix, respectively. Equation (6) can be written as:
$$\underline{I(x)} = (W^t \otimes A) \cdot \underline{M(x)} \;(+\,\mathrm{noise}) = P \cdot \underline{M(x)} \;(+\,\mathrm{noise}) , \tag{7}$$
where the underline symbol denotes the operator which maps an m × n matrix into the mn × 1 vector formed by stacking up its columns, and ⊗ is the Kronecker product. The known pq × 16 matrix P is the PMM.
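
The identity behind Eq. (7), vec(A · M · W) = (Wᵗ ⊗ A) · vec(M) with column stacking, can be checked numerically; the matrices below are random stand-ins for the PSA and PSG, not calibrated ones.

```python
# Numerical check of Eq. (7): vec(A.M.W) = (W^t kron A) vec(M), with column stacking.
import numpy as np

rng = np.random.default_rng(0)
p, q = 6, 5                                  # numbers of PSA and PSG states (>= 4)
A = rng.normal(size=(p, 4))                  # stand-in PSA matrix
W = rng.normal(size=(4, q))                  # stand-in PSG matrix
M = rng.normal(size=(4, 4))                  # some Mueller matrix

vec = lambda X: X.reshape(-1, order="F")     # the "underline" (column stacking) operator
P = np.kron(W.T, A)                          # pq x 16 polarization measurement matrix
assert np.allclose(vec(A @ M @ W), P @ vec(M))
```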

In the context of Mueller imaging, the criterion of Eq. (3) that is suitable for the Stokes case becomes:

$$\forall x \in \Omega,\quad \underline{\hat{M}(x)} = \operatorname*{arg\,min}_{M} \sum_{y \in \Omega} \bigl(\underline{I(y)} - P \cdot \underline{M}\bigr)^t D_x(y) \bigl(\underline{I(y)} - P \cdot \underline{M}\bigr) , \tag{8}$$
where Dx(y) is a pq × pq diagonal matrix, ∑yDx(y) being the identity matrix. The diagonal element dii (i = 1 ... pq) is the similarity (or weight) between pixel x and pixel y for the ith channel. As in the Stokes vector case, the estimate of Eq. (8) is equivalent to:
$$\forall x \in \Omega,\quad \underline{\hat{M}(x)} = \operatorname*{arg\,min}_{M} \bigl\| \underline{I_{\mathrm{nlm}}(x)} - P \cdot \underline{M} \bigr\|^2 , \tag{9}$$
where Inlm is defined as:
$$\forall x \in \Omega,\quad \underline{I_{\mathrm{nlm}}(x)} = \sum_{y \in \Omega} D_x(y)\, \underline{I(y)} . \tag{10}$$
Finally, incorporating the physical admissibility constraints in Eq. (8) (or equivalently in Eq. (9)) leads to:
$$\forall x \in \Omega,\quad \underline{\hat{M}(x)} = \operatorname*{arg\,min}_{M \in B} \bigl\| \underline{I_{\mathrm{nlm}}(x)} - P \cdot \underline{M} \bigr\|^2 , \tag{11}$$
where B denotes in this part the set of admissible Mueller matrices.

The proposed criterion does not depend on the way the weights are computed. Consequently, an algorithm proposed for a one-channel image and computing the denoised value at a pixel by linearly combining pixel values of the original image (see Eq. (1)) can be extended to Mueller images using Eq. (8), or equivalently Eq. (9). Several authors have proposed strategies to determine the weights under various noise distributions (see for example [14]). Therefore, the proposed approach is well-suited for different kinds of noise distributions and may benefit from future advances in denoising methods.
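
A compact sketch of this channel-wise extension is given below: each of the pq channels is filtered independently (Eq. (10), reusing the single-channel nlm_denoise sketch of Sec. 2.2), and the unconstrained minimizer of Eq. (9) at every pixel is then the least-squares solution against the pq × 16 PMM. The admissibility constraint of Eq. (11) is handled separately (Sec. 3.3).

```python
# Channel-wise NLM filtering of the pq raw images (Eq. (10)) followed by the
# unconstrained least-squares inversion of Eq. (9); nlm_denoise is the
# single-channel sketch of Sec. 2.2, and P is the pq x 16 PMM of Eq. (7).
import numpy as np

def mueller_pseudo_inverse(intensity_stack, P, **nlm_kwargs):
    """intensity_stack: (pq, H, W) raw images; returns (H, W, 4, 4) unconstrained estimates."""
    i_nlm = np.stack([nlm_denoise(c, **nlm_kwargs) for c in intensity_stack])  # Eq. (10)
    pq, h, w = i_nlm.shape
    sol, *_ = np.linalg.lstsq(P, i_nlm.reshape(pq, -1), rcond=None)            # 16 x (H*W)
    # Undo the column stacking of Eq. (7): each column of sol is vec(M) at one pixel.
    return sol.T.reshape(h, w, 4, 4).transpose(0, 1, 3, 2)
```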

3.2. Properties of the Mueller matrix set

There is a one-to-one correspondence between a Mueller matrix M and the system coherency matrix H, which reads:

$$\underline{M} = T \cdot \underline{H} , \tag{12}$$
where T is a 16 × 16 invertible complex transformation matrix whose expression can be found, for example, in [4]. Given that a Mueller matrix M is defined here as a matrix that verifies the Jones criterion (see Sec. 2.1), M is physically acceptable iff H is positive semidefinite, i.e. H ∈ ℋ₊⁴, where ℋ₊⁴ denotes the set of 4 × 4 Hermitian positive semidefinite matrices [12]. Since ℋ₊⁴ is a convex set, and since the image of a convex set under a linear transformation is also convex, B is a convex set.

In order to extend the optimization algorithm that was proposed for Stokes vectors, we have to find a parameterization of the boundary ∂B of B. As ∂B is the image of the boundary ∂ℋ₊⁴ of ℋ₊⁴ under the linear transformation T, the key point is to define a parameterization of ∂ℋ₊⁴. To this end, the following property is of great interest.

Property 1: H ∈ ∂ℋ₊⁴ iff H ∈ ℋ₊⁴ and H is not invertible.

Property 1 is well known for the set 𝒮₊⁴ of symmetric positive semidefinite (real) matrices: the interior of 𝒮₊⁴ consists of the positive definite (full-rank) matrices, and all singular positive semidefinite matrices (those having at least one null eigenvalue) lie on the boundary [15, p. 43]. This result extends to (complex) Hermitian positive semidefinite matrices because the eigenvalues of any matrix are continuous functions of the elements of the matrix [16, p. 36].
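
Numerically, Property 1 and the admissibility test reduce to an eigenvalue check; a small helper, with an arbitrary tolerance, could read as follows.

```python
# Admissibility test based on the eigenvalues of the coherency matrix H:
# all eigenvalues >= 0 (inside or on the boundary of the set), smallest one
# equal to 0 (on the boundary, Property 1). The tolerance is arbitrary.
import numpy as np

def classify_coherency(H, tol=1e-10):
    eig = np.linalg.eigvalsh(H)               # real eigenvalues of a Hermitian matrix
    if eig.min() < -tol:
        return "not admissible"               # H outside the set of coherency matrices
    return "boundary" if eig.min() <= tol else "interior"
```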

Moreover, every H ∈ ℋ₊⁴ admits a Cholesky decomposition which reads:

$$H = \Lambda \cdot \Lambda^t , \tag{13}$$
where Λt is the conjugate transpose of Λ. Matrix Λ is a lower triangular matrix composed of 16 real parameters {λi},
$$\Lambda = \begin{pmatrix} \lambda_1 & 0 & 0 & 0 \\ \lambda_5 + i\lambda_6 & \lambda_2 & 0 & 0 \\ \lambda_7 + i\lambda_8 & \lambda_9 + i\lambda_{10} & \lambda_3 & 0 \\ \lambda_{11} + i\lambda_{12} & \lambda_{13} + i\lambda_{14} & \lambda_{15} + i\lambda_{16} & \lambda_4 \end{pmatrix} . \tag{14}$$
Property 1 and the fact that det(Λ · Λt) = (λ1 λ2 λ3 λ4)² lead to Property 2.

Property 2: Λ · Λt ∈ ∂ℋ₊⁴ iff at least one value amongst the λi (i = 1 ... 4) is null.

Property 2 makes it possible to parameterize ∂ℋ₊⁴, and consequently ∂B, easily. However, the parameterization of the boundary ∂ℋ₊⁴ related to Property 2 is not practical in an optimization framework, because a displacement on ∂B may require forcing another λi (i = 1 ... 4) to 0, and possibly relaxing the constraint on the previous one. A more useful property can be stated to parameterize ∂ℋ₊⁴.

Property 3: if H ∈ ∂ℋ₊⁴, then there exists Λ with λ4 = 0 such that H = Λ · Λt.

This property is proved in Appendix A. From Property 3 and from Eq. (12), we derive:

$$\partial B = \Bigl\{ M : \underline{M} = T \cdot \underline{\Lambda \cdot \Lambda^t} \ \text{with}\ \lambda_4 = 0 \Bigr\} . \tag{15}$$

Since ∂B can be easily parameterized (Eq. (15)), the optimization algorithm that has been proposed for Stokes vectors can be extended to Mueller matrices. The optimization of Eq. (11) is briefly described in the next section.
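
A quick numerical check of this parameterization: drawing the 15 free λ's at random with λ4 = 0 always produces a coherency matrix H = Λ · Λt that is positive semidefinite and singular, i.e. on ∂ℋ₊⁴ (Property 1); mapping it through the matrix T of Eq. (12), which is not reproduced here, would then give a Mueller matrix on ∂B.

```python
# Sanity check of Eq. (15): random lambdas with lambda_4 = 0 always give a
# positive semidefinite, singular coherency matrix, i.e. a point of the boundary.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(100):
    l = rng.normal(size=17)      # 1-based indexing of the lambdas; l[0] is unused
    l[4] = 0.0                   # boundary constraint of Eq. (15)
    L = np.array([
        [l[1],               0.0,                 0.0,                 0.0],
        [l[5] + 1j * l[6],   l[2],                0.0,                 0.0],
        [l[7] + 1j * l[8],   l[9] + 1j * l[10],   l[3],                0.0],
        [l[11] + 1j * l[12], l[13] + 1j * l[14],  l[15] + 1j * l[16],  l[4]]])
    H = L @ L.conj().T
    eig = np.linalg.eigvalsh(H)
    assert eig.min() > -1e-9 and abs(eig).min() < 1e-9   # PSD with a zero eigenvalue
```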

3.3. Optimization algorithm

Since the criterion of Eq. (11) is strictly convex and B is convex, Eq. (11) has a unique minimum, denoted M̂(x).

The optimization algorithm for Eq. (11) proceeds as follows. If the pseudo-inverse solution belongs to B, then it is the sought minimum M̂(x). Otherwise, M̂(x) belongs to ∂B and is estimated by using the following two strategies iteratively:

  1. local descent on the boundary ∂B (see Property 3):
    $$\underline{\hat{M}(x)} = T \cdot \underline{\hat{\Lambda} \cdot \hat{\Lambda}^t} \quad \text{with} \quad \hat{\Lambda} = \operatorname*{arg\,min}_{\Lambda,\ \lambda_4 = 0} \bigl\| \underline{I_{\mathrm{nlm}}(x)} - P \cdot T \cdot \underline{\Lambda \cdot \Lambda^t} \bigr\|^2 \tag{16}$$
  2. local descent on the interior of B.

The algorithm is initialized on ∂B with the orthogonal projection of the pseudo-inverse solution onto B, and strategy (i) is applied. When a minimum is obtained, computing the gradient of the criterion of Eq. (11) makes it possible to determine whether this minimum is the global one or a local one: the gradient points toward the interior of B iff the minimum is the global one. If the minimum is only a local one, the criterion can be decreased by entering the interior of B, and the descent is continued with strategy (ii). With strategy (ii), the boundary is guaranteed to be reached again, since there is no local minimum in the interior of B. Strategy (i) is then used again. This procedure is repeated until the global minimum is reached with strategy (i).

The optimization algorithm used for strategy (i) is a subspace trust-region method based on the interior-reflective Newton method described in [17, 18]. For strategy (ii), a simple gradient approach is used: the maximal admissible step constraining the current estimate M̂(x) to remain a Mueller matrix is computed at each iteration. The resulting algorithm is much simpler than a standard constrained optimization procedure, and it is guaranteed to converge to the unique minimum.
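
The sketch below illustrates strategy (i) only, under loose assumptions: the matrices T (Eq. (12)) and P are taken as given NumPy arrays, the 15 free λ's parameterize Λ with λ4 = 0, and SciPy's generic minimizer stands in for the subspace trust-region interior-reflective Newton method of [17, 18]; the alternation with strategy (ii) and the careful initialization of Appendix B are not reproduced.

```python
# Strategy (i) as a generic descent over the 15 free lambdas (lambda_4 = 0),
# minimizing the criterion of Eq. (16). T (16 x 16, complex) and P (pq x 16)
# are assumed to be available; scipy's minimizer replaces the trust-region
# Newton method of [17, 18] used in the paper.
import numpy as np
from scipy.optimize import minimize

FREE = np.r_[1:4, 5:17]                      # indices of the 15 free lambdas

def lambda_to_L(free_lambdas):
    """Lower-triangular Lambda of Eq. (14) with lambda_4 forced to 0."""
    l = np.zeros(17)
    l[FREE] = free_lambdas
    return np.array([
        [l[1],               0.0,                 0.0,                 0.0],
        [l[5] + 1j * l[6],   l[2],                0.0,                 0.0],
        [l[7] + 1j * l[8],   l[9] + 1j * l[10],   l[3],                0.0],
        [l[11] + 1j * l[12], l[13] + 1j * l[14],  l[15] + 1j * l[16],  0.0]])

def boundary_descent(i_nlm_vec, P, T, init_lambdas):
    vec = lambda X: X.reshape(-1, order="F")
    def cost(free_lambdas):
        L = lambda_to_L(free_lambdas)
        m = T @ vec(L @ L.conj().T)          # Eq. (12): underline(M) = T underline(H)
        return float(np.sum(np.abs(i_nlm_vec - P @ m) ** 2))
    res = minimize(cost, init_lambdas, method="BFGS")
    return lambda_to_L(res.x)
```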

Note, however, that one major difficulty arises in the implementation of the optimization algorithm. Strategy (i) starts from a Mueller matrix M that belongs to ∂B. This matrix has been obtained either from the orthogonal projection of the pseudo-inverse solution onto B, or from strategy (ii), which is guaranteed to converge to a Mueller matrix of ∂B. In both cases, starting from M ∈ ∂B, the initialization of strategy (i) requires the determination of Λ such that λ4 = 0 and H = Λ · Λt, where H is the coherency matrix associated with M. Owing to numerical problems, this computation has to be done carefully (see Appendix B).

Finally, if M̂ denotes the estimated Mueller matrix image, the image Î defined by Î(x) = P · M̂(x) for each x (with M̂(x) vectorized as in Eq. (7)) can be considered as the denoised version of I. For pq = 16, the images Î and Inlm differ only at the pixels for which the pseudo-inverse solution does not satisfy the admissibility constraints.

4. Results

4.1. Results on synthetic data

We synthesized a 256 × 256 pixel Mueller image M^gt composed of two distinct regions: (i) a background with a uniform polarization signature, and (ii) a circle of radius 100 pixels with a smoothly varying polarization signature, placed at the center of the image (see Fig. 1). The Mueller matrices of the synthetic target lie on ∂B, so that even a small noise level may lead to nonphysical solutions if the pseudo-inverse approach is used.

 

Fig. 1 Mueller image of the synthetic scene. For convenience, 16 images (one for each channel of the Mueller matrix) are represented on a 4 × 4 grid. Row i column j image corresponds to the mij Mueller matrix element image.


The Mueller matrices are defined randomly based on the λ parameterization with λ4 = 0 as follows:

  • The Mueller matrix associated with the background has been defined by drawing each component of Λ (except λ4, which is set to 0) according to a normal distribution of standard deviation 1.
  • For the circle, 15 images of size 256 × 256 (one image for each λi, i ≠ 4) are computed by drawing the value at each pixel according to a normal distribution of standard deviation 1. Each image is then filtered with an isotropic Gaussian filter of standard deviation 5 pixels, thereby creating 15 smooth images. A constant, drawn according to a normal distribution of standard deviation 1, is then added to each image.

The Mueller image M^gt can then be computed. A standard observation model was finally used to generate intensity images I_i^gt (i = 1 ... 16), which were degraded by adding white Gaussian noise of variance σ².
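
A rough sketch of this construction is given below (details such as the random seed and the exact handling of the background are assumptions); gaussian_filter implements the isotropic Gaussian smoothing.

```python
# Sketch of the synthetic lambda maps: a single random draw for the background,
# and, inside the disc, smoothed white Gaussian noise plus a random constant.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
size, radius = 256, 100
yy, xx = np.mgrid[:size, :size]
disc = (xx - size // 2) ** 2 + (yy - size // 2) ** 2 <= radius ** 2

lam_bg = rng.normal(size=15)                 # background lambdas (lambda_4 = 0 everywhere)
lam_maps = np.empty((15, size, size))
for k in range(15):
    smooth = gaussian_filter(rng.normal(size=(size, size)), sigma=5)
    smooth += rng.normal()                   # random constant offset
    lam_maps[k] = np.where(disc, smooth, lam_bg[k])
# lam_maps[:, i, j] are the 15 free Cholesky parameters at pixel (i, j); the
# ground-truth Mueller image M^gt follows from Eqs. (12)-(14).
```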

Since the ground truth is known, the estimation accuracy can be evaluated by comparing the estimated values (the Mueller matrices M̂ and the associated intensity values Î) with the original ones (M^gt and I^gt). The method is first evaluated by comparing the original image I^gt (noise-free intensity image) with its estimate Î using the peak signal-to-noise ratio (PSNR):

$$\mathrm{PSNR}(I^{gt}, \hat{I}) = 10 \log_{10} \left( \frac{d^2}{\frac{1}{16\,P} \sum_{j=1}^{16} \alpha_j^2 \sum_{x} \bigl(I_j^{gt}(x) - \hat{I}_j(x)\bigr)^2} \right) , \tag{17}$$
where α_j is computed so that the dynamic range of α_j · I_j^gt is d (for example 255), and where P is the number of pixels. Complementary to the PSNR, a measure dedicated to Mueller matrices is also used to evaluate the estimation accuracy:
$$e(M^{gt}, \hat{M}) = 100 \cdot \frac{1}{P} \sum_{x} \frac{\bigl\| M^{gt}(x) - \hat{M}(x) \bigr\|^2}{\bigl\| M^{gt}(x) \bigr\|^2} . \tag{18}$$
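
Both measures translate directly into code. In the sketch below the intensity images are stored as (16, H, W) arrays and the Mueller images as (H, W, 4, 4) arrays, and α_j is taken as d divided by the dynamic range (max minus min) of the j-th ground-truth channel, which is one possible reading of the definition above.

```python
# Direct implementations of Eqs. (17) and (18). I_gt and I_hat are (16, H, W)
# arrays, M_gt and M_hat are (H, W, 4, 4) arrays; alpha_j = d / (max - min) of
# the j-th noise-free channel (one reading of the definition of alpha_j).
import numpy as np

def psnr(I_gt, I_hat, d=255.0):
    alphas = np.array([d / (c.max() - c.min()) for c in I_gt])
    mse = np.mean([(a * (gt - est)) ** 2 for a, gt, est in zip(alphas, I_gt, I_hat)])
    return 10.0 * np.log10(d ** 2 / mse)

def mueller_error(M_gt, M_hat):
    num = np.sum((M_gt - M_hat) ** 2, axis=(-2, -1))
    den = np.sum(M_gt ** 2, axis=(-2, -1))
    return 100.0 * np.mean(num / den)
```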

Four methods, MPI, MPIortho, MPIproj, and MP, are evaluated:

  • for MPI: M̂(x) is the pseudo-inverse solution;
  • for MPIortho: M̂(x) is the pseudo-inverse solution, further projected orthogonally onto the Mueller matrix set if it does not verify the admissibility constraints;
  • for MPIproj: M̂(x) is the pseudo-inverse solution, further projected onto the Mueller matrix set if it does not verify the admissibility constraints (the projection is performed with the proposed optimization algorithm, so as to minimize the reconstruction error between the observed measurements and the predicted ones);
  • MP is the proposed approach.

Note that the methods MPI, MPIortho, MPIproj do not use any spatial filtering.

The PSNR (Eq. (17)) and the Mueller matrix estimation error (Eq. (18)) obtained with MPI, MPIortho, MPIproj, and MP are given in Table 1 for different values of σ (from σ = 0.05 to 1).


Table 1. PSNR (left) and Mueller matrix estimation error (right) obtained with the four different methods, and for different values of σ. Bold values correspond to the best results.

Table 1 shows that the proposed approach outperforms the other methods in the context of this experiment. This illustrates the benefit of accounting for spatial information in the estimation and, in particular, the efficiency of the NLM approach. The PSNR (left part of Table 1) obtained with the proposed approach is increased by at least 10 dB, and the estimation error (right part of Table 1) is decreased by a factor of approximately 4 to 10. The same conclusion can be drawn from Fig. 2, which presents the Mueller matrix images obtained from the noisy images (σ = 0.1) with the NLM approach (a) and with MPI (b). Since the results obtained with MPIproj and MPIortho are visually very similar to Fig. 2(b), they are not presented here. The channels associated with the third row of the Mueller matrix, for example, are noisy with MPI but not with the NLM approach. Besides the noise reduction, we can observe that edges have been preserved. This property is inherited from NLM filtering, which preserves sharp edges and fine texture details [6].

 

Fig. 2 Mueller image of the synthetic scene estimated with the proposed (NLM) approach (a) and pseudo-inverse (b) from the noisy intensity images (σ = 0.1). Data are presented with the convention used in Fig. 1.


Finally, amongst the methods that do not consider spatial information, MPIproj provides the best results. This shows that the traditional orthogonal projection onto the Mueller matrix set (method MPIortho) is not an efficient approach compared with the proposed projection, which minimizes the reconstruction error between the observed measurements and the predicted ones while constraining the solution to be physically acceptable. Note also that MPIortho provides better results than MPI in terms of Mueller matrix estimation accuracy, but less satisfactory results in terms of PSNR. This highlights the fact that the choice of the orthogonal projection is arbitrary.

4.2. Results on real data

To evaluate the ability of the method to handle real images, we used a well-calibrated Mueller polarimeter [1] to image two real scenes with different kinds of polarization signatures: (i) the first scene (Fig. 3) corresponds to two shapes made from two different transparent thin films (food-wrapping cellophane) sandwiched between two glass sections; this object is expected to exhibit classwise constant polarization responses; (ii) the second one consists of an ensemble of objects (Fig. 4) leading to an image with various polarization responses and complex geometrical properties. The observations were carried out through a narrow-band interference filter to ensure that the PMM is known with high accuracy. The Mueller images were reconstructed from the raw intensity images using MP (the proposed approach) and MPIortho (the classical pseudo-inverse approach, except that the solution is further projected orthogonally onto the Mueller matrix set if it does not verify the admissibility constraints).

 

Fig. 3 Mueller image of the first scene estimated with the proposed (NLM) approach (a) and projected pseudo-inverse (b). Data are presented with the convention used in Fig. 1 except that all channels but m11 have been pixelwise normalized with respect to m11 (mij(x) = mij(x)/m11(x)).


 

Fig. 4 Mueller image of the second scene estimated with the proposed (NLM) approach (left column) and projected pseudo-inverse (right column). For convenience, only channels m22 (a) and m44 (c) are presented. Images (b) and (d) correspond to a zoom in of (a) and (c), respectively.


Figures 3 and 4 present the results for the first and second scenes, respectively. For both scenes, the images reconstructed by our approach are less noisy than the pseudo-inverse images. For the first scene, for example, the information content of some of the Mueller channels is lost in the projected pseudo-inverse solution (e.g., the m11, m12, m21, and m22 element images). In particular, many fine details are irretrievably lost (channels m12 and m21). For the second scene, all the channels estimated with MPIortho are particularly noisy, which is not the case with the proposed approach. For the sake of conciseness, only channels m22 (Fig. 4(a)) and m44 (Fig. 4(c)) are presented. The MPIortho approach propagates the intensity noise to the Mueller channels, leading to less usable Mueller images. The results obtained on the second scene also illustrate the efficiency of the NLM filtering approach in preserving edges and thin structures (see Figs. 4(b) and 4(d)).

Since the NLM solution is much less noisy than the projected pseudo-inverse one, it also provides a better estimation of physical characteristics (e.g., diattenuation, depolarization). Figure 5 shows the mean diattenuation computed from the MP and MPIortho solutions for the first scene. The poor quality of the result obtained with the MPIortho approach can be explained by the fact that the upper-left 2 × 2 block of the Mueller matrix is particularly noisy in this case (see Fig. 3(b)).
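
For reference, the scalar diattenuation can be read off the first row of each Mueller matrix (this is the diattenuation of the Lu–Chipman polar decomposition); the sketch below assumes an (H, W, 4, 4) Mueller image and does not reproduce the exact averaging behind the mean diattenuation of Fig. 5.

```python
# Scalar diattenuation map from the first row of each Mueller matrix
# (Lu-Chipman convention); mueller_image is an (H, W, 4, 4) array.
import numpy as np

def diattenuation(mueller_image, eps=1e-12):
    m = mueller_image
    return (np.sqrt(m[..., 0, 1] ** 2 + m[..., 0, 2] ** 2 + m[..., 0, 3] ** 2)
            / (m[..., 0, 0] + eps))
```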

 

Fig. 5 Mean diattenuation of the first scene obtained by the polar decomposition applied to the NLM (left column) and to the MPIortho solutions (right column).


More generally, we found that, for all the cases considered here, our approach performs better than the pseudo-inverse solution widely used in the Mueller data reduction stage. Our approach is fully unsupervised and requires no parameter tuning.

5. Conclusion

We have introduced a new reconstruction-estimation method that yields Mueller images from raw measured intensities. The proposed approach performs better than state-of-the-art methods while guaranteeing physically admissible Mueller signatures. The main interest of this study lies in the association of up-to-date denoising algorithms with physical admissibility criteria for Mueller matrices, which allows a better exploitation and interpretation of the final images. The approach turns out to be of great benefit, since the reconstructed signature images are little affected by measurement noise while sharp transitions and thin structures are preserved.

A. Characterization of the boundary ∂ℋ₊⁴ of the set of H-type matrices

The objective of this appendix is to prove Property 3 (stated in Sec. 3.2), which reads: if H ∈ ∂ℋ₊⁴, then there exists Λ with λ4 = 0 such that H = Λ · Λt.

Let us write

$$H = \begin{pmatrix} h_1 & h_5 - i h_6 & h_7 - i h_8 & h_{11} - i h_{12} \\ h_5 + i h_6 & h_2 & h_9 - i h_{10} & h_{13} - i h_{14} \\ h_7 + i h_8 & h_9 + i h_{10} & h_3 & h_{15} - i h_{16} \\ h_{11} + i h_{12} & h_{13} + i h_{14} & h_{15} + i h_{16} & h_4 \end{pmatrix} . \tag{19}$$

By substituting Eq. (14) and Eq. (19) into Eq. (13), we obtain a system of equations in the λ's that has at least one solution if H ∈ ℋ₊⁴. In practice, there are several solutions when H ∈ ℋ₊⁴. For example, if H ∈ ℋ₊⁴ and H is invertible, then the signs of λ1, λ2, λ3, and λ4 can be set arbitrarily, leading to 2⁴ = 16 different solutions. In the general case where H ∈ ℋ₊⁴, one possible solution in the λ's is:

$$\begin{cases}
\lambda_1 = \sqrt{h_1} \\
\lambda_i = h_i / \lambda_1 \ \ (\text{for } i = 5 \text{ to } 8 \text{ and for } i = 11 \text{ to } 12) \ \text{if } \lambda_1 > 0,\ \text{and } 0 \text{ otherwise} \\
\lambda_2 = \sqrt{h_2 - \lambda_5^2 - \lambda_6^2} \\
\lambda_9 = \tfrac{1}{\lambda_2}\bigl(h_9 - \lambda_5\lambda_7 - \lambda_6\lambda_8\bigr) \ \text{if } \lambda_2 > 0,\ \text{and } 0 \text{ otherwise} \\
\lambda_{10} = \tfrac{1}{\lambda_2}\bigl(h_{10} + \lambda_6\lambda_7 - \lambda_5\lambda_8\bigr) \ \text{if } \lambda_2 > 0,\ \text{and } 0 \text{ otherwise} \\
\lambda_{13} = \tfrac{1}{\lambda_2}\bigl(h_{13} - \lambda_5\lambda_{11} - \lambda_6\lambda_{12}\bigr) \ \text{if } \lambda_2 > 0,\ \text{and } 0 \text{ otherwise} \\
\lambda_{14} = \tfrac{1}{\lambda_2}\bigl(h_{14} + \lambda_6\lambda_{11} - \lambda_5\lambda_{12}\bigr) \ \text{if } \lambda_2 > 0,\ \text{and } 0 \text{ otherwise} \\
\lambda_3 = \sqrt{h_3 - \lambda_7^2 - \lambda_8^2 - \lambda_9^2 - \lambda_{10}^2} \\
\lambda_{15} = \tfrac{1}{\lambda_3}\bigl(h_{15} - \lambda_7\lambda_{11} - \lambda_8\lambda_{12} - \lambda_9\lambda_{13} - \lambda_{10}\lambda_{14}\bigr) \ \text{if } \lambda_3 > 0,\ \text{and } 0 \text{ otherwise} \\
\lambda_{16} = \tfrac{1}{\lambda_3}\bigl(h_{16} + \lambda_8\lambda_{11} - \lambda_7\lambda_{12} + \lambda_{10}\lambda_{13} - \lambda_9\lambda_{14}\bigr) \ \text{if } \lambda_3 > 0,\ \text{and } 0 \text{ otherwise} \\
\lambda_4 = \sqrt{h_4 - \lambda_{11}^2 - \lambda_{12}^2 - \lambda_{13}^2 - \lambda_{14}^2 - \lambda_{15}^2 - \lambda_{16}^2}
\end{cases} \tag{20}$$

If H is invertible (λi > 0 for i = 1 ... 4), Eq. (20) can be obtained either by solving the system of equations in the λ's directly or by using the algorithm of [19, p. 145]. In [19], the algorithm is described for symmetric positive definite (real) matrices, but the extension to (complex) Hermitian positive definite matrices is straightforward.

If H is not invertible (i.e. ∃ i ∈ [1, 4] such that λi = 0), the i-th column of Λ is set to 0 (see Eq. (20)). This is justified in [19, p. 148] for real matrices, but the same reasoning applies to complex ones.

We now suppose that H ∈ ∂ℋ₊⁴, which implies that λ1 = 0, λ2 = 0, λ3 = 0, or λ4 = 0. In the following, Λ denotes the lower triangular matrix whose elements are defined in Eq. (20). By construction, Λ verifies H = Λ · Λt, but λ4 may be greater than 0. To obtain a solution with λ4 = 0, an equivalence class is defined on the set of lower triangular matrices (see Eq. (14)): Λa and Λb are equivalent if there exists a unitary matrix R such that Λa = Λb · R. Matrices Λa and Λb then correspond to the same coherency matrix Λa · Λat = Λb · Λbt, and consequently to the same Mueller matrix. If H ∈ ∂ℋ₊⁴, we show hereafter that the class of Λ contains a matrix with λ4 = 0. Three cases have to be considered, depending on which λi vanishes:

  1. if H is such that λ1 = 0, then from Eq. (20) we have
    $$\Lambda = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & \lambda_2 & 0 & 0 \\ 0 & \lambda_9 + i\lambda_{10} & \lambda_3 & 0 \\ 0 & \lambda_{13} + i\lambda_{14} & \lambda_{15} + i\lambda_{16} & \lambda_4 \end{pmatrix} , \tag{21}$$
    with H = Λ · Λt. A matrix equivalent to Λ can be derived from Λ by swapping its first and last columns. This can be formally defined as follows: Λ is equivalent to Λ · R, with
    $$R = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix} . \tag{22}$$
    The class of Λ therefore contains a matrix, Λ · R, with λ4 = 0: Λ · R is a lower triangular matrix with λ4 = 0 that verifies H = (Λ · R) · (Λ · R)t;
  2. if H is such that λ2 = 0 in Eq. (20), then the second column of Λ vanishes and the reasoning above applies with
    $$R = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix} ; \tag{23}$$
  3. if H is such that λ3 = 0 in Eq. (20), then the third column of Λ vanishes and the reasoning above applies with
    $$R = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix} . \tag{24}$$

This proves that every H ∈ ∂ℋ₊⁴ admits a decomposition H = Λ · Λt with λ4 = 0.

Note that when λi = 0 (i = 1 ... 3), the elements of the i-th column of Λ cannot be uniquely determined. In this case, these elements have been set to 0 (see Eq. (20)). This choice provides an easy way to find a matrix equivalent to Λ that verifies λ4 = 0: if the undetermined elements had not been set to 0, swapping columns would have led to a non-triangular matrix, and such a simple procedure could not have been used.

B. Algorithm for the estimation of Λ from M ∈ ∂B

Let M be a Mueller matrix that belongs to the boundary ∂B of the Mueller matrix set, and let H be the associated coherency matrix (H ∈ ∂ℋ₊⁴). It has been shown in Appendix A that there exists Λ with λ4 = 0 such that H = Λ · Λt. The goal of this appendix is to explain how to determine Λ. At first sight, an algorithm can easily be derived from Appendix A:

  1. Determine Λ using Eq. (20).
  2. Let i0 be an integer in [1, 4] verifying λi0 = 0. Such an integer exists since at least one value of λi is null (i = 1 ... 4).
  3. Swap the i0-th column of Λ with its last one to obtain the desired result.

However, such an algorithm cannot be used to properly estimate Λ. Indeed, owing to numerical errors, it may happen that λ1, λ2, λ3, and λ4 are all strictly positive. One could then define a threshold below which the values of λi (i = 1 ... 4) would be set to 0. However, there is then a risk that a λi is set to 0 while it should not be, leading to a matrix Λ such that Λ · Λt differs strongly from H.

To solve this problem, several matrices Λ are computed by setting some of the values λi (i = 1 ... 4) to 0. This can be formally described as follows. Let 𝒜 denote a combination of p elements of {λ1, λ2, λ3, λ4} (1 ≤ p ≤ 4). Each λi of 𝒜 is set to 0, and the other entries of Λ are computed from Eq. (20). All combinations are tested, leading to 15 different matrices Λ. The matrix Λ that minimizes the quadratic error between H and Λ · Λt is selected. The equivalence class defined in Appendix A finally makes it possible to derive from Λ a lower triangular matrix Λ0 with λ4 = 0 that verifies H ≈ Λ0 · Λ0t.
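
In code, the selection over the 15 candidate matrices can be organized as below; cholesky_parameters(H, zero_set) is a hypothetical helper implementing the recurrences of Eq. (20) with the listed diagonal λ's clamped to 0.

```python
# Appendix-B selection: try every non-empty subset of {lambda_1, ..., lambda_4}
# forced to zero, solve Eq. (20) for the remaining parameters, and keep the
# factor whose Gram matrix best reproduces H. cholesky_parameters(H, zero_set)
# is a hypothetical helper implementing the recurrences of Eq. (20).
import itertools
import numpy as np

def best_boundary_factor(H, cholesky_parameters):
    best_L, best_err = None, np.inf
    for p in range(1, 5):
        for zero_set in itertools.combinations((1, 2, 3, 4), p):   # 15 subsets in total
            L = cholesky_parameters(H, zero_set)
            err = np.linalg.norm(H - L @ L.conj().T) ** 2
            if err < best_err:
                best_L, best_err = L, err
    return best_L   # the column swap of Appendix A then yields Lambda_0 with lambda_4 = 0
```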

Acknowledgment

Giorgos Sfikas was supported by a grant from Région Alsace (France).

References and links

1. J. Zallat, S. Anouz, and M. P. Stoll, “Optimal configurations for imaging polarimeters: impact of image noise and systematic errors,” J. Opt. A: Pure Appl. Opt. 8, 807–814 (2006).

2. J. Zallat, Ch. Collet, and Y. Takakura, “Clustering of polarization-encoded images,” Appl. Opt. 43, 283–292 (2004).

3. Ch. Collet, J. Zallat, and Y. Takakura, “Clustering of Mueller matrix images for skeletonized structure detection,” Opt. Express 12, 1271–1280 (2004).

4. J. Zallat, Ch. Heinrich, and M. Petremand, “A Bayesian approach for polarimetric data reduction: the Mueller imaging case,” Opt. Express 16, 7119–7133 (2008).

5. S. Faisan, Ch. Heinrich, F. Rousseau, A. Lallement, and J. Zallat, “Joint filtering-estimation of Stokes vector images based on a non-local means approach,” J. Opt. Soc. Am. A 29, 2028–2037 (2012).

6. A. Buades, B. Coll, and J. Morel, “A review of image denoising algorithms, with a new one,” Multiscale Model. Simul. 4, 490–530 (2005).

7. Z. Xing, “On the deterministic and nondeterministic Mueller matrix,” J. Mod. Opt. 39, 461–484 (1992).

8. C. Givens and A. Kostinski, “A simple necessary and sufficient condition on physically realizable Mueller matrices,” J. Mod. Opt. 40, 471–481 (1993).

9. C. Vandermee, “An eigenvalue criterion for matrices transforming Stokes parameters,” J. Math. Phys. 34, 5072–5088 (1993).

10. D. Anderson and R. Barakat, “Necessary and sufficient conditions for a Mueller matrix to be derivable from a Jones matrix,” J. Opt. Soc. Am. A 11, 2305–2319 (1994).

11. A. Gopala Rao, K. Mallesh, and Sudha, “On the algebraic characterization of a Mueller matrix in polarization optics – I. Identifying a Mueller matrix from its N matrix,” J. Mod. Opt. 45, 955–987 (1998).

12. S. Cloude and E. Pottier, “Concept of polarization entropy in optical scattering: polarization analysis and measurement,” Opt. Eng. 34, 1599–1610 (1995).

13. A. Aiello, G. Puentes, D. Voigt, and J. P. Woerdman, “Maximum-likelihood estimation of Mueller matrices,” Opt. Lett. 31, 817–819 (2006).

14. C. Deledalle, L. Denis, and F. Tupin, “Iterative weighted maximum likelihood denoising with probabilistic patch-based weights,” IEEE Trans. Image Process. 18, 2661–2672 (2009).

15. S. Boyd and L. Vandenberghe, Convex Optimization (Cambridge University Press, 2004).

16. G. Stewart, Matrix Algorithms, Vol. II: Eigensystems (SIAM, 2001).

17. T. Coleman and Y. Li, “On the convergence of reflective Newton methods for large-scale nonlinear minimization subject to bounds,” Math. Program. 67, 189–224 (1994).

18. T. Coleman and Y. Li, “An interior, trust region approach for nonlinear minimization subject to bounds,” SIAM J. Optim. 6, 418–445 (1996).

19. G. H. Golub and C. F. Van Loan, Matrix Computations, 3rd ed. (The Johns Hopkins University Press, 1996).
