Abstract

In this paper, we extend to the Mueller imaging framework a previously introduced Bayesian approach to polarimetric data reduction and robust clustering of polarization-encoded images in the piecewise-constant case. The extension is made possible by a suitable formulation of the observation model in the Mueller context, relying on the system's coherency matrix and its Cholesky decomposition, so that the physical admissibility constraints are easily captured. This generalization comes at the cost of nonlinearity with respect to the parameters to be estimated. The resulting estimation-clustering problem is tackled in a Bayesian framework using a hierarchical stochastic model based on a Potts Markov random field. This fully unsupervised approach is extensively tested on synthetic data as well as real Mueller images.
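The central idea of the Cholesky parameterization mentioned above is that $H = \Lambda \Lambda^{t}$ with $\Lambda$ lower triangular is Hermitian positive semidefinite by construction, so any choice of the sixteen real $\lambda_i$'s yields a physically admissible coherency matrix. The following is a minimal Python sketch of that idea; the function names, the 1/4 normalization, and the conjugation convention in the Pauli expansion are our own assumptions, not taken from the paper.

```python
import numpy as np

# Pauli basis (one common ordering); conventions vary across the literature.
SIGMA = [
    np.eye(2),
    np.array([[1, 0], [0, -1]]),
    np.array([[0, 1], [1, 0]]),
    np.array([[0, -1j], [1j, 0]]),
]

def coherency_from_cholesky(lam):
    """Build H = Lambda Lambda^dagger from 16 real parameters lam[0..15].

    Lambda is lower triangular, so H is Hermitian positive semidefinite by
    construction: this is how the parameterization enforces admissibility.
    """
    l = np.asarray(lam, dtype=float)
    Lam = np.array([
        [l[0],              0,                 0,                0],
        [l[4] + 1j*l[5],    l[1],              0,                0],
        [l[10] + 1j*l[11],  l[6] + 1j*l[7],    l[2],             0],
        [l[14] + 1j*l[15],  l[12] + 1j*l[13],  l[8] + 1j*l[9],   l[3]],
    ])
    return Lam @ Lam.conj().T

def mueller_from_coherency(H):
    """Mueller elements under the assumed convention
    H = (1/4) sum_kl m_kl (sigma_k x sigma_l^*)."""
    M = np.empty((4, 4))
    for k in range(4):
        for l in range(4):
            M[k, l] = np.real(np.trace(H @ np.kron(SIGMA[k], SIGMA[l].conj())))
    return M
```

Because every $H$ produced this way is positive semidefinite, an optimizer can search freely over the $\lambda_i$'s without explicit admissibility constraints, which is the point of the reformulation.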

© 2008 Optical Society of America






Figures (9)

Fig. 1.

The directed acyclic graph representing the dependence relations between the variables. Following the usual conventions, square boxes represent fixed or observed quantities, whereas circles represent quantities to be estimated. More precisely, $\beta$ is the abstract inverse temperature parameter of the Markov random field, $\sigma_\lambda$ is the standard deviation of the $\lambda_i$'s, $\rho$ is related to the quantum efficiency of the camera, $I^0$ is the underlying class segmentation map, $\Phi$ is the Mueller parameter matrix encompassing the $\lambda_i(k)$'s, $\Theta$ is the matrix encompassing the parameters of the noise distribution, and $I$ is the matrix of intensity observations.

Algorithm 1:

Optimization phase 1: initialization.

Algorithm 2:

Optimization phase 2: refinement. In algorithm (a), all pixels of a given block are forced to the same label.

Fig. 2.

The 128 × 128 pixel mask used to generate the synthetic Mueller images. Each gray level (white, light gray, heavy gray, and black) corresponds to one label.

Fig. 3.

The 16 channels of the synthetic Mueller image obtained by the commonly used pseudo-inverse estimate, for noise of variance $\sigma_n^2 = 0.5$.

Fig. 4.

The optimal number of classes for the simulated data. The graph corresponds to $\sigma_n^2 = 0.5$.

Fig. 5.

The Mueller image of the test object as obtained from classical pseudo-inverse PDR. The image in the upper-left ($m_{00}$) element corresponds to a conventional intensity image.

Fig. 6.

The final map obtained by the Bayesian approach (Potts model). Each gray level corresponds to a different class, associated with a different Mueller matrix. The circular cross section delimits the footprint of the illuminating beam. Classes (2) and (3) are the dichroic patches, and class (4) corresponds to the thin cellophane film.

Fig. 7.

The optimal number of classes for the true image. We observe that the graph is consistent with the a priori knowledge of the object.

Tables (4)

Table 1. The four different matrices assigned to the classes of the label map of Fig. 2. Matrix (a) was assigned to the black pixels, matrix (b) to the heavy gray pixels, matrix (c) to the light gray pixels and matrix (d) to the white pixels in the label map

Table 2. Mean percentage of misclassified pixels for the Bayesian approach (with Markov random field prior, $\hat\beta$, and without, $\beta = 0$) and for K-means clustering applied to the intensity images. Averaging has been done over noise realizations. Numbers in parentheses are the standard deviations of the related quantities.

Table 3. Mean values of the relative error RE (in percent) of the four Mueller matrices corresponding to the four classes of Fig. 2. Averaging has been done over noise realizations. The first values correspond to $\sigma_n^2 = 0.1$, the second values to $\sigma_n^2 = 0.5$. K-means was used to derive a segmentation map over which the pseudo-inverse estimate was class-wise averaged, yielding the $\tilde{\bar{M}}_k$'s. We enforced physical admissibility over those averages according to $\hat{\bar{M}}_k = T \hat{\bar{H}}_k$, with $\hat{H}_k = \hat{\Lambda}_k \hat{\Lambda}_k^{t}$ and $\hat{\Lambda}_k = \arg\min_\Lambda \| \tilde{\bar{M}}_k - T\, \overline{\Lambda \Lambda^{t}} \|^2$, the minimization being carried out under the admissibility conditions.

Table 4. The five retrieved Mueller matrices that correspond to the label map output of our algorithm. These matrices have to be interpreted with the class numbering that is shown in Fig. 6.
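The admissibility-enforcement step described in the caption of Table 3 can be approximated by a simple eigenvalue-clipping projection on the coherency matrix. The sketch below is a common variant of such a projection, not the paper's exact constrained minimization; the Pauli ordering and the 1/4 normalization with conjugation are our own assumptions.

```python
import numpy as np

# Pauli basis (one common ordering); conventions vary across the literature.
SIGMA = [np.eye(2),
         np.array([[1, 0], [0, -1]]),
         np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]])]

def project_to_admissible(M):
    """Project an estimated Mueller matrix onto the admissible set.

    Recipe: map M to its coherency matrix H (assumed convention
    H = (1/4) sum_kl m_kl (sigma_k x sigma_l^*)), clip the negative
    eigenvalues of H to zero, and map back. The output has a positive
    semidefinite coherency matrix by construction.
    """
    H = sum(M[k, l] * np.kron(SIGMA[k], SIGMA[l].conj())
            for k in range(4) for l in range(4)) / 4
    w, V = np.linalg.eigh(H)
    H_psd = (V * np.clip(w, 0, None)) @ V.conj().T  # clip negative eigenvalues
    # Back to Mueller elements: m_kl = Tr(H (sigma_k x sigma_l^*)).
    M_adm = np.empty((4, 4))
    for k in range(4):
        for l in range(4):
            M_adm[k, l] = np.real(np.trace(H_psd @ np.kron(SIGMA[k], SIGMA[l].conj())))
    return M_adm
```

An already admissible matrix passes through unchanged, and the projection is idempotent, which makes it a convenient post-processing step for noisy class-wise averages.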

Equations (22)

Equations on this page are rendered with MathJax.

$$ I(j_y, j_x) = A\, M(j_y, j_x)\, W, $$

$$ \bar{I}(j_y, j_x) = (W^t \otimes A)\, \bar{M}(j_y, j_x) = P(\eta)\, \bar{M}(j_y, j_x), $$

$$ \bar{I}(j_y, j_x) = (1 + \rho)\, P(\eta)\, \bar{M}(j_y, j_x) + \varepsilon(j_y, j_x), $$

$$ H = \sum_{k,l=0}^{3} m_{kl}\, [\sigma_k \otimes \sigma_l], $$

$$ \bar{M} = T \bar{H}, $$

$$ H = \Lambda \Lambda^{t}, $$

$$ \Lambda = \begin{bmatrix} \lambda_0 & 0 & 0 & 0 \\ \lambda_4 + i\lambda_5 & \lambda_1 & 0 & 0 \\ \lambda_{10} + i\lambda_{11} & \lambda_6 + i\lambda_7 & \lambda_2 & 0 \\ \lambda_{14} + i\lambda_{15} & \lambda_{12} + i\lambda_{13} & \lambda_8 + i\lambda_9 & \lambda_3 \end{bmatrix}. $$

$$ \bar{I}(j_y, j_x) = (1 + \rho)\, P(\eta)\, T\, \bar{H}(\{\lambda_i\};\, j_y, j_x) + \varepsilon(j_y, j_x). $$

$$ I^0 = \big\{ I^0(j_y, j_x) \;;\; I^0(j_y, j_x) \in [1, K],\; j_y \in [1, n],\; j_x \in [1, m] \big\}, $$

$$ p(I^0 \mid \beta) = \frac{1}{Z(\beta)} \exp\big(\beta\, U(I^0)\big), $$

$$ U(I^0) = \sum_{t \sim s} \mathbb{I}\big(I^0(t) = I^0(s)\big), $$

$$ \frac{d \log Z}{d\beta} = \frac{1}{Z}\frac{dZ}{d\beta} = \frac{\sum_{I^0} U(I^0)\, \exp\big(\beta\, U(I^0)\big)}{Z} = E\big[U(I^0)\big], $$

$$ \log Z = nm \log K + \int_0^{\beta} E\big[U(I^0)\big]\, d\beta. $$

$$ \Phi = \begin{bmatrix} \lambda_0(1) & \cdots & \lambda_0(K) \\ \vdots & & \vdots \\ \lambda_{15}(1) & \cdots & \lambda_{15}(K) \end{bmatrix}, $$

$$ \varepsilon(j_y, j_x, j_n) \mid I^0(j_y, j_x) \;\sim\; \mathcal{N}\Big( \mu_{I^0(j_y, j_x)}(j_n),\; \big(\sigma_{I^0(j_y, j_x)}(j_n)\big)^2 \Big), $$

$$ \Theta(j_n) = \begin{bmatrix} \mu_1(j_n) & \mu_2(j_n) & \cdots & \mu_K(j_n) \\ \sigma_1(j_n) & \sigma_2(j_n) & \cdots & \sigma_K(j_n) \end{bmatrix}. $$

$$ p(I, \Phi, I^0 \mid \rho, \Theta, \beta, \sigma_\lambda) = p(I \mid \Phi, I^0, \rho, \Theta)\; p(\Phi \mid \sigma_\lambda)\; p(I^0 \mid \beta). $$

$$ \begin{cases} \lambda_i \sim \mathcal{N}(0, \sigma_\lambda^2)\, \mathbb{I}(\lambda_i \ge 0) & i \in [0, 3], \\ \lambda_i \sim \mathcal{N}(0, \sigma_\lambda^2) & i \in [4, 15]. \end{cases} $$

$$ p(I \mid \Phi, I^0, \rho, \Theta) = p(\varepsilon) = p\Big( \bar{I} - (1 + \rho)\, P(\eta)\, T\, \bar{H}(\{\lambda_i\}) \Big) = \prod_{j_y, j_x, j_n} \mathcal{N}\Big( \zeta(j_y, j_x, j_n),\; \big(\sigma_{I^0(j_y, j_x)}(j_n)\big)^2 \Big), $$

$$ \zeta(j_y, j_x, j_n) = I(j_y, j_x, j_n) - (1 + \rho)\, \Big[ P(\eta)\, T\, \bar{H}\big( \Phi[I^0(j_y, j_x)];\, j_y, j_x \big) \Big][j_n]. $$

$$ \mathrm{RE} = \frac{\| M_{\text{true}} - M_{\text{estimated}} \|}{\| M_{\text{true}} \|}. $$

$$ M_2 - M_3 = \begin{bmatrix} 1 & 0.03 & 0.01 & 0.00 \\ 0.05 & 0.00 & 0.00 & 0.00 \\ 0.02 & 0.00 & 0.00 & 0.00 \\ 0.00 & 0.00 & 0.00 & 0.00 \end{bmatrix}, $$
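The Potts energy $U(I^0)$ and the partition-function identity above lend themselves to a direct numerical check on tiny grids. The sketch below is our own illustration, not the paper's code: it counts equal-label 4-neighbour pairs and brute-forces $Z(\beta)$ by enumeration, so that $\log Z(0) = nm \log K$ can be verified.

```python
from itertools import product

import numpy as np

def potts_energy(label_map):
    """U(I0): number of 4-neighbour pairs (t ~ s) with equal labels."""
    I0 = np.asarray(label_map)
    horizontal = np.sum(I0[:, 1:] == I0[:, :-1])
    vertical = np.sum(I0[1:, :] == I0[:-1, :])
    return int(horizontal + vertical)

def log_partition(beta, n, m, K):
    """Brute-force log Z(beta) = log sum_{I0} exp(beta U(I0)); tiny grids only."""
    energies = np.array([potts_energy(np.reshape(cfg, (n, m)))
                         for cfg in product(range(K), repeat=n * m)])
    return float(np.log(np.sum(np.exp(beta * energies))))
```

For image-sized grids the sum over the $K^{nm}$ configurations is of course intractable, which is why the paper evaluates $\log Z$ through the integral of $E[U(I^0)]$ over the inverse temperature, with the expectation estimated by sampling.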
