## Abstract

We develop a method for obtaining 3D polarimetric integral images from elemental images recorded in low light illumination conditions. Since photon-counting images are very sparse, calculation of the Stokes parameters and the degree of polarization must be handled carefully. In our approach, polarimetric 3D integral images are generated using maximum likelihood estimation and subsequently denoised by means of a total variation filter. In this way, polarimetric results are comparable to those obtained in conventional illumination conditions. We also show that polarimetric information retrieved from photon starved images can be used in 3D object recognition problems. To the best of our knowledge, this is the first report on 3D polarimetric photon-counting integral imaging.

© 2015 Optical Society of America

## 1. Introduction

The vector nature of the electromagnetic field can be utilized to provide useful information for explaining a large variety of problems involving the interaction of light with matter [1,2]. Analysis of the polarization properties of light makes available data that remain hidden to conventional devices able to record only scalar parameters such as irradiance or wavelength. In particular, polarimetric imaging systems provide extra information that can be used for classification, segmentation, or pattern recognition, among many other applications [3,4]. The Stokes parameters and the Degree of Polarization (DoP) are a conventional means to display polarization information in a scene [5]. The experimental procedure for obtaining the Stokes distributions is straightforward but requires recording six different images, which may require modifying the polarization setup each time. This fact does not pose practical difficulties provided the scene is still and properly illuminated, but this approach is not valid in dynamic or low light conditions. Since the information contained in the six polarimetric images is redundant, Azzam demonstrated that only four measurements are required for obtaining full polarimetric data [6,7]. Recently, many authors have concentrated their efforts on developing one-shot polarimeters. These devices generate the four polarimetric distributions using birefringent components or specially designed camera sensors [8–11]. This technology is becoming commercial and nowadays different companies are developing real-time single-shot polarimetric cameras.

Much attention has been given to 3D imaging systems based on integral imaging [12–14]. In particular, some authors have focused on developing solutions for different practical problems related to integral imaging implementation [15–22]. Integral imaging of extremely low light level scenes using photon-counting elemental images has proved to be a particularly interesting topic. Several papers have demonstrated the feasibility of 3D image visualization using a low number of photons [23–26]. Moreover, polarization and 3D data are closely related: for instance, 3D information can be derived from the analysis of the state of polarization of the light reflected by the analyzed object [4,27]. Integral imaging using polarized light was suggested in [28,29], and more recently some authors reported the use of a set of multiple perspective 2D polarimetric images to reconstruct the DoP of the corresponding 3D integral image [30].

In this paper, we propose 3D polarimetric integral imaging in low light illumination conditions. Since photon-counting images are very sparse, calculation of the Stokes parameters and the DoP using the conventional formulas becomes challenging. Polarimetric photon-counting 3D integral images are generated using the maximum likelihood estimation [23]. This approach is equivalent to calculating the average of the photon-counting elemental images but the 3D image reconstruction becomes noisy when there are few photons in the scene. In order to reduce the amount of noise, the 3D integral imaging distributions are subsequently reconstructed by means of total variation algorithms [31–34]. Finally, Stokes parameters and the DoP are calculated [5].

The paper is organized as follows: in section 2, we describe the procedure for generating polarimetric 3D integral images in photon starved conditions using photon-counting elemental images. In section 3, the experimental setup for acquiring polarimetric elemental images is introduced. A method for estimating the DoP in photon-counting 3D integral imaging is proposed in section 4. Also, as an additional application of the proposed approach, we show how to take advantage of polarization information for pattern recognition purposes. Finally, the main conclusions are summarized in section 5.

## 2. Photon-counting polarimetric 3D InI

Let us first consider a quasi-monochromatic paraxial electromagnetic field propagating in the *z*-axis direction; therefore, the electric field is transverse to the direction *z*, i.e.:

$$\mathbf{E}\left(\mathbf{r},t\right)=\left({E}_{x}\left(\mathbf{r},t\right),{E}_{y}\left(\mathbf{r},t\right),0\right).$$

The Stokes parameters at a point **r** are defined as [1,5]:

$${S}_{0}=\left\langle {E}_{x}{E}_{x}^{\ast}\right\rangle +\left\langle {E}_{y}{E}_{y}^{\ast}\right\rangle ,\quad {S}_{1}=\left\langle {E}_{x}{E}_{x}^{\ast}\right\rangle -\left\langle {E}_{y}{E}_{y}^{\ast}\right\rangle ,\quad {S}_{2}=\left\langle {E}_{x}{E}_{y}^{\ast}\right\rangle +\left\langle {E}_{y}{E}_{x}^{\ast}\right\rangle ,\quad {S}_{3}=i\left(\left\langle {E}_{y}{E}_{x}^{\ast}\right\rangle -\left\langle {E}_{x}{E}_{y}^{\ast}\right\rangle \right),$$

where $\left\langle \cdot \right\rangle$ denotes a time average over the integration time *T*, i.e.:

$$\left\langle f\right\rangle =\underset{T\to \infty }{\lim }\frac{1}{2T}{\int }_{-T}^{T}f\left(t\right)\,dt.$$

A set of six polarimetric images (*I*^{0°,0}, *I*^{90°,0}, *I*^{45°,0}, *I*^{135°,0}, *I*^{45°,π/2} and *I*^{135°,π/2}) has to be recorded to calculate the four Stokes distributions (*S _{0}, S_{1}, S_{2}, S_{3}*). *I*^{α,0} stands for the recorded intensity when the linear polarizer is set at an angle *α* with respect to the *x* direction, and *I*^{α,π/2} is the image recorded when a quarter wave-plate is used in addition to the polarizer. The Stokes parameters and the DoP are calculated by means of [1,5]:

$${S}_{0}={I}^{0°,0}+{I}^{90°,0},\quad {S}_{1}={I}^{0°,0}-{I}^{90°,0},\quad {S}_{2}={I}^{45°,0}-{I}^{135°,0},\quad {S}_{3}={I}^{45°,\pi /2}-{I}^{135°,\pi /2},\quad \mathrm{DoP}=\frac{\sqrt{{S}_{1}^{2}+{S}_{2}^{2}+{S}_{3}^{2}}}{{S}_{0}}.$$
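Eq. (4) maps directly onto pixel-wise array operations. The following sketch (function and argument names are ours, not from the paper) computes the Stokes images and the DoP from the six recorded intensity images; a small constant guards the division where *S _{0}* vanishes:

```python
import numpy as np

def stokes_dop(i0, i90, i45, i135, i45q, i135q, eps=1e-12):
    """Stokes images and degree of polarization from the six
    polarimetric intensity images; the *q arrays are the two
    quarter-wave-plate measurements (alpha, pi/2)."""
    s0 = i0 + i90
    s1 = i0 - i90
    s2 = i45 - i135
    s3 = i45q - i135q
    dop = np.sqrt(s1**2 + s2**2 + s3**2) / (s0 + eps)
    return s0, s1, s2, s3, dop
```

For light fully linearly polarized along *x*, the sketch returns *S _{1}* = *S _{0}* and DoP = 1, as expected.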

An integral image is calculated as the average of shifted elemental images recorded from different perspectives of a 3D scene using a single camera with an array of lenslets, an array of cameras, or a camera moved to different locations while the scene remains in the field of view. Integral imaging can be extended to 3D polarimetric integral imaging by recording the polarimetric distributions ${i}_{k,l}^{0°,0}$, ${i}_{k,l}^{90°,0}$, ${i}_{k,l}^{45°,0}$, ${i}_{k,l}^{135°,0}$, ${i}_{k,l}^{45°,\pi /2}$, and ${i}_{k,l}^{135°,\pi /2}$ for each elemental image of the scene indexed by *k*,*l*. Then, these sets of polarimetric elemental images are used to calculate the corresponding polarimetric 3D integral distributions *I ^{α, β}* according to the following equation:

$${I}^{\alpha ,\beta }\left(x,y;z\right)=\frac{1}{{N}_{x}{N}_{y}}\sum _{k=0}^{{N}_{x}-1}\sum _{l=0}^{{N}_{y}-1}{i}_{k,l}^{\alpha ,\beta }\left(x-k\frac{p\,f}{{c}_{x}\,z},\;y-l\frac{p\,f}{{c}_{y}\,z}\right),$$

where *N _{x}* and *N _{y}* are the number of elemental images in the *x*- and *y*-directions, *c _{x}* × *c _{y}* is the size of the CCD pixel, *p* is the relative displacement of the camera for recording each elemental image, *f* is the focal length of the objective lens, and *z* is the pick-up distance between the scene and the camera. Finally, the 3D integral imaging Stokes parameters (*S _{0}, S_{1}, S_{2}, S_{3}*) are calculated using again Eq. (4).
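The shift-and-average reconstruction described above can be sketched in a few lines. This is a minimal illustration (names are ours), assuming integer pixel shifts and periodic boundary handling via `np.roll`, a simplification of real implementations, which crop or weight the overlap region:

```python
import numpy as np

def reconstruct(elementals, z, p, f, cx, cy):
    """Shift-and-average integral-imaging reconstruction at depth z.
    elementals[k][l] is the (k,l) elemental image, p the camera pitch,
    f the focal length, and cx, cy the CCD pixel size."""
    Nx, Ny = len(elementals), len(elementals[0])
    acc = np.zeros_like(elementals[0][0], dtype=float)
    for k in range(Nx):
        for l in range(Ny):
            sx = int(round(k * p * f / (cx * z)))   # pixel shift in x
            sy = int(round(l * p * f / (cy * z)))   # pixel shift in y
            acc += np.roll(elementals[k][l], (-sy, -sx), axis=(0, 1))
    return acc / (Nx * Ny)
```

Objects at the chosen depth *z* add coherently across the shifted elemental images, while out-of-plane content is blurred by the averaging.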

If a system works in very low light illumination conditions, irradiance is recorded according to the photon-counting model. In these conditions, it is assumed that the image is statistically modeled by the Poisson distribution [35]:

$$P\left(m;{n}_{k,l}^{\alpha ,\beta }\left(x,y\right)\right)=\frac{{\left[{n}_{k,l}^{\alpha ,\beta }\left(x,y\right)\right]}^{m}{e}^{-{n}_{k,l}^{\alpha ,\beta }\left(x,y\right)}}{m!},$$

where *m* is the number of photons detected at pixel (*x*, *y*) and ${n}_{k,l}^{\alpha ,\beta}\left(x,y\right)$ is the normalized irradiance:

$${n}_{k,l}^{\alpha ,\beta }\left(x,y\right)={N}_{P}\frac{{i}_{k,l}^{\alpha ,\beta }\left(x,y\right)}{\sum _{x,y}{i}_{k,l}^{\alpha ,\beta }\left(x,y\right)},$$

where *N _{P}* is the predetermined number of photon counts in the entire scene. Using the maximum likelihood estimate for integral imaging as shown in [23], the irradiance of the 3D object is reconstructed by calculating the average of the photon-counting normalized irradiance ${\widehat{i}}_{k,l}^{\alpha ,\beta}$ of the polarimetric elemental images ${i}_{k,l}^{\alpha ,\beta}$, i.e.

$${\widehat{I}}^{\alpha ,\beta }\left(x,y;z\right)=\frac{1}{{N}_{x}{N}_{y}}\sum _{k=0}^{{N}_{x}-1}\sum _{l=0}^{{N}_{y}-1}{\widehat{i}}_{k,l}^{\alpha ,\beta }\left(x-k\frac{p\,f}{{c}_{x}\,z},\;y-l\frac{p\,f}{{c}_{y}\,z}\right).$$
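The photon-counting model and the maximum likelihood estimate [Eq. (8)] can be emulated numerically: each elemental image is normalized so that its expected total count is *N _{P}*, Poisson counts are drawn at every pixel, and the estimate reduces to an average over the photon-counting frames. The sketch below (names are ours) omits the reconstruction shifts for brevity:

```python
import numpy as np

def photon_count(image, n_p, rng):
    """Draw a photon-counting frame: normalize the irradiance so the
    expected total number of counts is n_p, then sample Poisson noise."""
    n = n_p * image / image.sum()
    return rng.poisson(n)

def mle_estimate(frames):
    """Maximum likelihood estimate of the irradiance: the average of
    the photon-counting frames."""
    return np.mean(frames, axis=0)
```

With many frames the average converges to the normalized irradiance, while a single frame at, say, 0.01 photons/pixel is almost entirely zeros, which is the regime studied below.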

## 3. Polarimetric 3D InI

A sketch of the integral imaging system used for recording the polarimetric elemental images is shown in Fig. 1. The information coming from the 3D scene is recorded using the synthetic aperture integral imaging approach [18]. A camera is assembled on a two-axis translation stage and the elemental images are recorded after shifting the camera at periodic positions. Stokes parameters can be measured provided a linear polarizer and a quarter wave plate are set in front of the camera, according to what was described in the previous section.

The scene is composed of two toy cars located at *z* = 530 mm from the camera. One of the cars is partially occluded by a tree at *z* = 450 mm. The scene was illuminated using natural light, but since the quarter wave plate was designed for a wavelength λ = 543 nm, only the green channel of the camera is taken into account. The camera recorded the 6 required polarimetric distributions from 6 × 6 different perspective positions. Full details of the pick-up process for this particular set of images can be found in [30]. Table 1 summarizes the values of the different variables required to perform integral imaging calculations.

Figure 2 shows the results of a preliminary test without photon counting imaging. Figure 2(a) displays the reconstruction of the 3D scene with integral imaging at *z* = 530 mm using conventional images, that is, those obtained without using the polarizer and the quarter wave plate. Figure 2(b) shows the polarimetric integral imaging results with the DoP corresponding to the image of Fig. 2(a) using the *jet* color map. The light reflected on the background and the trees is non-polarized (navy-blue pixels); the light coming from the car frames is partially polarized (cyan to yellow) whereas the windshield of the right car totally polarizes the wave-front (red).

## 4. DoP estimation in photon-counting 3D integral imaging

The recording device modeled for generating polarimetric photon-counting images is a binary photon-counting camera. Gray level images can be produced with such cameras, but the output is then the result of averaging several frames [36]. In this paper, we take into account the distribution of photons corresponding to a single frame.

As described in section 2, the six polarimetric photon-counting integral imaging distributions *Î ^{α,β}* are calculated using the maximum likelihood approach [Eq. (8)]. Then, the Stokes images (*S _{0}, S_{1}, S_{2}, S_{3}*) and the DoP could be estimated by means of Eq. (4). However, when the images are recorded with very few photons, the maximum likelihood estimate produces noisy reconstructed scenes with limited dynamic range. For illustrative purposes, Fig. 3 shows the integral imaging reconstruction using maximum likelihood estimation when few photons in each elemental image are used.

Since the DoP is a nonlinear combination of the Stokes parameters, the noise present in the integral imaging reconstruction due to the low number of photons is amplified, thus producing a wrong measure of the polarization state. Figure 4 shows the DoP obtained using the maximum likelihood estimation [Eqs. (8) and (4)]. It is apparent that when the number of photons per pixel is set to 0.01 [see Fig. 4(a)], the entire scene seems to be completely polarized. For a larger number of photons [see Fig. 4(b)], the result improves. However, a visual comparison between Figs. 2(b) and 4(b) indicates that the latter still displays high polarization values in the background areas.

In order to provide a comparative measure of the quality of integral imaging reconstructions, the Mean Structural Similarity Index (MSSIM) is used [37]. MSSIM is designed to provide a consistent comparison between two images taking into account how human visual perception works. Moreover, it provides a better assessment when compared with other evaluation methods such as the peak signal-to-noise ratio or the mean square error. To compare two different images *X* and *Y* using MSSIM, the algorithm subdivides the images into a set of *M* 8×8 pixel windows {*x _{j}*} and {*y _{j}*}. The local structural similarity between two corresponding subimages *x _{j}* and *y _{j}* is calculated according to

$$\mathrm{SSIM}\left({x}_{j},{y}_{j}\right)=\frac{\left(2{\mu }_{{x}_{j}}{\mu }_{{y}_{j}}+{c}_{1}\right)\left(2{\sigma }_{{xy}_{j}}+{c}_{2}\right)}{\left({\mu }_{{x}_{j}}^{2}+{\mu }_{{y}_{j}}^{2}+{c}_{1}\right)\left({\sigma }_{{x}_{j}}^{2}+{\sigma }_{{y}_{j}}^{2}+{c}_{2}\right)},$$

where *μ _{xj}* and *μ _{yj}* are the averages of subimages *x _{j}* and *y _{j}*, and *σ _{xj}*^{2}, *σ _{yj}*^{2} and *σ _{xyj}* are the variances of *x _{j}* and *y _{j}* and their covariance, respectively. *c _{1}* and *c _{2}* are two adjustable parameters related to the square of the dynamic range *L* of the pixel values, i.e. *c _{1}* = (*k _{1}L*)^{2} and *c _{2}* = (*k _{2}L*)^{2}. In [37] the authors suggested using *k _{1}* = 0.01 and *k _{2}* = 0.03. Then, MSSIM is calculated as the average of the SSIM values for the *M* local windows:

$$\mathrm{MSSIM}\left(X,Y\right)=\frac{1}{M}\sum _{j=1}^{M}\mathrm{SSIM}\left({x}_{j},{y}_{j}\right).$$

MSSIM(*X*,*Y*) = 1 only when both images are identical. Table 2 shows the MSSIM values for the integral imaging reconstruction shown in Fig. (3) and the estimation of the DoP using maximum likelihood shown in Fig. (4). Reference images are those presented in Fig. (2). Note that MSSIM values for DoP are extremely low.
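The MSSIM computation can be transcribed directly over non-overlapping 8×8 windows. This is a simplified sketch ([37] actually uses a sliding window with Gaussian weighting); the function name and the dynamic-range default are ours:

```python
import numpy as np

def mssim(X, Y, k1=0.01, k2=0.03, win=8, L=1.0):
    """Mean structural similarity over non-overlapping win x win
    windows; L is the dynamic range of the pixel values."""
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    vals = []
    for i in range(0, X.shape[0] - win + 1, win):
        for j in range(0, X.shape[1] - win + 1, win):
            x = X[i:i + win, j:j + win].ravel()
            y = Y[i:i + win, j:j + win].ravel()
            mx, my = x.mean(), y.mean()
            vx, vy = x.var(), y.var()                 # local variances
            vxy = ((x - mx) * (y - my)).mean()        # local covariance
            vals.append(((2 * mx * my + c1) * (2 * vxy + c2)) /
                        ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
    return float(np.mean(vals))
```

By construction the index equals 1 for identical images and drops toward 0 as local means, variances, and covariance diverge.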

Different methods have been proposed for improving photon-counting integral imaging reconstruction using statistical approaches [24–26]. In this paper, we use total variation denoising. These filters have been demonstrated to be very efficient in removing noise from the image while preserving the information content of the signal [31,32]. Among the different algorithms for total variation, in this paper we use the Chambolle approach [33] implemented in the scikit-image library [34]. Let *u _{0}* and *u* be the noisy and the reconstructed (target) images, respectively. Using the nomenclature introduced in [31], the total variation denoising problem consists in finding an image *u* that minimizes the following functional:

$$E\left(u\right)=\int \left|\nabla u\right|\,dx\,dy+\gamma \int {\left(u-{u}_{0}\right)}^{2}\,dx\,dy,$$

where *γ* is a regularization parameter.
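The paper relies on Chambolle's dual algorithm as provided by `skimage.restoration.denoise_tv_chambolle`. Purely to illustrate the total variation functional itself, the following is a plain gradient-descent sketch; the step size, iteration count, and smoothing constant `eps` are our choices, not part of the Chambolle scheme:

```python
import numpy as np

def tv_denoise(u0, gamma=0.1, n_iter=100, tau=0.1, eps=1e-8):
    """Minimize E(u) = int |grad u| + gamma * int (u - u0)^2 by
    explicit gradient descent (illustrative only, not Chambolle's
    dual algorithm); periodic boundaries via np.roll."""
    u = u0.astype(float).copy()
    for _ in range(n_iter):
        ux = np.roll(u, -1, axis=1) - u               # forward differences
        uy = np.roll(u, -1, axis=0) - u
        norm = np.sqrt(ux ** 2 + uy ** 2 + eps)       # smoothed |grad u|
        px, py = ux / norm, uy / norm
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u += tau * (div - 2.0 * gamma * (u - u0))     # descend E(u)
    return u
```

Larger *γ* keeps the result close to the noisy input; smaller *γ* lets the total variation term dominate and flattens the image.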

Figures 5 and 6 show the total variation-filtered photon-counting 3D images and the corresponding DoP, respectively. The DoP is evaluated by denoising the six photon-counting 3D integral imaging scenes *Î ^{α, β}* [Eq. (8)] using total variation denoising. Then, the Stokes parameters and the DoP [see Eq. (4)] are calculated. Images presented in Figs. 5 and 6 look similar to those obtained in conventional illumination conditions. This subjective perception is corroborated by the values of the MSSIM index presented in Table 3. Note that when 0.05 photons/pixel are used, MSSIM is close to 1 for the 3D reconstructed image and MSSIM ≈ 0.7 for the evaluation of the DoP.

Polarimetric distributions provide additional information about the scene and the target that can be used in pattern recognition problems. To illustrate this concept, we calculate the nonlinear correlation [38,39] for the two DoP photon-counting 3D integral imaging scenes presented in Fig. 4(a) and in Fig. 6(a). The target is the car obtained from the DoP of the reconstructed scene using conventional integral imaging in Fig. 2(b). This reference image is presented in Fig. 7. Correlation is calculated according to the following formula:

$$C\left(x,y\right)={\left|{\mathcal{F}}^{-1}\left\{{\left|D\left(\mu ,\nu \right)\right|}^{d}{e}^{i{\varphi }_{D}\left(\mu ,\nu \right)}{\left|R\left(\mu ,\nu \right)\right|}^{r}{e}^{-i{\varphi }_{R}\left(\mu ,\nu \right)}\right\}\right|}^{2},$$

where $R=\left|R\right|{e}^{i{\varphi }_{R}}$ stands for the Fourier transform of the reference or target image and $D=\left|D\right|{e}^{i{\varphi }_{D}}$ stands for that of the polarimetric scene. Indices *r* and *d* are selected to provide a good discrimination capability with a high peak-to-correlation energy (PCE) while maintaining robust noise performance. In this paper the following values are used: *r* = 0.7 and *d* = 0.
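A Fourier-domain sketch of the nonlinear correlation with indices *r* and *d*, in the spirit of the kth-law scheme of [38] (function names and the PCE helper are ours): the spectral magnitudes of scene and reference are raised to the powers *d* and *r* while their phase difference is preserved.

```python
import numpy as np

def nonlinear_correlation(scene, ref, r=0.7, d=0.0):
    """Nonlinear correlation: magnitudes of the scene and reference
    spectra raised to powers d and r, phase difference preserved."""
    D = np.fft.fft2(scene)
    R = np.fft.fft2(ref, s=scene.shape)              # zero-pad reference
    spec = (np.abs(D) ** d * np.exp(1j * np.angle(D))
            * np.abs(R) ** r * np.exp(-1j * np.angle(R)))
    return np.abs(np.fft.ifft2(spec))

def pce(corr):
    """Peak-to-correlation-energy: peak intensity over the average
    intensity of the correlation plane."""
    return corr.max() ** 2 / np.mean(corr ** 2)
```

With *d* = 0 the scene spectrum contributes only its phase, which is what gives the correlator its high discrimination and sharp peaks.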

Figures 8(a) and 8(c) show the correlation plane for the two cases considered. Note that in both cases the number of photons per pixel is set to 0.01. Correlations are normalized to their corresponding maximum value, and the area within the superimposed yellow rectangle is shown with a 3D plot. It is apparent that the amount of noise is very low when the DoP is reconstructed using the total variation filter. In particular, the peak-to-correlation energy (PCE) obtained with total variation is three times higher than for the scene reconstructed using MLE. The PCE is defined as the ratio of the correlation peak to the total average of the correlation plane intensity. It should be pointed out that a variety of other pattern recognition approaches may be considered [39–41].

## 5. Concluding remarks

A method for estimating the DoP in photon-counting 3D integral images has been developed. Since the information recorded in low light illumination conditions is very sparse, calculation of the Stokes parameters and the DoP becomes a difficult task. The irradiance of the 3D object can be reconstructed by calculating the average of photon-counting elemental images using maximum likelihood estimation. However, direct estimation of the polarimetric parameters produces erroneous results due to noise and limited dynamic range. This fact is corroborated by measuring the structural similarity index between the original and 3D photon-counting reconstructed images. Filters based on total variation denoising algorithms have been demonstrated to be very efficient in removing noise from the image while preserving the information content of the signal. Polarimetric photon-counting 3D reconstructed scenes dramatically improve after being processed by such filters. Moreover, the DoP produces results comparable to those obtained in conventional illumination conditions. Finally, we have shown that polarimetric information retrieved from photon starved images can also be used for pattern recognition purposes.

## Acknowledgments

The authors are indebted to Dr. Xiao Xiao for providing the set of polarimetric elemental images. This work has been partially supported by Ministerio de Economía y Competitividad (Spain) projects FIS2013-46475. This work was supported in part by DARPA and US Army under contract number W911NF-13-1-0485.

## References and links

**1. **E. Wolf, *Introduction to the Theory of Coherence and Polarization of Light*, (Cambridge University Press, 2007).

**2. **G. P. Können, *Polarized Light in Nature* (Cambridge University Press Archive, 1985).

**3. **L. B. Wolff, “Polarization vision: a new sensory approach to image understanding,” Image Vis. Comput. **15**(2), 81–93 (1997). [CrossRef]

**4. **F. A. Sadjadi, “Passive three-dimensional imaging using polarimetric diversity,” Opt. Lett. **32**(3), 229–231 (2007). [CrossRef] [PubMed]

**5. **M. Born and E. Wolf, *Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light* (Cambridge University Press, 1999).

**6. **R. M. A. Azzam, “Division-of-amplitude photopolarimeter (DOAP) for the simultaneous measurement of all four Stokes parameters of light,” Opt. Acta (Lond.) **29**(5), 685–689 (1982). [CrossRef]

**7. **R. M. A. Azzam, “Arrangement of four photodetectors for measuring the state of polarization of light,” Opt. Lett. **10**(7), 309–311 (1985). [CrossRef] [PubMed]

**8. **H. Luo, K. Oka, E. DeHoog, M. Kudenov, J. Schiewgerling, and E. L. Dereniak, “Compact and miniature snapshot imaging polarimeter,” Appl. Opt. **47**(24), 4413–4417 (2008). [CrossRef] [PubMed]

**9. **K. Fujita, Y. Itoh, and T. Mukai, “Development of simultaneous imaging polarimeter for asteroids,” Adv. Space Res. **43**(2), 325–327 (2009). [CrossRef]

**10. **W. Sparks, T. A. Germer, J. W. MacKenty, and F. Snik, “Compact and robust method for full Stokes spectropolarimetry,” Appl. Opt. **51**(22), 5495–5511 (2012). [CrossRef] [PubMed]

**11. **N. J. Brock, C. Crandall, and J. E. Millerd, “Snap-shot imaging polarimeter: performance and applications,” Proc. SPIE **9099**, 909903 (2014). [CrossRef]

**12. **G. Lippmann, “Epreuves reversibles donnant la sensation du relief,” J. Phys. Theor. Appl. **7**(1), 821–825 (1908). [CrossRef]

**13. **H. E. Ives, “Optical properties of a Lippman lenticulated sheet,” J. Opt. Soc. Am. **21**(3), 171–176 (1931). [CrossRef]

**14. **C. B. Burckhardt, “Optimum parameters and resolution limitation of integral photography,” J. Opt. Soc. Am. **58**(1), 71–74 (1968). [CrossRef]

**15. **T. Okoshi, “Three-dimensional displays,” Proc. IEEE **68**(5), 548–564 (1980). [CrossRef]

**16. **H. Hoshino, F. Okano, H. Isono, and I. Yuyama, “Analysis of resolution limitation of integral photography,” J. Opt. Soc. Am. A **15**(8), 2059–2065 (1998). [CrossRef]

**17. **H. Arimoto and B. Javidi, “Integral three-dimensional imaging with digital reconstruction,” Opt. Lett. **26**(3), 157–159 (2001). [CrossRef] [PubMed]

**18. **J. S. Jang and B. Javidi, “Three-dimensional synthetic aperture integral imaging,” Opt. Lett. **27**(13), 1144–1146 (2002). [CrossRef] [PubMed]

**19. **S. H. Hong, J. S. Jang, and B. Javidi, “Three-dimensional volumetric object reconstruction using computational integral imaging,” Opt. Express **12**(3), 483–491 (2004). [CrossRef] [PubMed]

**20. **A. Stern and B. Javidi, “Three-dimensional image sensing, visualization, and processing using integral imaging,” Proc. IEEE **94**(3), 591–607 (2006). [CrossRef]

**21. **R. Martinez-Cuenca, G. Saavedra, M. Martinez-Corral, and B. Javidi, “Progress in 3-D multiperspective display by integral imaging,” Proc. IEEE **97**(6), 1067–1077 (2009). [CrossRef]

**22. **X. Xiao, B. Javidi, M. Martinez-Corral, and A. Stern, “Advances in three-dimensional integral imaging: sensing, display, and applications [Invited],” Appl. Opt. **52**(4), 546–560 (2013). [PubMed]

**23. **B. Tavakoli, B. Javidi, and E. Watson, “Three dimensional visualization by photon counting computational integral imaging,” Opt. Express **16**(7), 4426–4436 (2008). [CrossRef] [PubMed]

**24. **J. Jung, M. Cho, D. K. Dey, and B. Javidi, “Three-dimensional photon counting integral imaging using Bayesian estimation,” Opt. Lett. **35**(11), 1825–1827 (2010). [CrossRef] [PubMed]

**25. **D. Aloni, A. Stern, and B. Javidi, “Three-dimensional photon counting integral imaging reconstruction using penalized maximum likelihood expectation maximization,” Opt. Express **19**(20), 19681–19687 (2011). [CrossRef] [PubMed]

**26. **A. Stern, A. Doroni, and B. Javidi, “Experiments with three-dimensional integral imaging under low light levels,” IEEE Phot. J. **4**(4), 1188–1195 (2012). [CrossRef]

**27. **D. Miyazaki and K. Ikeuchi, “Inverse polarization raytracing: estimating surface shapes of transparent objects,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 2005), pp. 910–917. [CrossRef]

**28. **O. Matoba and B. Javidi, “Three-dimensional polarimetric integral imaging,” Opt. Lett. **29**(20), 2375–2377 (2004). [CrossRef] [PubMed]

**29. **B. Javidi, S. H. Hong, and O. Matoba, “Multidimensional optical sensor and imaging system,” Appl. Opt. **45**(13), 2986–2994 (2006). [CrossRef] [PubMed]

**30. **X. Xiao, B. Javidi, G. Saavedra, M. Eismann, and M. Martinez-Corral, “Three-dimensional polarimetric computational integral imaging,” Opt. Express **20**(14), 15481–15488 (2012). [CrossRef] [PubMed]

**31. **L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D **60**(1-4), 259–268 (1992). [CrossRef]

**32. **T. F. Chan, G. H. Golub, and P. Mulet, “A nonlinear primal-dual method for total variation-based image restoration,” SIAM J. Sci. Comput. **20**(6), 1964–1977 (1999). [CrossRef]

**33. **A. Chambolle, “An algorithm for total variation minimization and applications,” J. Math. Imaging Vis. **20**(1/2), 89–97 (2004). [CrossRef]

**34. **S. van der Walt, J. L. Schönberger, J. Nunez-Iglesias, F. Boulogne, J. D. Warner, N. Yager, E. Gouillart, T. Yu, and scikit-image contributors, “scikit-image: image processing in Python,” PeerJ **2**, e453 (2014). [CrossRef] [PubMed]

**35. **J. W. Goodman, *Statistical Optics* (John Wiley & Sons, 1985).

**36. **F. Yang, Y. M. Lu, L. Sbaiz, and M. Vetterli, “Bits from Photons: Oversampled Image Acquisition Using Binary Poisson Statistics,” IEEE Trans. Image Process. **21**(4), 1421–1436 (2012). [CrossRef] [PubMed]

**37. **Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. **13**(4), 600–612 (2004). [CrossRef] [PubMed]

**38. **B. Javidi, “Nonlinear joint power spectrum based optical correlation,” Appl. Opt. **28**(12), 2358–2367 (1989). [CrossRef] [PubMed]

**39. **S. Yeom, B. Javidi, and E. Watson, “Photon counting passive 3D image sensing for automatic target recognition,” Opt. Express **13**(23), 9310–9330 (2005). [CrossRef] [PubMed]

**40. **M. Cho, A. Mahalanobis, and B. Javidi, “3D passive photon counting automatic target recognition using advanced correlation filters,” Opt. Lett. **36**(6), 861–863 (2011). [CrossRef] [PubMed]

**41. **F. A. Sadjadi and A. Mahalanobis, “Target-adaptive polarimetric synthetic aperture radar target discrimination using maximum average correlation height filters,” Appl. Opt. **45**(13), 3063–3070 (2006). [CrossRef] [PubMed]