
Multispectral integral imaging acquisition and processing using a monochrome camera and a liquid crystal tunable filter

Open Access

Abstract

This paper presents an acquisition system and a procedure to capture 3D scenes in different spectral bands. The acquisition system is formed by a monochrome camera and a Liquid Crystal Tunable Filter (LCTF) that allows images to be acquired at different spectral bands within the [480, 680] nm wavelength interval. The Synthetic Aperture Integral Imaging acquisition technique is used to obtain the elemental images for each wavelength. These elemental images are used to computationally reconstruct the 3D scene at different depth planes. The 3D profile of the acquired scene is also obtained by minimizing the variance of the contributions of the elemental images at each image pixel. Experimental results show the viability of recovering the 3D multispectral information of the scene. The integration of 3D and multispectral information could have important benefits in different areas, including skin cancer detection, remote sensing and pattern recognition, among others.

© 2012 Optical Society of America

1. Introduction

Applications of three-dimensional (3D) optical image sensing and visualization technologies span a wide range of areas including TV broadcasting, 3D displays, entertainment, medical sciences and robotics, to name a few [1–3]. An advantage of 3D over traditional 2D imaging techniques is its capability to capture the structural information of the target.

Integral Imaging (II) is an autostereoscopic imaging method used to capture 3D information and to visualize it optically or computationally [4, 5]. We may consider that II started with the works of G. Lippmann [6] and H. E. Ives [7] at the beginning of the 20th century, but only recently have these principles realized their full potential, thanks to advances in optoelectronic sensors (CMOS and CCD), display devices such as liquid crystal displays (LCDs), and the tremendous increase in computing power [8–10].

Acquisition and visualization of 3D objects can be divided into two steps, called pickup and reconstruction. In the pickup stage, multiple 2D images (elemental images) are captured through an array of small lenses (lenslet array). Each lenslet applies a unique projective transformation that maps the 3D object space onto a 2D elemental image and is a function of the lenslet position and the focal length. As a result, an array of inverted real images is formed on the image sensor. Scene reconstruction can be performed optically or computationally. Computational reconstruction of the 3D image can be achieved by simulating the optical backprojection of the elemental images in the computer. This reconstruction method uses a computer-synthesized virtual pinhole array for the inverse mapping of each elemental image into the object space. All elemental images are then computationally overlapped. With this process, the intensity distribution can be reconstructed at arbitrary planes inside the 3D object volume.

Multispectral imaging consists of the acquisition of images in the portion of the spectrum extending from the visible region through the near-infrared and mid-infrared. These techniques have recently found applications in remote sensing [11–13], medical imaging [14, 15], and fine arts [16], to cite just a few.

The integration of 3D and multispectral information may be important in: a) underwater 3D visualization [17], since water absorption is wavelength dependent; b) dermatology [18], for instance in skin cancer detection, because melanoma pigmented skin lesions are wavelength dependent and have a 3D structure; c) remote sensing applications, where sensors onboard airplanes or satellites may be able to create 3D models that include multispectral information; d) remote sensing pattern recognition (i.e., identification of the 3D structure and the spectral response of an object from a distance); and e) photon-starved or "hard to visualize" conditions (at night, in fog, etc.) [19]. Multispectral 3D reconstruction has already been applied in microscopy [20]. Other 3D acquisition and visualization techniques have already incorporated multispectral information [21–23].

In this paper we present multispectral 3D integral imaging, including scene capture and 3D reconstruction results. The elemental images are acquired with a monochrome camera and a liquid crystal tunable filter (LCTF), applying the Integral Imaging technique at different wavelengths from 480 to 680 nm. Section 2 describes the methods used for the multispectral integral image acquisition. Section 3 explains the methodology applied to obtain the 3D profile of the imaged scene. Section 4 shows the 3D reconstruction results at different depth planes as well as the reconstructed scene profile. Conclusions are given in Section 5.

2. 3D Multispectral integral image acquisition

The multispectral 3D integral image acquisition system used is shown in Fig. 1. It consists of a Marlin F080B monochrome camera, with a 4.80 × 3.62 mm CCD sensor (1032 × 778 pixels), and a Liquid Crystal Tunable Filter (LCTF) covering the [400, 720] nm spectral range, which can acquire up to 33 bands with a 10 nm spectral resolution. A Canon TV zoom lens is attached to the front part of the filter, and a Macro Schneider system [24] is used between the filter and the camera. The illumination system is a Dolan-Jenner MI-150 fiber optic illuminator with a 150 W bulb lamp (3250 K colour temperature). A uniform diffuser is attached to the fiber optic guide in order to homogenize the illumination. The system was adjusted to focus at a distance of 209 mm from the Canon TV zoom lens. The effective focal length of the system was f = 10.35 mm, the lateral magnification was M = 0.052 and the resulting angular field of view was 21.8°. The depth of field was large enough to ensure that all parts of the 3D scene were imaged sharply.

Fig. 1 Integral Imaging data acquisition system setup.

The acquisition time per band for the LCTF is assessed using an ideal diffuse-reflectance target, called a spectralon, whose reflectance curve is flat over the visible part of the spectrum. A Region-Of-Interest (ROI) is selected on the spectralon to avoid the influence of defects in its surface. The mean grey scale value of the ROI per band is used for the calibration: the acquisition time is adjusted so that the same grey scale value is obtained for all wavelength bands. An automatic calibration method was implemented. For a given band, the grey level value is assumed to increase linearly with the acquisition time, provided that the illumination and the distance to the object do not vary. The method consists of selecting two (time, grey level) pairs away from the extreme values and estimating the exposure time from this linear dependence between grey scale value and exposure time.
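The two-point linear estimation just described can be summarized in the following minimal sketch; the function name and the numerical values in the usage line are illustrative assumptions, not measured data from the paper.

```python
def exposure_for_target(t1, g1, t2, g2, g_target):
    """Two-point exposure calibration for one LCTF band: assume the mean
    grey level of the spectralon ROI grows linearly with exposure time,
    g = a*t + b, and return the time expected to reach g_target."""
    a = (g2 - g1) / (t2 - t1)   # slope (grey levels per unit time)
    b = g1 - a * t1             # intercept
    return (g_target - b) / a

# Illustrative values only: two non-extreme (time, grey-level) pairs
# and a mid-range target grey level.
t_needed = exposure_for_target(t1=40.0, g1=70.0, t2=120.0, g2=190.0, g_target=128.0)
```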

2.1. Integral Imaging Image Capture

Lenslet-based II systems are diffraction-limited in spatial resolution due to the small numerical aperture of the lenslets. In essence, three parameters must be considered: the camera pixel size, the lenslet point spread function, and the lenslet depth of focus [25–27]. In addition, aberrations can negatively affect the resolution. Alternatively, II can be performed with a single 2D imaging sensor by scanning the aperture and capturing images at intermittent positions over a large area. This approach is known as synthetic aperture integral imaging and overcomes some of the limitations of traditional lenslet-based integral imaging systems [28, 29]. This is the approach followed in this paper. Other data capture approaches may be used as well; see, for example, [30].

3. Volumetric reconstruction

Once the elemental images for each wavelength were acquired, we applied the computational method developed in [31] to obtain the 3D reconstruction. This method starts from the elemental images and projects them directly through a virtual pinhole array. In particular, for a fixed distance z = L from this pinhole array, each elemental image is inversely projected through each synthesized pinhole in the array. Each inversely projected image is magnified by a factor M, where M is the ratio of the distance between the pinhole array and the reconstruction plane at z = L to the distance between the pinhole array and the elemental image plane (z = g); that is, M = L/g. To obtain the volumetric information, we repeat this process for all the reconstruction planes. Let $EI_{pq}$ denote the elemental image in the pth row and qth column, and $EM_{pq}(x, y, z)$ the elemental image corresponding to $EI_{pq}$ inversely mapped at (x, y, z). It was shown in [31] that:

$$EM_{pq}(x,y,z) = \frac{EI_{pq}\left(s_x p - \frac{x - s_x p}{M},\; s_y q - \frac{y - s_y q}{M}\right)}{(z+g)^2 + \left[(x - s_x p)^2 + (y - s_y q)^2\right]\left(1 + \frac{1}{M}\right)^2} \tag{1}$$

with the following conditions: $s_x\left(p - \frac{M}{2}\right) \le x \le s_x\left(p + \frac{M}{2}\right)$ and $s_y\left(q - \frac{M}{2}\right) \le y \le s_y\left(q + \frac{M}{2}\right)$, where $s_x$ and $s_y$ are the sizes of the elemental image $EI_{pq}$. Summing over all $EM_{pq}$ we obtain the 3D reconstructed image:

$$EM(x,y,z) = \sum_{p=0}^{m-1} \sum_{q=0}^{n-1} EM_{pq}(x,y,z) \tag{2}$$
where m and n denote the number of elemental images in each (x or y) direction. The reconstruction algorithm (Eqs. (1) and (2)) works with elemental images from low to high resolution. Please refer to [31] for further details. The general idea is also illustrated in Fig. 2.
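To illustrate how this backprojection can be implemented, the following is a minimal numpy sketch of the common shift-and-sum form of the reconstruction for one wavelength band. It is a simplification, not the authors' code: the intensity-weighting denominator of Eq. (1) is omitted, shifts are rounded to whole pixels, and the array layout and the disparity relation shift = pitch·g/(z·pixel size) are assumptions.

```python
import numpy as np

def reconstruct_plane(elemental, pitch_mm, g_mm, pixel_mm, z_mm):
    """Shift-and-sum reconstruction at depth z for one wavelength band.

    elemental : (m, n, H, W) array holding the m x n grid of grey-scale
                elemental images.
    pitch_mm  : translation between adjacent acquisition positions.
    g_mm      : distance g between the pinhole array and the image plane.
    pixel_mm  : sensor pixel size.
    z_mm      : reconstruction depth L measured from the pinhole array.
    """
    m, n, H, W = elemental.shape
    # Disparity (in pixels) of a point at depth z between adjacent
    # elemental images: pitch * g / (z * pixel_size).
    shift = pitch_mm * g_mm / (z_mm * pixel_mm)
    recon = np.zeros((H, W))
    count = np.zeros((H, W))
    for p in range(m):
        for q in range(n):
            dy = int(round(p * shift))
            dx = int(round(q * shift))
            recon[dy:, dx:] += elemental[p, q, :H - dy, :W - dx]
            count[dy:, dx:] += 1.0
    # Normalize by the number of overlapping elemental images per pixel.
    return recon / np.maximum(count, 1.0)
```

Evaluating this function over a range of z values yields the stack of depth-plane images used in Section 4.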

Fig. 2 Image reconstruction step using a projection of the elemental images through a virtual pinhole array.

The distance between the objects of the scene and the sensor can be inferred as well. Let us consider each elemental image as an information source of the light distribution at each volumetric pixel in space, and define the spectral radiation pattern (SRP, ℒ(θ, ϕ, λ)) in order to capture the radiation intensity at a certain wavelength and direction. In [32], DaneshPanah et al. define a profilometry operator and assume that object surfaces are Lambertian or semi-Lambertian. Under this assumption, the following deviation metric can be used to infer the depth of each point on a surface:

$$D(x,y,z) = \iiint \left\| \left[\mathcal{L}(\theta,\phi,\lambda) - \mathcal{I}(\lambda)\right](x,y,z) \right\|^2 \, d\theta \, d\phi \, d\lambda \tag{3}$$
where ℐ is the intensity function, which in this case is assumed not to depend on the (θ, ϕ) coordinates. If an object surface appears at point (x, y, z), the function D should reach a local minimum. Therefore, the depth of each transversal point can be obtained by finding the z that minimizes D(x, y, z) within a predefined range Z = [z_min, z_max]. This idea can be formulated as:
$$\hat{z}(x,y) = \arg\min_{z \in Z} D(x,y,z) \tag{4}$$
For the case with N intensity images and M wavelength bands for each image, Eq. (4) reduces to
$$\hat{z}(x,y) = \arg\min_{z \in Z} \sum_{j=1}^{M} \sum_{i=1}^{N} \left\| \left[\mathcal{L}(\theta_i,\phi_i,\lambda_j) - \mathcal{I}(\lambda_j)\right](x,y,z) \right\|^2 \tag{5}$$
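A minimal sketch of how the minimization in Eq. (5) could be carried out numerically is shown below. The five-dimensional array holding the back-projected contribution of every elemental image and band at every candidate depth is a hypothetical data layout chosen for clarity, not the authors' implementation.

```python
import numpy as np

def estimate_depth(contrib, z_values):
    """Depth map from Eq. (5) by minimizing the per-pixel variance of the
    elemental-image contributions.

    contrib  : (Z, M, N, H, W) array with the back-projected intensity of
               each of the N elemental images, for each of the M bands,
               at each of the Z candidate depths (hypothetical layout).
    z_values : the Z candidate depths, in mm.
    """
    # Mean over the elemental images plays the role of I(lambda).
    mean_band = contrib.mean(axis=2, keepdims=True)             # (Z, M, 1, H, W)
    # Sum of squared deviations over bands and elemental images, Eq. (5).
    deviation = ((contrib - mean_band) ** 2).sum(axis=(1, 2))   # (Z, H, W)
    # Pick, per pixel, the candidate depth that minimizes the deviation.
    best = deviation.argmin(axis=0)                             # (H, W)
    return np.asarray(z_values)[best]
```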

4. Reconstruction results

Moving the camera in an array of positions is equivalent to moving the scene in positions of the same grid. Due to the weight of the optical capture system and the specifications of the small motors, we chose to move the scene to capture the elemental images (see Fig. 3). In particular, we moved the scene in a regular 11×11 grid. Applying the calibration method described in the second paragraph of Section 2, we found that 510 seconds are needed to acquire the 33 bands for any of the positions in the grid. Therefore, we finally decided to acquire elemental images only in seven wavelengths of {480, 510, 550, 570, 600, 650, 680} nm for this grid. Acquisition time in this case was 56 seconds per position.
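For reference, a quick check of the total acquisition time implied by these figures, assuming the quoted per-position times apply to every position of the 11×11 grid:

```python
# Sanity check of the total acquisition time (assumed per-position times).
positions = 11 * 11                          # 121 elemental-image positions
hours_33_bands = positions * 510 / 3600.0    # ~17.1 h for all 33 bands
hours_7_bands = positions * 56 / 3600.0      # ~1.9 h for the 7 selected bands
```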

Fig. 3 Image acquisition using the Integral Imaging technique.

The pitch of the grid was fixed to p = 4.2 mm in both the horizontal and vertical directions, while the sensor size was 4.80 × 3.62 mm. Consequently, the elemental images overlapped in the horizontal direction but not in the vertical one. The lack of overlap is not important for reconstruction planes far from the camera. Figure 4 shows a subset of the acquired elemental images for wavelength λ = 650 nm. Inverse-projecting each elemental image through the virtual pinhole array (see [31] and Section 3), we can obtain the corresponding images at the different perpendicular planes (z) of interest. Figure 5(a)–(c) shows false colour RGB reconstructed images (built from the grey scale images with the assignment 480 nm ← B, 550 nm ← G and 650 nm ← R) of the 3-dice scene at the depths along the optical axis (z) for which the rear face of each die is in focus. Reconstruction results are available for any wavelength inside the filter spectral range, and therefore the spectral information of the scene is available as well.

Figure 6(a) shows the 3D profile corresponding to the scene of Fig. 6(b), obtained by applying the method explained in [32]. The z axis represents the distance (in mm) from the acquisition system to the objects that define the scene; the x and y axes represent the lateral pixels of the scene. We can infer from the 3D profile that the plane corresponding to the closest face of each die is at a different and well-defined distance. However, there are some points at which the estimation quality is lower, which may be due in part to specular reflections on the surface of the dice. In order to assess the quality of the recovered depth information, we considered the pixels that belong to the rear orthogonal face of each die and used the Root Mean Square Error, $RMSE = \sqrt{\frac{1}{N_P}\sum_{ij}\left(D_{E,ij} - D_{L,ij}\right)^2}$, as the error measure, where $N_P$ is the total number of pixels, $D_{E,ij}$ is the depth obtained with the minimization-of-variance method, and $D_{L,ij}$ is the depth measured in the laboratory. The RMSE over all the pixels considered is RMSE_Total = 13.37 mm. For each die, RMSE_Blue = 13.57 mm, RMSE_Green = 6.03 mm, and RMSE_Red = 16.52 mm.
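The error figures above follow directly from the RMSE definition; a sketch of how they could be computed per die face is given below. The relative-error definition (RMSE divided by the laboratory-measured depth) is our assumption based on the reported numbers, not a formula stated in the text.

```python
import numpy as np

def face_depth_errors(d_est, d_lab):
    """Depth-error summary for the pixels of one die face.

    d_est : estimated depths (mm) from the variance-minimization method.
    d_lab : depth of that face measured in the laboratory (mm, scalar).
    """
    d_est = np.asarray(d_est, dtype=float)
    rmse = np.sqrt(np.mean((d_est - d_lab) ** 2))  # RMSE as defined in the text
    rel = rmse / d_lab                             # assumed relative-error definition
    return rmse, d_est.mean(), d_est.std(), rel
```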

Fig. 4 Elemental images corresponding to positions (2, 1), (2, 6), (2, 11), (6, 1), (6, 6), (6, 11), (9, 1), (9, 6), and (9, 11), for λ = 650 nm.

Fig. 5 False colour RGB reconstructed images at planes corresponding to z = 230, z = 249 and z = 267 mm. The colour images were created using the spectral assignment: 480 nm ← B, 550 nm ← G and 650 nm ← R.

Fig. 6 (a) 3D profile of the 3-dice scene for λ = 650 nm. (b) Elemental image corresponding to position (6, 5).

The mean and standard deviation of the estimated depths are: $\bar{D}_{E,red} = (252 \pm 11)$ mm, $\bar{D}_{E,green} = (262.11 \pm 6.03)$ mm, and $\bar{D}_{E,blue} = (232.8 \pm 4.5)$ mm. The corresponding relative errors for the blue, red and green dice are 5.8%, 6.6%, and 2.3%, respectively.

From the above results, we can see that the error is highest for the red die. This may be because it sits in the middle of the three dice, so the region to its left may be blocked by the blue die in some of the elemental images, leading to a higher depth estimation error.

5. Conclusion

We have shown the viability of implementing a multispectral 3D integral imaging acquisition and reconstruction system. We have used a monochrome camera and a liquid crystal tunable filter for the data acquisition stage, a computational volumetric reconstruction method [31] for the 3D reconstruction, and a minimization-of-variance algorithm among elemental images [32] for the creation of the 3D profile of the scene. These results open the way to the analysis of the relationship between the spectral and the spatial content among elemental images. These relationships are important for developing compression algorithms for these datasets. 3D and spectral information can also be combined to help improve classification accuracy. This would have an immediate and important effect in research areas like melanoma detection, where it has already been shown that the best classification results are obtained with a combination of volumetric and spectral information [33].

Acknowledgments

This work was supported in part by the Spanish Ministry of Science and Innovation under the projects ALFI3D TIN2009-14103-C03-01, Consolider Ingenio 2010 CSD2007-00018, EODIX AYA2008-05965-C04-04/ESP, and FIS2009-9135, and also by the Generalitat Valenciana through the projects PROMETEO/2010/028 and PROMETEO2009-077.

References and links

1. J.-Y. Son, W.-H. Son, S.-K. Kim, K.-H. Lee, and B. Javidi, “Three-dimensional imaging for creating real-world-like environments,” Proc. IEEE, doc. ID 6145598, to be published (2012). [CrossRef]  

2. H. H. Tran, H. Suenaga, K. Kuwana, K. Masamune, T. Dohi, S. Nakajima, and H. Liao, “Augmented reality system for oral surgery using 3D stereoscopic visualization,” Lecture Notes in Computer Science 6891, 81–88 (2011). [CrossRef]  

3. H. Liao, T. Inomata, I. Sakuma, and T. Dohi, “3D augmented reality for MRI-guided surgery using integral videography autostereoscopic image overlay,” IEEE Trans. Biomed. Eng. 57, 1476–1486 (2010). [CrossRef]   [PubMed]  

4. J. Arai, F. Okano, M. Kawakita, M. Okui, Y. Haino, M. Yohimura, M. Furuya, and M. Sato, “Integral three-dimensional television using a 33-megapixel imaging system,” J. Disp. Technol. 6, 422–430 (2010). [CrossRef]  

5. F. Okano, J. Arai, K. Mitani, and M. Okui, “Real-time integral imaging based on extremely high resolution video system,” Proc. IEEE 94, 490–501 (2006). [CrossRef]  

6. G. Lippmann, “Epreuves reversibles donnant la sensation du relief,” J. Phys. (Paris) 7, 821–825 (1908).

7. H. E. Ives, “Optical properties of a Lippmann lenticulated sheet,” J. Opt. Soc. Am. 21, 171–176 (1931). [CrossRef]  

8. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, “Real-time pickup method for a three-dimensional image based on integral photography,” Appl. Opt. 36, 1598–1603 (1997). [CrossRef]   [PubMed]  

9. A. Stern and B. Javidi, “Three dimensional Sensing, visualization, and processing using integral imaging,” Proc. IEEE 94, 591–607 (2006). [CrossRef]  

10. R. Martinez-Cuenca, G. Saavedra, M. Martinez-Corral, and B. Javidi, “Progress in 3-D multiperspective display by integral imaging,” Proc. IEEE 97, 1067–1077 (2009). [CrossRef]  

11. G. P. Asner and R. E. Martin, “Airborne spectranomics: mapping canopy chemical and taxonomic diversity in tropical forests,” Front. Ecol. Environ. 7, 269–276 (2009). [CrossRef]  

12. G. Moser and S. B. Serpico, “Automatic parameter optimization for support vector regression for land and sea surface temperature estimation from remote sensing data,” IEEE Trans. Geosci. Remote Sens. 47, 909–921 (2009). [CrossRef]  

13. E. J. Kwiatkowska and G. S. Fargion, “Application of machine-learning Techniques toward the creation of a consistent and calibrated global chlorophyll concentration baseline dataset using remotely sensed ocean color data,” IEEE Trans. Geosci. Remote Sens. 41, 2844–2860 (2003). [CrossRef]  

14. I. Kuzmina, I. Diebele, D. Jakovels, J. Spigulis, L. Valeine, J. Kapostinsh, and A. Berzina, “Towards noncontact skin melanoma selection by multispectral imaging analysis,” J. Biomed. Opt. 16, 0605021 (2011). [CrossRef]  

15. I. Kuzmina, I. Diebele, L. Valeine, D. Jakovels, A. Kempele, J. Kapostinsh, and J. Spigulis, “Multispectral imaging analysis of pigmented and vascular skin lesions: results of a clinical trial,” Proc. SPIE 7883, 7883121 (2011).

16. R. S. Berns, Y. Zhao, L. A. Taplin, J. Coddington, C. McGlinchey, and A. Martins, “The use of spectral imaging as an analytical tool for art conservation,” American Institute of Conservation, Annual Meeting, Los Angeles, California, United States (2009).

17. M. Cho and B. Javidi, “Three-dimensional visualization of objects in turbid water using integral imaging,” J. Disp. Technol. 6, 544–547 (2010). [CrossRef]  

18. I. Quinzán Suárez, P. Latorre Carmona, P. García Sevilla, E. Boldo, F. Pla, V. García Jiménez, R. Lozoya, and G. Pérez de Lucía, “Non-invasive melanoma diagnosis using multispectral imaging,” Int. Conf. on Pattern Recognition Applications and Methods, 386–393 (2012).

19. C. G. Lee, I. Moon, and B. Javidi, “Photon-counting three-dimensional integral imaging with compression of elemental images,” J. Opt. Soc. Am. A 29, 854–860 (2012). [CrossRef]  

20. S. Ahn, A. J. Chaudhuri, F. Darvas, C. A. Bouman, and R. M. Leahy, “Fast iterative image reconstruction methods for fully 3D multispectral bioluminescence tomography,” Phys. Med. Biol. 53, 3921–3942 (2008). [CrossRef]   [PubMed]  

21. J. F. Andresen, J. Busck, and H. Heiselberg, “Pulsed Raman fiber laser and multispectral imaging in three dimensions,” Appl. Opt. 45, 6198–6204 (2006).

22. M. A. Powers and C. C. Davis, “Spectral LADAR: active range-resolved three dimensional imaging spectroscopy,” Appl. Opt. 51, 1468–1478 (2012). [CrossRef]   [PubMed]  

23. A. Wallace, C. Nichol, and I. Woodhouse, “Recovery of forest canopy parameters by inversion of multispectral LiDAR data,” Remote Sens. 4, 509–531 (2012). [CrossRef]  

24. Schneider, “Industrial Optics: OEM,” http://www.schneiderkreuznach.com, 2011.

25. C. B. Burckhardt, “Optimum parameters and resolution limitation of integral photography,” J. Opt. Soc. Am. 58, 71–76 (1968). [CrossRef]  

26. T. Okoshi, “Optimum design and depth resolution of lens-sheet and projection-type three-dimensional displays,” Appl. Opt. 10, 2284–2291 (1971). [CrossRef]   [PubMed]  

27. L. Yang, M. McCormick, and N. Davies, “Discussion of the optics of a new 3-D imaging system,” Appl. Opt. 27, 4529–4534 (1988). [CrossRef]   [PubMed]  

28. S.-H. Hong and B. Javidi, “Three-dimensional visualization of partially occluded objects using integral imaging,” IEEE/OSA J. Disp. Technol. 1, 354–359 (2005). [CrossRef]  

29. J. S. Jang and B. Javidi, “Three dimensional synthetic aperture integral imaging,” Opt. Lett. 27, 1144–1146 (2002). [CrossRef]  

30. S. Sinha, D. Steedly, R. Szeliski, M. Agrawala, and M. Pollefeys, “Interactive 3D architectural modeling from unordered photo collections,” ACM Trans. Graphics 27, 1–10 (2008). [CrossRef]  

31. S.-H. Hong, J.-S. Jang, and B. Javidi, “Three dimensional volumetric object reconstruction using computational integral imaging,” Opt. Express 12, 483–491 (2004). [CrossRef]   [PubMed]  

32. M. DaneshPanah and B. Javidi, “Profilometry and optical slicing by passive three-dimensional imaging,” Opt. Lett. 34, 1105–1107 (2009). [CrossRef]  

33. D. S. Rigel, J. Russak, and R. Friedman, “The evolution of melanoma diagnosis: 25 years beyond the ABCDs,” CA Cancer J. Clin. 60, 301–316 (2010). [CrossRef]  
