Previously, we reported a digital technique for the formation of real, non-distorted, orthoscopic integral images by direct pickup. However, the technique was constrained to the case of symmetric image capture and display systems. Here, we report a more general algorithm that allows the pseudoscopic-to-orthoscopic transformation with full control over the display parameters, so that one can generate a set of synthetic elemental images that suits the characteristics of the Integral-Imaging monitor and permits control over the depth and size of the reconstructed 3D scene.
©2010 Optical Society of America
Integral imaging (InI) is a three-dimensional (3D) imaging technique that works with incoherent light and provides auto-stereoscopic images that can be observed without the help of special glasses. InI is the natural consequence of applying modern technology to the old concept of Integral Photography (IP), which was proposed by Lippmann in 1908 [1]. Although the InI concept was initially intended for the capture and display of 3D pictures or movies, in the last decade it has been proposed for other interesting applications. Examples of such applications are the digital reconstruction of spatially incoherent 3D scenes [2,3], the visualization of partially occluded objects [4,5], and the sensing and recognition of dark scenes.
One problem encountered with InI for 3D display applications is the pseudoscopic (depth-reversed) nature of the displayed images when the captured elemental images do not receive pixel pre-processing. Okano et al. were the first to propose a digital method to display orthoscopic scenes. Although very simple and efficient, Okano's algorithm has the weakness that it provides only virtual reconstructions, i.e. the 3D scene appears inside the monitor. Recently, we reported a method for the calculation of a set of synthetic elemental images (SEIs) that permits the orthoscopic, real (i.e., floating outside the monitor) reconstruction of the 3D scene [8]. Our algorithm, referred to as the Smart PIxel Mapping (SPIM), was however limited, since it allows only a fixed position for the reconstructed scene; also, the number of microlenses and their pitch cannot be changed. After the work reported in [8], research has been performed with the aim of overcoming the limitations of SPIM and/or exploiting its capabilities, such as the work reported by Shin et al. [9], where they proposed a computational method based on SPIM for improving the resolution of scenes reconstructed at long distances. Based on SPIM and using a sub-image transformation process, a computational scheme was reported for removing occlusion from partially occluded objects [10,11]. Modified versions of SPIM have also been proposed with the aim of producing sets of synthetic elemental images for displaying orthoscopic 3D images with depth control [12,13]. Other research has pursued depth and scale control, but with no influence over the pseudoscopic nature of the reconstruction [14].
In this paper we report an updated version of SPIM that permits the calculation of new sets of SEIs fully adapted to the characteristics of the display monitor. Specifically, this new pixel-mapping algorithm, denoted here as the Smart Pseudoscopic-to-Orthoscopic Conversion (SPOC), permits us to select display parameters such as the pitch, focal length and size of the microlens array (MLA), the depth position and size of the reconstructed images, and even the geometry of the MLA.
To present the new approach, we have organized this paper as follows. In Section 2, we explain the principles of SPOC and develop the corresponding mathematical formulation. In Section 3, we revisit two classical algorithms for the pseudoscopic-to-orthoscopic (PO) conversion and demonstrate that they can be considered particular cases of the SPOC algorithm. Section 4 is devoted to the computer and experimental verification of SPOC. Finally, in Section 5, we summarize the main achievements of this research. The acronyms used in this paper are listed in Table 1.
2. The smart pseudoscopic-to-orthoscopic conversion algorithm
In our previous paper [8], we reported a digital technique, the SPIM, for the formation of real, non-distorted, orthoscopic integral images by direct pickup. The SPIM algorithm allowed, by proper mapping of pixels, the creation of a set of SEIs which, when placed in front of an MLA identical to the one used in the capture, produce the reconstruction of a real and orthoscopic image at the same position and with the same size as the original 3D object.
In this paper we build on the possibilities of SPIM and develop a more flexible digital method that allows the calculation of a new set of SEIs to be used in a display configuration that can be essentially different from the one used in the capture. We have named the new algorithm SPOC. It allows the calculation of SEIs ready to be displayed in an InI monitor in which the pitch, the microlens focal length, the number of pixels per elemental cell, the depth position of the reference plane, and even the grid geometry of the MLA can be selected to fit the conditions of the display architecture.
The SPOC algorithm is relatively simple: one has to calculate the SEIs that would be obtained in the simulated experiment schematized in Fig. 1. The algorithm can be explained as the cascaded application of three processes: the simulated display, the synthetic capture, and the homogeneous scaling.
In the simulated display, we use as input to the algorithm the set of elemental images captured experimentally. The pitch, the gap and the focal length of the MLA are equal to those used in the experimental capture. This simulated display permits the reconstruction of the original 3D scene in the same position and with the same scale.
The second process is the synthetic capture, which is done through an array of pinholes (PA). To give our algorithm maximum generality, for the synthetic capture: (i) we place the synthetic PA at an arbitrary distance D from the display MLA; (ii) we assign an arbitrary pitch, p_S, and gap, g_S, to the synthetic PA (note that this selection also determines the size of the final image); and (iii) we fix, also arbitrarily, the number of pixels, n_S, per synthetic elemental image and the total number of elemental images, N_S.
Note that the value of the parameter d_S determines the position of the reference plane of the image displayed by the InI monitor. A smart selection of d_S will permit, when the SEIs are displayed in an actual InI monitor, the observation of either orthoscopic real or orthoscopic virtual 3D images: a positive value of d_S corresponds to a floating, real 3D image, while a negative value corresponds to a virtual reconstruction.
The third step, the homogeneous scaling, is intended to adapt the size (scale) of the SEIs to the final InI monitor. In this step, both the pitch and the gap are scaled by the same factor. The only constraint is that the final SEIs must be ready for use in a realistic InI monitor, and therefore the value of the scaled gap should equal the focal length of the monitor microlenses.
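As a numerical illustration of this step (all values below are illustrative assumptions, not those of our experiments): if the synthetic capture used a gap of 4 mm and the monitor microlenses had a focal length of 5 mm, the common scale factor would be 5/4 = 1.25, and both pitch and gap are multiplied by it:

```python
# Homogeneous-scaling sketch; all numerical values are illustrative assumptions.
g_S = 4.0     # mm, gap used in the synthetic capture (assumed)
f_mon = 5.0   # mm, focal length of the monitor's microlenses (assumed)
p_S = 1.0     # mm, synthetic pitch (assumed)

scale = f_mon / g_S          # 1.25: factor that maps the gap onto the focal length
p_display = scale * p_S      # scaled pitch
g_display = scale * g_S      # scaled gap, equal to f_mon by construction
```

Because a single factor scales both parameters, the angular structure of the SEIs is preserved while their physical size is matched to the monitor.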
For simplicity, we have drawn the one-dimensional case in Fig. 1. The extrapolation of the forthcoming theory to two dimensions is trivial. There exists, however, a case of great interest in which the extrapolation is not trivial: the case in which the geometry of the synthetic array is different from that of the image-capture stage. This happens, for example, when one of the arrays is rectangular and the other hexagonal.
Next we concentrate on calculating the pixels of each SEI. The key to this method is, given a pixel of one SEI, to find the pixel of the captured integral image that maps to it. To do this, we first back-project the coordinate, x_S, of the center of the m-th pixel of the j-th SEI through its corresponding pinhole (blue dotted line in Fig. 1). Taking both indices as centered on their arrays, the coordinate of the pixel can be written as

x_S = (j − (N_S − 1)/2) p_S + (m − (n_S − 1)/2) p_S / n_S.
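The mapping can be sketched in code. The following is a minimal one-dimensional sketch, not the authors' implementation: it models both the display microlenses and the synthetic array as pinholes, back-projects each synthetic pixel center through its pinhole, prolongs the ray across the distance D to the display MLA, and then a further gap g_D to the display pixel plane. The centered index convention and the sign conventions are our assumptions.

```python
import numpy as np

def spoc_map(elemental, p_D, g_D, N_S, n_S, p_S, g_S, D):
    """1D sketch of the SPOC pixel mapping (pinhole model, centered indices).

    elemental : (N_D, n_D) array holding the captured elemental images.
    Returns an (N_S, n_S) array of synthetic elemental images.
    """
    N_D, n_D = elemental.shape
    synth = np.zeros((N_S, n_S))
    for j in range(N_S):                                  # synthetic cell index
        x_pin = (j - (N_S - 1) / 2) * p_S                 # pinhole center
        for m in range(n_S):                              # pixel index in cell j
            # center of pixel m, a gap g_S behind the pinhole plane
            x_pix = x_pin + (m - (n_S - 1) / 2) * p_S / n_S
            slope = (x_pin - x_pix) / g_S                 # ray pixel -> pinhole
            x_mla = x_pin + slope * D                     # hit on the display MLA plane
            i = round(x_mla / p_D + (N_D - 1) / 2)        # display microlens index
            if not 0 <= i < N_D:
                continue                                  # ray misses the display MLA
            x_lens = (i - (N_D - 1) / 2) * p_D
            # prolong the ray a further gap g_D to the display pixel plane
            x_disp = x_mla + slope * g_D
            k = round((x_disp - x_lens) * n_D / p_D + (n_D - 1) / 2)
            if 0 <= k < n_D:
                synth[j, m] = elemental[i, k]
    return synth
```

With D = 0 and identical pitch, gap and pixel count on both sides, this mapping reduces to a 180° rotation of each elemental image, i.e. the classical PO conversion revisited in Section 3.1.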
3. Revisiting two typical cases of the pseudoscopic to orthoscopic (PO) conversion
To demonstrate the generality of this proposal, we revisit two classical algorithms for the PO conversion and demonstrate that they can be considered as particular cases of the SPOC algorithm.
3.1. The method proposed by Okano et al.
A smart and simple method for the PO conversion was reported by Okano and associates, who proposed to rotate each elemental image by 180° about the center of its elemental cell. The rotated elemental images are then displayed at a distance g_S = g_D − 2f^2/(d_D − f) from an MLA similar to the one used in the capture. This procedure permits the reconstruction of virtual, orthoscopic 3D scenes. Note, however, that g_S ≠ g_D, and therefore the final reconstructed image is slightly distorted, since it has shrunk in the axial direction.
To reproduce Okano's method one simply has to use as the input of the SPOC algorithm the following values (see Fig. 2): D = 0 (and, therefore, d_S = −d_D), p_S = p_D, g_S = g_D, and n_S = n_D. On the other hand, one can use SPOC to produce, in addition to the PO conversion, other changes that adapt the integral image to the display grid geometry.
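That this particular choice reproduces the 180° rotation can be checked with a few lines of pinhole-model geometry (our sketch, with centered pixel indices assumed): with D = 0 the synthetic pinhole coincides with the display microlens, and the ray from pixel m, continued straight across an equal gap, lands on pixel n − 1 − m of the same cell.

```python
# Pinhole-model check: with D = 0 and equal pitch and gap, the SPOC ray for
# pixel m lands on pixel n - 1 - m of the same cell, i.e. a 180-degree rotation.
n, p, g = 4, 1.0, 1.0                   # pixels per cell, pitch, gap (illustrative)
for m in range(n):
    x_pix = (m - (n - 1) / 2) * p / n   # pixel center, relative to the pinhole
    slope = -x_pix / g                  # ray from the pixel through the pinhole
    x_disp = slope * g                  # straight continuation to the display pixels
    k = round(x_disp * n / p + (n - 1) / 2)
    assert k == n - 1 - m               # each elemental image is flipped
```

The same reasoning in two dimensions flips each elemental image about both axes, which is exactly the 180° rotation of Okano's method.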
3.2. The symmetric case
This is the case for which the SPIM algorithm was created. Again, g_S = g_D, p_S = p_D and n_S = n_D, but now D = 2d_D (and therefore d_S = d_D). In this case the resulting pixel mapping is equivalent to the one reported in [8], although with a different appearance due to the choice of different labeling criteria.
4. Demonstration of the SPOC and experimental results
To demonstrate the versatility of SPOC, we apply it to generate SEIs ready to be displayed in display architectures whose parameters are very different from those used in the capture stage. For the capture of the elemental images, we prepared, over a black background, a 3D scene composed of a doll (a cook preparing paella; see Fig. 3).
For the acquisition of the elemental images, instead of using an array of digital cameras, we used the so-called synthetic-aperture method [5,16], in which all the elemental images are picked up with a single digital camera that is mechanically translated. The digital camera was focused at a distance of one meter. The camera parameters were fixed to and . The depth of field was large enough to obtain sharp pictures of the doll, which was placed at a distance from the camera. The gap for the capture was, then, . We obtained a set of images with pitch and. Note that the pitch is slightly smaller than the size of the CMOS sensor (22.2 mm x 14.8 mm), so we slightly cropped each captured picture to remove the outer parts. In this way, we could compose an integral image consisting of elemental images of and pixels each.
In Fig. 4, we show a portion of the set of elemental images obtained after the capture experiment and the cropping of the pictures.
For the calculation of the set of SEIs, and with the aim of reducing the computing time, we first resized the elemental images to , and . Then we fixed the synthetic parameters to: , , , and microlenses. In Fig. 5 we show the set of calculated SEIs. Note that we have substantially increased the number of elemental images, which are now square and arranged in a square grid.
Finally, we applied the third step of SPOC and scaled the corresponding parameters by a factor of 1.25, so that , and therefore . With this final set of SEIs we performed two visualization experiments: one simulated on the computer, the other carried out optically in our laboratory.
For the simulated visualization experiment, the calculations were done assuming a virtual observer placed at a distance from the MLA. The visualization calculations were performed following the algorithm described in . In Fig. 6 we show the result of the visualization simulation.
Next, we performed the optical visualization experiment. To this end, we first printed the SEIs on photographic paper with a high-resolution inkjet printer. Our InI display monitor was equipped with an array of microlenses, arranged in a square grid, with pitch and . Then we placed the SEIs at a distance from the MLA (see Fig. 7).
Up to 20 different perspectives of the displayed 3D image were captured with a digital camera placed at . The camera was displaced in the horizontal direction in steps of . The captured perspectives are shown in Fig. 8.
We have demonstrated that SPOC permits the creation, from a small number of elemental images, of a new set of SEIs ready to be displayed in an InI monitor equipped with an MLA composed of a much larger number of microlenses. The displayed image is orthoscopic, and is displayed at a shorter distance from the monitor.
Next, as proof of the flexibility of the SPOC algorithm, we calculated the SEIs for a display geometry that is essentially different from the one used in the capture stage, but that is very common in 3D display applications. We refer to the case in which the display microlenses are arranged in a hexagonal grid. For the application of the SPOC algorithm, we considered microlenses with diameter and focal length . Besides, we fixed the depth distance to the MLA as . We applied SPOC to calculate up to 24462 hexagonal synthetic elemental images (the dimensions of the hexagonal MLA were ), which are shown in Fig. 9.
For the simulated visualization experiment, the calculations were done assuming a virtual observer placed at a distance from the MLA. In Fig. 10 we show the result of the visualization simulation. The degradations observed in the reconstructed image are due to the fact that perfect matching is not possible between the microlenses, arranged in a hexagonal grid, and the pixels of the matrix display, which are arranged in a rectangular grid.
Also in this case we performed optical visualization experiments. We printed the SEIs on photographic paper. The InI monitor was equipped with an array of microlenses, of diameter and arranged in a hexagonal grid. Then we placed the SEIs at a distance from the MLA (see Fig. 11).
Up to 35 different perspectives of the displayed 3D image were captured with a digital camera placed at . The camera was moved in the horizontal direction in steps of 10 mm. The captured perspectives are shown in Fig. 12.
One interesting issue to discuss here is how SPOC affects the resolution of the reconstructed images. There is no single answer, since it depends on the algorithm parameters. Because it is not possible to increase the image bandwidth through a rearrangement of pixel information, no increase of image resolution is possible with SPOC. If we do not want to lose resolution, it is necessary to select the algorithm parameters carefully.
A different issue is the resolution of the reconstructed image as perceived when the observer's eye looks at the display. As explained in , such viewing resolution is mainly determined by the microlens pitch. In this case, we can state that proper use of SPOC can help to increase the viewing resolution significantly.
We have demonstrated the SPOC algorithm, which allows full control over the optical display parameters in InI monitors. Specifically, we have shown that, from a given collection of elemental images, one can create a new set of SEIs ready to be displayed in an InI monitor in which the pitch, the microlens focal length, the number of pixels per elemental cell, the depth position of the reference plane, and even the grid geometry of the MLA can be selected to fit the conditions of the display architecture.
This work was supported in part by the Plan Nacional I+D+I under Grant FIS2009-9135, Ministerio de Ciencia e Innovación, Spain, and also by the Generalitat Valenciana under Grant PROMETEO2009-077. Héctor Navarro gratefully acknowledges funding from the Generalitat Valenciana (VALi+d predoctoral contract).
References and Links
1. G. Lippmann, “Epreuves reversibles donnant la sensation du relief,” J. Phys. 7, 821–825 (1908).
5. B. Heigl, R. Koch, M. Pollefeys, J. Denzler, and L. Van Gool, “Plenoptic Modeling and Rendering from Image sequences taken by hand-held Camera,” Proc. DAGM, 94–101 (1999).
8. M. Martínez-Corral, B. Javidi, R. Martínez-Cuenca, and G. Saavedra, “Formation of real, orthoscopic integral images by smart pixel mapping,” Opt. Express 13(23), 9175–9180 (2005). [CrossRef] [PubMed]
9. D.-H. Shin, C.-W. Tan, B.-G. Lee, J.-J. Lee, and E.-S. Kim, “Resolution-enhanced three-dimensional image reconstruction by use of smart pixel mapping in computational integral imaging,” Appl. Opt. 47(35), 6656–6665 (2008). [CrossRef] [PubMed]
10. M. Zhang, Y. Piao, and E.-S. Kim, “Occlusion-removed scheme using depth-reversed method in computational integral imaging,” Appl. Opt. 49(14), 2571–2580 (2010). [CrossRef]
11. T.-Ch. Wei, D.-H. Shin, and B.-G. Lee, “Resolution-enhanced reconstruction of 3D object using depth-reversed elemental images for partially occluded object recognition,” J. Opt. Soc. Korea 13(1), 139–145 (2009). [CrossRef]
12. D.-H. Shin, B.-G. Lee, and E.-S. Kim, “Modified smart pixel mapping method for displaying orthoscopic 3D images in integral imaging,” Opt. Lasers Eng. 47(11), 1189–1194 (2009). [CrossRef]
14. D.-Ch. Hwang, J.-S. Park, S.-Ch. Kim, D.-H. Shin, and E.-S. Kim, “Magnification of 3D reconstructed images in integral imaging using an intermediate-view reconstruction technique,” Appl. Opt. 45(19), 4631–4637 (2006). [CrossRef] [PubMed]
15. H. Navarro, R. Martínez-Cuenca, A. Molina-Martín, M. Martínez-Corral, G. Saavedra, and B. Javidi, “Method to remedy image degradations due to facet braiding in 3D integral imaging monitors,” J. Display Technol. 6(10), 404–411 (2010). [CrossRef]
16. J. S. Jang and B. Javidi, “Three-dimensional synthetic aperture integral imaging,” Opt. Lett. 27(13), 1144–1146 (2002). [CrossRef]