Abstract

Previously, we reported a digital technique for the formation of real, non-distorted, orthoscopic integral images by direct pickup. However, the technique was constrained to the case of symmetric image capture and display systems. Here, we report a more general algorithm that performs the pseudoscopic-to-orthoscopic transformation with full control over the display parameters, so that one can generate a set of synthetic elemental images that suits the characteristics of the Integral-Imaging monitor and permits control over the depth and size of the reconstructed 3D scene.

© 2010 OSA

1. Introduction

Integral imaging (InI) is a three-dimensional (3D) imaging technique that works with incoherent light and provides auto-stereoscopic images that can be observed without special glasses. InI is the natural consequence of applying modern technology to the old concept of Integral Photography (IP), which was proposed by Lippmann in 1908 [1]. Although the InI concept was initially intended for the capture and display of 3D pictures or movies, in the last decade it has been proposed for other interesting applications. Examples of such applications are the digital reconstruction of spatially incoherent 3D scenes [2,3], the visualization of partially occluded objects [4,5], and the sensing and recognition of dark scenes [6].

One problem encountered with InI for 3D display applications is the pseudoscopic (or depth-reversed) nature of the displayed images when the captured elemental images do not receive pixel pre-processing. Okano et al. were the first to propose a digital method to display orthoscopic scenes [7]. Although very simple and efficient, Okano's algorithm has the weakness that it provides only virtual reconstructions, i.e. the 3D scene appears inside the monitor. Recently, we reported a method for the calculation of a set of synthetic elemental images (SEIs) that permits orthoscopic, real (or floating outside the monitor) reconstruction of the 3D scene [8]. Our algorithm, referred to as Smart PIxel Mapping (SPIM), was however limited, since it allowed only a fixed position for the reconstructed scene; also, the number of microlenses and their pitch could not be changed. After the work reported in [8], research has been performed with the aim of overcoming the limitations of SPIM and/or exploiting its capabilities, such as the work reported by Shin et al. [9], where they proposed a novel computational method based on SPIM for improving the resolution of scenes reconstructed over long distances. Based on SPIM and using a sub-image transformation process, a computational scheme was reported for removing occlusions in partially occluded objects [10,11]. Modified versions of SPIM have also been proposed with the aim of producing sets of synthetic elemental images for displaying orthoscopic 3D images with depth control [12,13]. Other research has pursued depth and scale control, but with no influence over the pseudoscopic nature of the reconstruction [14].

In this paper we report an updated version of SPIM that permits the calculation of new sets of SEIs that are fully adapted to the characteristics of the display monitor. Specifically, this new pixel-mapping algorithm, denoted here as Smart Pseudoscopic-to-Orthoscopic Conversion (SPOC), permits us to select the display parameters, such as the pitch, focal length and size of the microlens array (MLA), the depth position and size of the reconstructed images, and even the geometry of the MLA.

To present the new approach, we have organized this paper as follows. In Section 2, we explain the principles of SPOC and develop the corresponding mathematical formulation. In Section 3, we revisit two classical algorithms for the pseudoscopic-to-orthoscopic (PO) conversion and demonstrate that they can be considered as particular cases of the SPOC algorithm. Section 4 is devoted to the computer and experimental verification of the SPOC. Finally, in Section 5, we summarize the main achievements of the reported research. The acronyms used in this paper are listed in Table 1.

2. The smart pseudoscopic-to-orthoscopic conversion algorithm

In our previous paper [8], we reported a digital technique, the SPIM, for the formation of real, non-distorted, orthoscopic integral images by direct pickup. By proper mapping of pixels, the SPIM algorithm allowed the creation of a set of SEIs which, when placed in front of an MLA identical to the one used in the capture, produce the reconstruction of a real and orthoscopic image at the same position and with the same size as the original 3D object.

In this paper we have built on the possibilities of SPIM and developed a more flexible digital method that allows the calculation of a new set of SEIs to be used in a display configuration that can be essentially different from the one used in the capture. We have named the new algorithm SPOC. It allows the calculation of SEIs ready to be displayed in an InI monitor in which the pitch, the microlens focal length, the number of pixels per elemental cell, the depth position of the reference plane, and even the grid geometry of the MLA can be selected to fit the conditions of the display architecture.

Table 1. List of acronyms

The SPOC algorithm is relatively simple. One has to calculate the SEIs that would be obtained in the simulated experiment schematized in Fig. 1. The algorithm can be explained as the cascaded application of three processes: the simulated display, the synthetic capture, and the homogeneous scaling.

Fig. 1 Calculation of the synthetic integral image. The pixel of the synthetic integral image (dotted blue line) stores the same value as the pixel of the captured integral image.

In the simulated display we use as the input for the algorithm the set of elemental images captured experimentally. The pitch, the gap and the focal length of the MLA are equal to those used in the experimental capture. This simulated display permits the reconstruction of the original 3D scene in the same position and with the same scale.

The second process is the synthetic capture, which is done through an array of pinholes (PA). To give our algorithm maximum generality, for the synthetic capture: (i) we place the synthetic PA at an arbitrary distance D from the display MLA; (ii) we assign an arbitrary pitch, p_S, and gap, g_S, to the synthetic PA (note that this selection will also determine the size of the final image); and (iii) we fix, also arbitrarily, the number of pixels, n_S, per synthetic elemental image and the total number of elemental images, N_S. These parameter sets are summarized in the sketch below.
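To fix notation for what follows, a minimal parameter container can be written in Python; the class and field names here are ours, not the paper's, with one instance describing the capture/display stage (subscript D) and another the synthetic stage (subscript S):

```python
from dataclasses import dataclass

@dataclass
class StageParams:
    """Parameters of one stage (capture/display 'D' or synthetic 'S').

    All lengths share one unit (e.g. mm). Field names are illustrative:
    p = pitch of the lens/pinhole array, g = gap to the pixel plane,
    n = pixels per elemental image, N = number of elemental images.
    """
    p: float
    g: float
    n: int
    N: int
```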

Note that the value of the parameter d_S (the distance from the synthetic PA to the reference plane) determines the position of the reference plane of the image displayed by the InI monitor. A smart selection of d_S will permit, when the SEIs are displayed in an actual InI monitor, the observation of either orthoscopic real or virtual 3D images: a positive value of d_S corresponds to a floating real 3D image, whereas a negative value corresponds to a virtual reconstruction.

The third step, the homogeneous scaling, is intended to adapt the size (scale) of the SEIs to the final InI monitor. In this step, both the pitch and the gap are scaled by the same factor. The only constraint is that the final SEIs must be usable in a realistic InI monitor, and therefore the value of the scaled gap should equal the focal length of the monitor microlenses.
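A minimal sketch of this scaling step, assuming the common factor is chosen so that the scaled gap matches the monitor focal length (function and argument names are ours):

```python
def homogeneous_scaling(p_s: float, g_s: float, d_s: float,
                        f_monitor: float) -> tuple:
    """Scale pitch, gap, and reference-plane distance by the common
    factor that makes the scaled gap equal the monitor focal length."""
    k = f_monitor / g_s                  # common scale factor
    return p_s * k, g_s * k, d_s * k     # scaled pitch, gap, depth
```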

For simplicity, we have drawn the one-dimensional case in Fig. 1. The extrapolation of the forthcoming theory to two dimensions is trivial. There exists, however, a case of great interest in which the extrapolation is not trivial: the case in which the geometry of the synthetic array is different from that of the image-capture stage. This happens, for example, when one of the arrays is rectangular and the other hexagonal.

Next we concentrate on calculating the pixels of each SEI. The key insight of this method is, given a pixel of one SEI, to find the pixel of the captured integral image that maps to it. To do this, we first back-project the coordinate, x_S, of the center of the m-th pixel of the j-th SEI through its corresponding pinhole (blue dotted line in Fig. 1). The coordinate of the pixel can be written as

$$x_S = j\,p_S + m\,\frac{p_S}{n_S} \quad (1)$$
The back-projection through the pinhole permits us to calculate the intersection of the blue line with the reference plane:
$$\Delta_o = \left(1 + \frac{d_S}{g_S}\right) j\,p_S - \frac{d_S}{g_S}\,x_S , \quad (2)$$
and also the intersection with the display MLA:
$$\Delta_D = \left(1 + \frac{D}{g_S}\right) j\,p_S - \frac{D}{g_S}\,x_S \quad (3)$$
The index of the capture microlens on which the blue dotted line impinges can be calculated as
$$i_{jm} = \mathrm{Round}\!\left[-\frac{D}{p_D\,g_S}\left(j\,p_S + m\,\frac{p_S}{n_S}\right) + \frac{g_S + D}{p_D\,g_S}\,j\,p_S\right] = \mathrm{Round}\!\left[\frac{p_S}{p_D}\,j - \frac{p_S}{p_D}\,\frac{D}{g_S}\,\frac{m}{n_S}\right], \quad (4)$$
where Round[·] denotes rounding to the nearest integer.
The last step is to find the mapping pixel. To this end, we calculate the coordinate of the point that is the conjugate, through the impact microlens, of the point Δ_o:
$$x_D = \left(1 + \frac{g_D}{D - d_S}\right) p_D\,i_{jm} - \frac{g_D}{g_S}\,\frac{g_S + d_S}{D - d_S}\,j\,p_S + \frac{g_D}{g_S}\,\frac{d_S}{D - d_S}\,x_S \quad (5)$$
Finally, we can calculate the index of the l-th pixel within the i-th elemental cell as
$$l_{jm} = \mathrm{Round}\!\left[\frac{g_D}{D - d_S}\,n_D\,i_{jm} + \frac{g_D}{g_S}\,\frac{p_S}{p_D}\,\frac{n_D}{D - d_S}\left(\frac{d_S\,m}{n_S} - g_S\,j\right)\right] \quad (6)$$
Thus, the pixel values of the SEIs can be obtained from the captured integral image by the mapping
$$I^S_{jm} = I^D_{il} \quad (7)$$
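The whole pixel mapping condenses into a few lines of code. The sketch below implements Eqs. (4), (6) and (7) for the one-dimensional case of Fig. 1, under assumptions of our own: the cell indices (i, j) and the local pixel indices (l, m) are centred on zero, the numbers of cells and of pixels per cell are odd, and rays that miss the captured integral image are left black. Function and argument names are ours, not the paper's.

```python
import numpy as np

def spoc_mapping(captured, p_D, g_D, n_D, p_S, g_S, n_S, N_S, D, d_S):
    """1D SPOC pixel mapping via Eqs. (4), (6) and (7).

    captured[i, l] holds pixel l of captured elemental image i; both
    indices are stored with an offset so that index 0 is the centre.
    """
    N_D = captured.shape[0]
    synthetic = np.zeros((N_S, n_S), dtype=captured.dtype)
    for j in range(-(N_S // 2), N_S // 2 + 1):       # synthetic cells
        for m in range(-(n_S // 2), n_S // 2 + 1):   # pixels per cell
            # Eq. (4): capture microlens hit by the back-projected ray.
            i = round((p_S / p_D) * (j - (D / g_S) * (m / n_S)))
            # Eq. (6): pixel index within that elemental cell.
            l = round((g_D / (D - d_S)) * n_D * i
                      + (g_D / g_S) * (p_S / p_D) * (n_D / (D - d_S))
                      * (d_S * m / n_S - g_S * j))
            # Eq. (7): copy the pixel value if it exists on the sensor.
            if abs(i) <= N_D // 2 and abs(l) <= n_D // 2:
                synthetic[j + N_S // 2, m + n_S // 2] = \
                    captured[i + N_D // 2, l + n_D // 2]
    return synthetic
```

In two dimensions the same mapping is applied independently to the horizontal and vertical index pairs, provided both grids are rectangular.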

3. Revisiting two typical cases of the pseudoscopic to orthoscopic (PO) conversion

To demonstrate the generality of this proposal, we revisit two classical algorithms for the PO conversion and show that they can be considered particular cases of the SPOC algorithm.

3.1. The method proposed by Okano et al.

A smart and simple method for the PO conversion was reported by Okano and associates [7], who proposed to rotate each elemental image by 180° about the center of its elemental cell. The rotated elemental images are then displayed at a distance g_S = g_D - 2f^2/(d_D - f) from an MLA similar to the one used in the capture. This procedure permits the reconstruction of virtual, orthoscopic 3D scenes. Note, however, that g_S ≠ g_D, and therefore the final reconstructed image is slightly distorted, since it is shrunk in the axial direction.

To reproduce Okano's method one simply has to use as the input of the SPOC algorithm the following values (see Fig. 2): D = 0 (and therefore d_S = d_D), g_S = g_D, p_S = p_D and n_S = n_D.

Fig. 2 Scheme for the calculation of Okano's synthetic integral image.

Introducing such values into Eqs. (4) and (6), one finds

$$i_{jm} = j \quad (8)$$
and
$$l_{jm} = \mathrm{Round}\!\left[-\frac{g_D}{d_S}\,n_D\,i_{jm} - m + \frac{n_D\,g_S}{d_S}\,j\right] = -m \quad (9)$$
Note that the SPOC result is, however, slightly different from the one reported by Okano and colleagues. While in their case f_S = f_D but g_S ≠ g_D, in our case g_S = g_D and f_S = g_S. This fact gives SPOC a slight advantage, since it permits the reconstruction of 3D scenes without any distortion, i.e. with the same magnification in the axial and lateral directions, and also free of facet braiding [15]. On the other hand, one can use SPOC to produce, in addition to the PO conversion, other changes that adapt the integral image to the display grid geometry.
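As a numerical sanity check of Eqs. (8) and (9), one can substitute the Okano parameter choice directly into Eqs. (4) and (6); the test values in this small sketch are arbitrary choices of ours:

```python
# With D = 0, g_S = g_D, p_S = p_D and n_S = n_D, Eqs. (4) and (6)
# reduce to i = j and l = -m: each elemental image keeps its cell but
# is rotated by 180 degrees, as in Okano's method.
p = g = g_D = 1.0          # equal pitches and gaps (arbitrary unit)
n, d_S = 19, 50.0          # arbitrary test values
D = 0.0
for j in (-2, 0, 3):
    for m in (-5, 0, 7):
        i = round((p / p) * (j - (D / g) * (m / n)))
        l = round((g_D / (D - d_S)) * n * i
                  + (g_D / g) * (p / p) * (n / (D - d_S))
                  * (d_S * m / n - g * j))
        assert (i, l) == (j, -m)
```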

3.2. The symmetric case

This is the case for which the SPIM algorithm was created. Again, g_S = g_D, p_S = p_D and n_S = n_D, but now D = 2d_D (and therefore d_S = d_D); besides,

$$n_D = n_S = \frac{2\,d_D}{g_D} \quad (10)$$
This leads to the result
$$i_{jm} = j - m \quad (11)$$
$$l_{jm} = -m \quad (12)$$
Note that these equations are the same as the ones reported in [8], although with a different appearance due to the choice of different labeling criteria.
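The symmetric case can be checked the same way; in this sketch the test values are ours and are chosen to satisfy the constraint of Eq. (10):

```python
# Symmetric SPIM configuration: D = 2*d_D, d_S = d_D, equal pitches and
# gaps, and n = 2*d_D/g_D pixels per cell. Eqs. (4) and (6) then give
# i = j - m and l = -m, matching Eqs. (11) and (12).
g, d_D = 2.0, 19.0
n = int(2 * d_D / g)             # Eq. (10): n = 19
D, d_S = 2 * d_D, d_D
for j in (-1, 0, 2):
    for m in (-4, 0, 6):
        i = round(j - (D / g) * (m / n))
        l = round((g / (D - d_S)) * n * i
                  + (n / (D - d_S)) * (d_S * m / n - g * j))
        assert (i, l) == (j - m, -m)
```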

4. Demonstration of the SPOC and experimental results

To demonstrate the versatility of SPOC, we applied it to generate SEIs ready to be displayed in display architectures whose parameters are very different from those used in the capture stage. For the capture of the elemental images, we prepared, over a black background, a 3D scene composed of a doll (a cook preparing paella; see Fig. 3).

Fig. 3 Scheme of the experimental setup for the acquisition of the set of elemental images of a 3D scene.

For the acquisition of the elemental images, instead of using an array of digital cameras we used the so-called synthetic-aperture method [5,16], in which all the elemental images are picked up with a single digital camera that is mechanically translated. The digital camera was focused at a distance of one meter, with parameters fixed to f = 10 mm and f/# = 22. The depth of field was large enough to obtain sharp pictures of the doll, which was placed at a distance d = 203 mm from the camera. The gap for the capture was, then, g = 10 mm. We obtained a set of N_H = 17 × N_V = 11 images with pitch p_H = 22 mm and p_V = 14 mm. Note that the pitch is slightly smaller than the size of the CMOS sensor (22.2 mm × 14.8 mm); thus we slightly cropped each captured picture to remove the outer parts. In this way, we could compose an integral image consisting of 17_H × 11_V elemental images of 22 × 14 mm and n_H = 2256 × n_V = 1504 pixels each.

In Fig. 4, we show a portion of the set of elemental images obtained after the capture experiment and the cropping of the pictures.

Fig. 4 Subset of the elemental images obtained experimentally. These elemental images are the input for the SPOC algorithm.

For the calculation of the set of SEIs, and with the aim of reducing the computing time, we first resized the elemental images to n_DH = 251 px and n_DV = 161 px. Then we fixed the synthetic parameters to d_S = d_D/2 = 101.5 mm, g_S = 3.75 mm, p_SH = p_SV = 1.25 mm, n_SH = n_SV = 19 px, and N_SH = N_SV = 151 microlenses. In Fig. 5 we show the set of calculated SEIs. Note that we have substantially increased the number of elemental images, which now are square and arranged in a square grid.
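For illustration, a horizontal slice of these parameters can be fed to the spoc_mapping() sketch of Section 2. Note that the text does not quote the distance D used in this experiment, so the value below is a placeholder of ours, not the authors' setting:

```python
import numpy as np

captured = np.zeros((17, 251))        # 17 elemental images, 251 px each
D_placeholder = 203.0                  # assumed value; not from the text
synthetic = spoc_mapping(captured,
                         p_D=22.0, g_D=10.0, n_D=251,
                         p_S=1.25, g_S=3.75, n_S=19, N_S=151,
                         D=D_placeholder, d_S=101.5)
print(synthetic.shape)                 # (151, 19)
```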

Fig. 5 (a) Collection of 151 × 151 SEIs obtained after the application of the SPOC algorithm; (b) enlarged view of the central SEIs.

Finally, we applied the third step of SPOC and scaled the corresponding parameters down by a factor of 1.25, so that g_S = 3.0 mm, p_SH = p_SV = 1.0 mm, and therefore d_S = 81.2 mm. With this final set of SEIs we performed two visualization experiments: the first simulated on the computer, the second performed optically in our laboratory.
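Applying the homogeneous_scaling() sketch of Section 2 to these numbers reproduces the quoted display values (the common factor is k = 3.0/3.75 = 0.8, i.e. a reduction by 1.25):

```python
# Scale the synthetic parameters so that the gap matches the 3.0 mm
# focal length of the monitor microlenses.
p, g, d = homogeneous_scaling(p_s=1.25, g_s=3.75, d_s=101.5,
                              f_monitor=3.0)
print(p, g, d)   # 1.0 3.0 81.2
```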

For the simulated visualization experiment, the calculations were done assuming a virtual observer placed at a distance L = 700 mm from the MLA. The visualization calculations were performed following the algorithm described in [15]. In Fig. 6 we show the result of the visualization simulation.

Fig. 6 Two perspectives of the reconstructed 3D scene, as seen by an observer placed at a distance L = 700 mm. In the video (Media 1) we show the movie built with the frames obtained with the visualization algorithm.

Next, we performed the optical visualization experiment. To this end, we first printed the SEIs on photographic paper with a high-resolution inkjet printer. Our InI monitor was equipped with an array of 151 × 151 microlenses, arranged in a square grid, with pitch p = 1.0 mm and focal length f = 3.0 mm. We then placed the SEIs at a distance g = 3.0 mm from the MLA (see Fig. 7).

Fig. 7 Experimental setup for the observation of the InI monitor. After displacing the camera horizontally in steps of 10 mm, we recorded 20 different perspectives of the displayed 3D scene.

Up to 20 different perspectives of the displayed 3D image were captured with a digital camera placed at L = 700 mm. The camera was displaced in the horizontal direction in steps of 10 mm. The captured perspectives are shown in Fig. 8.

Fig. 8 Two perspectives of the 3D scene displayed in the real experiment. In the video (Media 2) we show the movie built with the perspectives photographed with the digital camera.

We have thus demonstrated that SPOC permits the creation, from a small number of elemental images, of a new set of SEIs ready to be displayed in an InI monitor equipped with an MLA composed of a much larger number of microlenses. The displayed image is orthoscopic and is displayed at a shorter distance from the monitor.

Next, as a proof of the flexibility of the SPOC algorithm, we calculated the SEIs for a display geometry that is essentially different from the one used in the capture stage but is very common in 3D display applications [17]: the case in which the display microlenses are arranged in a hexagonal grid. For the application of the SPOC algorithm, we considered microlenses with diameter ϕ_S = 1.0 mm and focal length f_S = 3.0 mm. Besides, we fixed the depth distance to the MLA at d_S = 20.3 mm. We applied SPOC to calculate up to 24462 hexagonal synthetic elemental images (the dimensions of the hexagonal MLA were 151 × 152 mm), which are shown in Fig. 9.

Fig. 9 (a) Collection of hexagonal elemental images obtained after the application of the SPOC algorithm; (b) enlarged view of some central SEIs.

For the simulated visualization experiment, the calculations were done assuming a virtual observer placed at a distance L = 700 mm from the MLA. In Fig. 10 we show the result of the visualization simulation. The degradations observed in the reconstructed image are due to the fact that a perfect match is not possible between the microlenses, arranged in a hexagonal grid, and the pixels of the matrix display, which are arranged in a rectangular grid.

Fig. 10 Two perspectives of the reconstructed scene, as seen by an observer placed at a distance L = 700 mm. In the video (Media 3) we show the movie built with the frames obtained with the visualization algorithm.

Also in this case we performed optical visualization experiments. We printed the SEIs on photographic paper. The InI monitor was equipped with an array of microlenses of diameter ϕ = 1.0 mm and f = 3.0 mm, arranged in a hexagonal grid. We then placed the SEIs at a distance g = 3.0 mm from the MLA (see Fig. 11).

Fig. 11 Experimental setup for the observation of the hexagonal InI monitor. After displacing the camera horizontally in steps of 10 mm, we recorded 35 different perspectives of the displayed 3D scene.

Up to 35 different perspectives of the displayed 3D image were captured with a digital camera placed at L = 700 mm. The camera was moved in the horizontal direction in steps of 10 mm. The captured perspectives are shown in Fig. 12.

Fig. 12 Two perspectives of the 3D scene displayed in the real hexagonal experiment. In the video (Media 4) we show the movie built with the perspectives photographed with the digital camera.

One interesting issue to discuss here is how SPOC affects the resolution of the reconstructed images. There is no single answer, since it depends on the algorithm parameters. Because it is not possible to increase the image bandwidth through a mere rearrangement of pixel information, no increase of image resolution is possible with SPOC; if we do not want to lose resolution, the parameters of the algorithm must be selected carefully.

A different issue is the resolution of the reconstructed image as perceived when the observer looks at the display. As explained in [15], such viewing resolution is mainly determined by the microlens pitch. In this sense, we can state that the proper use of SPOC can help to increase the viewing resolution significantly.

5. Conclusions

We have demonstrated the SPOC algorithm, which allows full control over the optical display parameters of InI monitors. Specifically, we have shown that, from a given collection of elemental images, one can create a new set of SEIs ready to be displayed in an InI monitor in which the pitch, the microlens focal length, the number of pixels per elemental cell, the depth position of the reference plane, and even the grid geometry of the MLA can be selected to fit the conditions of the display architecture.

Acknowledgements

This work was supported in part by the Plan Nacional I+D+I under Grant FIS2009-9135, Ministerio de Ciencia e Innovación, Spain, and also by Generalitat Valenciana under Grant PROMETEO2009-077. Héctor Navarro gratefully acknowledges funding from the Generalitat Valenciana (VALi+d predoctoral contract).

References and Links

1. G. Lippmann, “Epreuves reversibles donnant la sensation du relief,” J. Phys. 7, 821–825 (1908).

2. S.-H. Hong, J.-S. Jang, and B. Javidi, “Three-dimensional volumetric object reconstruction using computational integral imaging,” Opt. Express 12(3), 483–491 (2004).

3. J.-H. Park, K. Hong, and B. Lee, “Recent progress in three-dimensional information processing based on integral imaging,” Appl. Opt. 48(34), H77–H94 (2009).

4. B. Javidi, R. Ponce-Díaz, and S.-H. Hong, “Three-dimensional recognition of occluded objects by using computational integral imaging,” Opt. Lett. 31(8), 1106–1108 (2006).

5. B. Heigl, R. Koch, M. Pollefeys, J. Denzler, and L. Van Gool, “Plenoptic modeling and rendering from image sequences taken by hand-held camera,” Proc. DAGM, 94–101 (1999).

6. S. Yeom, B. Javidi, and E. Watson, “Photon counting passive 3D image sensing for automatic target recognition,” Opt. Express 13(23), 9310–9330 (2005).

7. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, “Real-time pickup method for a three-dimensional image based on integral photography,” Appl. Opt. 36(7), 1598–1603 (1997).

8. M. Martínez-Corral, B. Javidi, R. Martínez-Cuenca, and G. Saavedra, “Formation of real, orthoscopic integral images by smart pixel mapping,” Opt. Express 13(23), 9175–9180 (2005).

9. D.-H. Shin, C.-W. Tan, B.-G. Lee, J.-J. Lee, and E.-S. Kim, “Resolution-enhanced three-dimensional image reconstruction by use of smart pixel mapping in computational integral imaging,” Appl. Opt. 47(35), 6656–6665 (2008).

10. M. Zhang, Y. Piao, and E.-S. Kim, “Occlusion-removed scheme using depth-reversed method in computational integral imaging,” Appl. Opt. 49(14), 2571–2580 (2010).

11. T.-Ch. Wei, D.-H. Shin, and B.-G. Lee, “Resolution-enhanced reconstruction of 3D object using depth-reversed elemental images for partially occluded object recognition,” J. Opt. Soc. Korea 13(1), 139–145 (2009).

12. D.-H. Shin, B.-G. Lee, and E.-S. Kim, “Modified smart pixel mapping method for displaying orthoscopic 3D images in integral imaging,” Opt. Lasers Eng. 47(11), 1189–1194 (2009).

13. J. Arai, H. Kawai, M. Kawakita, and F. Okano, “Depth-control method for integral imaging,” Opt. Lett. 33(3), 279–281 (2008).

14. D.-Ch. Hwang, J.-S. Park, S.-Ch. Kim, D.-H. Shin, and E.-S. Kim, “Magnification of 3D reconstructed images in integral imaging using an intermediate-view reconstruction technique,” Appl. Opt. 45(19), 4631–4637 (2006).

15. H. Navarro, R. Martínez-Cuenca, A. Molina-Martín, M. Martínez-Corral, G. Saavedra, and B. Javidi, “Method to remedy image degradations due to facet braiding in 3D integral imaging monitors,” J. Display Technol. 6(10), 404–411 (2010).

16. J. S. Jang and B. Javidi, “Three-dimensional synthetic aperture integral imaging,” Opt. Lett. 27(13), 1144–1146 (2002).

17. J. Arai, H. Kawai, and F. Okano, “Microlens arrays for integral imaging system,” Appl. Opt. 45(36), 9066–9078 (2006).

Supplementary Material (4)

Media 1: AVI (581 KB)
Media 2: AVI (389 KB)
Media 3: AVI (283 KB)
Media 4: AVI (592 KB)
