Abstract

In this paper, we introduce an improved signal analysis of the computational integral imaging (CII) system, which consists of a pickup process for three-dimensional objects and a volumetric computational reconstruction (VCR) process. We propose a signal model for the CII system. From this model and its analysis, we define a granular noise caused by non-uniform overlapping and analyze its characteristics. According to our model and analysis, there is a condition under which the granular noise cancels out. To show the feasibility of our model, preliminary experiments were carried out and the results are presented. To our knowledge, this is the first signal model provided for the analysis of CII systems.

©2007 Optical Society of America

1. Introduction

Integral imaging (II) is a sensing and display technique for the visualization of true three-dimensional (3-D) images. It was first proposed by Lippmann in 1908 and has been actively studied in the literature on 3-D image capturing and visualization [1–9]. The II system records an image of a 3-D object using a lenslet array and an image sensor. The recorded images, referred to as elemental images, are demagnified images, each with its own perspective through its lenslet. II reconstruction of a 3-D image from elemental images is the reverse of the recording process.

In II, reconstruction techniques can be classified into two categories: optical reconstruction [2–9] and volumetric computational reconstruction (VCR) [10–17]. Optical reconstruction suffers from degraded image quality of the displayed 3-D images caused by physical limitations of optical devices, such as diffraction and aberration. To overcome these drawbacks, a VCR technique employing a pinhole-array model has been introduced in the computational II (CII) system [12]. However, one serious drawback of VCR is a possible artifact due to non-uniform overlapping [12–14], which degrades the visual quality of reconstructed 3-D images. To reduce the artifact, some techniques have been proposed [13, 14]. One is based on a lenslet model that improves the resolution of the reconstructed 3-D images by supplying sufficient overlapping of elemental images [13]. The other is based on a compensation process that normalizes by the number of overlapped elemental images, together with a moving-array-lenslet technique [14]. However, these works did not provide a detailed analysis of the artifact, and the additional compensation process they proposed may increase the computational cost of VCR.

Recently, various VCR-based CII applications, such as 3-D object recognition and a 3-D correlator, have been reported [15–17]. In 2006, Javidi et al. proposed a concept of VCR-based correlation to recognize partially occluded 3-D objects [15, 16] and demonstrated good performance of the CII system. Also, in 2007, a novel VCR-based 3-D image correlator to detect the 3-D location coordinates of objects in space was proposed [17]. Even though the usefulness of the CII system has been shown, its detailed analysis has not been reported yet.

In this paper, we analyze computational integral imaging. A signal model for the CII system is proposed; to our knowledge, this is the first analysis of the noise characteristics of CII based on a signal model in the literature. From the signal model and its analysis, we define a granular noise caused by the non-uniform overlapping and analyze its characteristics. According to our model and analysis, there is a condition under which the granular noise cancels out. To show the feasibility of our model, preliminary experiments were carried out and the results are presented.

2. Overview of VCR in computational integral imaging

The CII system shown in Fig. 1 is composed of two processes: the pickup of 3-D objects and the VCR from elemental images. In the pickup process of CII, elemental images are recorded by use of a lenslet array and an image sensor. In the VCR process, 3-D images are digitally reconstructed from the elemental images by a computer, where they can easily be reconstructed at any output plane without optical devices. As shown in Fig. 1(b), the elemental images are inversely mapped onto the image plane. Each elemental image is magnified by a factor of z/g, where z is the distance between the reconstruction image plane and the virtual pinhole array and g is the distance between the elemental images and the virtual pinhole array. The magnified elemental images are superimposed on each other, and the reconstructed 3-D image is finally produced at the reconstruction plane z. To obtain the complete volumetric reconstruction of a 3-D object, this process is repeated along the z-axis.
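As a rough illustration, the VCR process just described can be sketched in a few lines of Python under simplifying assumptions (1-D signals, a pinhole model, nearest-neighbour magnification, integer window size); the function name and array layout here are our own, not part of the original system.

```python
import numpy as np

def vcr_1d(elemental, M):
    """1-D volumetric computational reconstruction (sketch).

    elemental : (N, a) array of N elemental signals, each of length a.
    M         : magnification factor z/g of the reconstruction plane.
    Each elemental signal is magnified to w = a*M samples, shifted by
    i*a, and the magnified copies are superimposed.
    """
    N, a = elemental.shape
    w = int(round(a * M))                      # magnified window size
    recon = np.zeros((N - 1) * a + w)
    for i in range(N):
        # nearest-neighbour magnification of the i-th elemental signal
        idx = np.minimum((np.arange(w) * a) // w, a - 1)
        recon[i * a : i * a + w] += elemental[i, idx]
    return recon

# With an integer magnification (M = 3) and constant elemental signals,
# the interior of the reconstruction is uniform (overlap count 3).
r = vcr_1d(np.ones((9, 4)), 3)
```

Repeating this for a range of M = z/g values yields the volumetric reconstruction along the z-axis.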

Fig. 1. The CII system. (a) Pickup process (b) VCR process.

3. Signal model for the analysis of the CII system

To analyze the overall CII system in detail, we construct a one-dimensional (1-D) signal model for CII. This model has a direct counterpart in multidimensional systems and is thus easily applied to the CII system. Let fz(x) be an intensity signal in the image plane located at a distance z from the lens array, as depicted in Fig. 2. The signal fz(x) is windowed and inversely mapped through the lens array into the elemental images. This process is referred to as the pickup process of the elemental images in integral imaging. To reconstruct a signal rz(x) in the reconstructed image plane located at a distance z from the virtual pinhole array, the elemental images are inversely mapped and magnified by a factor of z/g, and the magnified elemental images are then superimposed on each other in the reconstructed image plane. This reconstruction method is the so-called VCR method.

Fig. 2. Signal model for the analysis of the CII system.

We now focus on the relationship between the signal fz(x) and the reconstructed signal rz(x). rz(x) is considered a linear sum of windowed (or truncated) versions of fz(x), as illustrated in Fig. 3. The pickup process provides elemental images, each of which is a windowed, inverted, and downscaled version of the signal fz(x). In the VCR process, these elemental images are again inverted and upscaled before being superimposed. Thus, the effects of inversion and scaling cancel, and only the effect of windowing remains. Hence, the lens array and the virtual pinhole array in Fig. 2 can be eliminated, and our model simplifies as shown in Fig. 3. A reconstructed signal in VCR is therefore a sum of windowed versions of the original signal.

Fig. 3. Illustration of relationship between fz(x) and rz(x).

Figure 3 provides a relationship between the signal fz(x) and the reconstructed signal rz(x) in the form

$$r_z(x)=\sum_{i=0}^{N-1} f_z(x)\,\pi_i\!\left(\frac{x}{w}\right)=f_z(x)\sum_{i=0}^{N-1}\pi_i\!\left(\frac{x}{w}\right)=f_z(x)\,S_\pi(x), \tag{1}$$

where

$$\pi_i\!\left(\frac{x}{w}\right)=\pi_0\!\left(\frac{x-s}{w}\right),$$

N denotes the number of elemental images, and w denotes the size of the windows. Here, Eq. (1) can easily be converted into a discrete version by substituting xk for x, where k is a sampling point of x. A shifted window function (SWF) πi(x) is a shifted version of the rectangular window function π0(x), which equals one in the interval [0, 1] and zero outside. Note that the shifting factor s in a SWF is a multiple of the elemental signal (image) length a, that is, s = ia, as illustrated in Fig. 3. Equation (1) states that the reconstructed signal rz(x) reproduces the original signal fz(x) if the sum of SWFs, denoted by Sπ(x), is one or constant. In other words, Sπ(x) is the ratio of the reconstructed signal to the original signal fz(x). Hence, Sπ(x) can be considered the response function of a CII system, and its characteristics deserve a more precise analysis.
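In the discrete setting, the response function Sπ(x) can be tabulated directly. The sketch below (names are our own) counts how many shifted windows of size w, shifted in steps of a, cover each sample; when w is a multiple of a the non-transition region is constant, and otherwise it is not.

```python
import numpy as np

def response_function(N, a, w):
    """S_pi(x): number of shifted window functions covering sample x.

    There are N windows; window i covers the interval [i*a, i*a + w).
    """
    S = np.zeros((N - 1) * a + w, dtype=int)
    for i in range(N):
        S[i * a : i * a + w] += 1
    return S

# w a multiple of a: the non-transition region of S_pi is constant.
S_flat = response_function(9, a=4, w=12)
# w not a multiple of a: the non-transition region alternates between
# two integer levels, so S_pi is no longer constant.
S_grain = response_function(9, a=4, w=14)
```

The non-constant case is exactly the non-uniform overlapping analyzed in the next section.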

Figure 4 illustrates an example of the superposition, or sum, of nine SWFs. The size of each shifted window is w, and the shifting factor is a multiple of a. The resulting signal Sπ(x) has two regions: a transition region (a portion of the non-homogeneous response function) and a non-transition region (a portion of the homogeneous response function). At the boundaries of Sπ(x), a transition occurs because too few windows overlap there. This transition is treated as trivial in this paper.

Fig. 4. (a) Illustration of the sum of nine SWFs; (b) lower uniform component and positive granular component; and (c) upper uniform component and negative granular component of the signal Sπ(x) in the non-transition region.

4. Granular noise (GN) and GN-free condition

The response function Sπ(x) in the non-transition region may not be uniform or constant, as illustrated in Fig. 4. This non-uniformity arises from the fact that the number of SWFs participating in the superposition differs along the x-axis. We call this number the overlapping number; it is an integer given by floor(w/a) or ceil(w/a). Based on this property, we define the uniform components and the granular components as depicted in Figs. 4(b) and 4(c). The uniform components have two types: a lower-level one and an upper-level one. The related granular components are positive and negative, respectively. The granular components are an unwanted signal because, according to Eq. (1), the response function is desired to be constant. Thus we call these granular components the granular noise; its detailed definition is given in Section 5.

Let the value b be the remainder after dividing w by a, as illustrated in Fig. 4(a). This relation is defined by

$$w=na+b=a\left(n+\frac{b}{a}\right), \tag{2}$$

where n is the integer given by floor(w/a) and b is a real number in the range [0, a). Note that n equals the level of the lower uniform component and is related to the overlapping number. From Eq. (2), it is easily seen that the reconstruction is free of granular noise when b equals zero, in other words, when the size w is a multiple of a. Here, we consider the geometric relation between the window size w and the distance z at which the reconstructed image plane is located, as depicted in Fig. 2. The relation has the form

$$w=a\,\frac{z}{g}=aM. \tag{3}$$

It states that the elemental signal (image) of length a is magnified by a factor of M = z/g, so the corresponding window size w has to be aM. From Eqs. (2) and (3), we obtain z/g = n + b/a, which implies that b is zero if the ratio z/g is an integer. If b is zero, the levels of the two uniform components are equal and the granular component, referring to Fig. 4(b), has to be zero. Therefore, the GN-free condition in VCR is achieved when the ratio z/g, that is, the magnification factor M, is an integer. In fact, a 3-D object can never be free of granular noise at all of its depths. However, the GN-free images generated at distances where z/g is an integer may be enough to construct volume data if the value g is much smaller than the depth of the 3-D object; the intermediate images between two consecutive GN-free images can then be obtained by interpolation. Moreover, g is typically very small, which makes the GN-free condition practical. To facilitate the understanding of the granular noise, a set of 2-D reconstructed images is shown in Fig. 5. Figures 5(b) and 5(c) show reconstructed images when the GN-free condition is not satisfied and when it is satisfied, respectively. It can easily be seen that the reconstructed image of Fig. 5(b) suffers from the granular noise.
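The GN-free condition can be checked with a few lines of code. This sketch (helper names are our own) computes n and b of Eq. (2) from integer sample sizes and tests whether z/g is an integer; the float modulo used in gn_free is adequate for a sketch but would need a tolerance in production code.

```python
def overlap_levels(w, a):
    """Return (n, b) of Eq. (2): w = n*a + b with 0 <= b < a."""
    return divmod(w, a)

def gn_free(z, g):
    """GN-free condition: the magnification M = z/g is an integer,
    equivalently b = 0 in Eq. (2) for w = a*z/g."""
    return z % g == 0

# z = 3.5g reconstructs with granular noise; z = 4g is GN-free,
# matching the two reconstructions shown in Fig. 5.
assert not gn_free(3.5, 1.0)
assert gn_free(4.0, 1.0)
```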

Fig. 5. Experimental results: (a) set of elemental images; (b) reconstructed image at distance z=3.5g, where the original object "tree" is located, which suffers from the granular noise; (c) reconstructed image at distance z=4g, free of the granular noise but slightly defocused.

If the ratio z/g is not an integer, granular noise appears in the images reconstructed by VCR. In this case, a compensation process is required to eliminate the granular noise. To derive the compensation formula, we rewrite Eq. (1) as

$$f_z(x)=\frac{r_z(x)}{S_\pi(x)}. \tag{4}$$

The compensation is accomplished by dividing the reconstructed signal rz(x) by the function Sπ(x). This additional step increases the computational cost of VCR: division is more expensive than the summations of the superposition process. Although the complexity of the compensation may be small compared with that of the magnification process in VCR, it still exceeds that of the superposition. For example, assume that the magnification factor is M and a reconstructed image has K×K pixels. Then about M×K×K additions are needed for the superposition. If the compensation is required, it can be implemented either purely by calculation or by calculation with a memory; we discuss the latter, since it is easier to implement. A memory of size K×K is required to store the overlapping numbers in the reconstructed image plane, because they differ regionally. Evaluating the overlapping numbers requires counters during the superposition, about M×K×K counter operations in total, and the compensation is then done by dividing the superimposed image by the stored overlapping numbers. Thus, the additional operations are M×K×K memory accesses and M×K×K counter operations, besides the divisions. In embedded systems, memory accesses are much more expensive than additions or counting because external memory is much slower than the CPU clock, so it is desirable to avoid the compensation process in real-time applications. The GN-free condition may therefore reduce both computation and memory in VCR by eliminating the compensation.
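The compensation itself amounts to a per-pixel normalization by the overlap count. A minimal sketch (our own naming), assuming the overlap-count map S has already been accumulated during the superposition as described above:

```python
import numpy as np

def compensate(recon, S):
    """Divide the superimposed reconstruction by the overlap-count map
    S_pi (the compensation f_z = r_z / S_pi), guarding against division
    by zero outside the covered region."""
    out = np.zeros_like(recon, dtype=float)
    covered = S > 0
    out[covered] = recon[covered] / S[covered]
    return out

# Toy example: per-pixel overlap counts and accumulated intensities.
S = np.array([1, 2, 3, 0])
r = np.array([2.0, 4.0, 9.0, 0.0])
out = compensate(r, S)  # normalized intensities: [2.0, 2.0, 3.0, 0.0]
```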

5. Experiments and discussions

The granular components (GCs) introduced in this paper are periodic signals with period a, as depicted in Figs. 4(a)–4(c). The power of the positive GC increases as b increases, whereas that of the negative GC decreases as b increases. When b is in the range (0, 0.5a], the power of the positive GC is less than that of the negative GC; when b is in the range [0.5a, a), the power of the negative GC is less than that of the positive GC. Based on this, we define the granular noise as the granular component of smaller power:

$$\mathrm{gn}(x)=\begin{cases} S_\pi(x)-\left\lfloor \dfrac{w}{a}\right\rfloor, & 0<b\le 0.5a,\\[4pt] \left\lceil \dfrac{w}{a}\right\rceil - S_\pi(x), & 0.5a\le b<a. \end{cases} \tag{5}$$

The uniform component is given by round(w/a), or equivalently round(z/g), which is the number of overlapped signals. To understand the nature of the granular noise, we plot the power of the granular noise along the z-axis in Fig. 6(a) and the power of the uniform component along the z-axis in Fig. 6(b). Referring to Fig. 6(a), the power of the granular noise is periodic along the z-axis and equals zero at the integer positions of the scaled z-axis; its maxima occur at the half-integer positions. The power of the uniform component increases with the square of the overlapping number, as shown in Fig. 6(b). Figure 6(c) shows the uniform-signal-to-noise ratio (USNR), which is the ratio of the power of the uniform component to that of the granular noise, as defined by

$$\mathrm{USNR}=10\log_{10}\frac{P_U}{P_{gn}}=10\log_{10}\frac{\mathrm{round}(w/a)^2}{\dfrac{1}{a}\displaystyle\int_0^a \mathrm{gn}(x)^2\,dx}, \tag{6}$$

where PU denotes the power of the uniform component and Pgn denotes the power of the granular noise.

Let us investigate two cases: a short-distance case (e.g., z/g = 3.4) and a relatively long-distance case (e.g., z/g = 10.4). A reconstructed image located at the short distance z = 3.4g has USNR = 13.52 dB, whereas a reconstructed image located at the long distance z = 10.4g has USNR = 23.98 dB, as Fig. 6(c) indicates. The granular-noise effect is therefore more noticeable in a reconstructed image located at a short distance.
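The two USNR values above can be reproduced numerically. In the non-transition region, one period of Sπ(x) takes the upper level on an interval of length b and the lower level elsewhere, so the integral in the USNR formula reduces to a mean over one period. The sketch below (our own discretization; w and a given in integer samples) evaluates it:

```python
import numpy as np

def usnr_db(w, a):
    """USNR = 10*log10(P_U / P_gn) for window size w and elemental
    length a, both in integer samples. Note that b = 0 (the GN-free
    case) makes P_gn zero and the USNR unbounded."""
    n, b = divmod(w, a)                    # w = n*a + b, Eq. (2)
    x = np.arange(a)                       # one period of S_pi
    S = np.where(x < b, n + 1, n)          # ceil(w/a) on length b, else floor(w/a)
    # granular noise: deviation from the nearer uniform level
    gn = np.where(b <= 0.5 * a, S - n, (n + 1) - S)
    P_u = round(w / a) ** 2                # power of the uniform component
    P_gn = np.mean(gn.astype(float) ** 2)  # mean power over one period
    return 10 * np.log10(P_u / P_gn)

# z/g = 3.4  (w = 3.4a):  usnr_db(340, 100)  -> 13.52 dB
# z/g = 10.4 (w = 10.4a): usnr_db(1040, 100) -> 23.98 dB
```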

Fig. 6. Power of (a) the granular noise and (b) the uniform component, and (c) USNR, along the z-axis.

6. Conclusions and future work

We have introduced an improved signal analysis of computational integral imaging. A signal model for the pickup process and the VCR process in the CII system was proposed. From our signal model, a granular noise was defined, and our analysis provided the granular-noise-free (GN-free) condition. We also analyzed the characteristics of the granular noise in detail. The proposed model can be applied to the optical pickup of real 3-D objects because a 3-D object can be considered a set of sampled plane images. Further, we expect our model to serve as a basis for analyzing various CII systems more precisely. A CII system including VCR can be expressed in terms of a response function that is a summation of SWFs; thus, the SWF may play a key role in determining the visual quality of the CII system. As future work, other SWFs will therefore be applied to VCR and analyzed on the basis of visual quality.

7. Acknowledgment

The authors wish to express their sincere gratitude to the anonymous reviewers for their valuable comments and suggestions. This work was supported in part by a grant from the Dongseo University and in part by a grant from the post Brain Korea 21 project.

References and links

1. G. Lippmann, “La photographie intégrale,” C. R. Acad. Sci. 146, 446–451 (1908).

2. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, “Three-dimensional video system based on integral photography,” Opt. Eng. 38, 1072–1077 (1999). [CrossRef]  

3. B. Javidi and F. Okano, eds., Three dimensional television, video, and display technologies, (Springer Verlag Berlin, 2002).

4. J.-S. Jang and B. Javidi, “Improved viewing resolution of three- dimensional integral imaging by use of nonstationary micro-optics,” Opt. Lett. 27, 324–326 (2002). [CrossRef]  

5. B. Lee, S. Y. Jung, S.-W. Min, and J.-H. Park, “Three-dimensional display by use of integral photography with dynamically variable image planes,” Opt. Lett. 26, 1481–1482 (2001). [CrossRef]  

6. M. Martínez-Corral, B. Javidi, R. Martínez-Cuenca, and G. Saavedra, “Multifacet structure of observed reconstructed integral images,” J. Opt. Soc. Am. A 22, 597–603 (2005). [CrossRef]  

7. A. Stern and B. Javidi, “3D image sensing, visualization, and processing using integral imaging,” Proc. IEEE 94, 591–608 (2006). [CrossRef]  

8. J. S. Jang and B. Javidi, “Depth and size control of three-dimensional images in projection integral imaging,” Opt. Express 12, 3778–3790 (2004). [CrossRef]   [PubMed]  

9. D. -H. Shin, B. Lee, and E. -S. Kim, “Multidirectional curved integral imaging with large depth by additional use of a large-aperture lens,” Appl. Opt. 45, 7375–7381 (2006). [CrossRef]   [PubMed]  

10. H. Arimoto and B. Javidi, “Integral three-dimensional imaging with digital reconstruction,” Opt. Lett. 26, 157–159 (2001). [CrossRef]  

11. A. Stern and B. Javidi, “3D image sensing and reconstruction with time-division multiplexed computational integral imaging,” Appl. Opt. 42, 7036–7042 (2003). [CrossRef]   [PubMed]  

12. S. -H. Hong, J. -S. Jang, and B. Javidi, “Three-dimensional volumetric object reconstruction using computational integral imaging,” Opt. Express 12, 483–491 (2004). [CrossRef]   [PubMed]  

13. D.-H. Shin, E.-S. Kim, and B. Lee, “Computational reconstruction technique of three-dimensional object in integral imaging using a lenslet array,” Jpn. J. Appl. Phys. 44, 8016–8018 (2005). [CrossRef]  

14. S. -H. Hong and B. Javidi, “Improved resolution 3D object reconstruction using computational integral imaging with time multiplexing,” Opt. Express 12, 4579–4588 (2004). [CrossRef]   [PubMed]  

15. B. Javidi, R. Ponce-Díaz, and S. -H. Hong, “Three-dimensional recognition of occluded objects by using computational integral imaging,” Opt. Lett. 31, 1106–1108 (2006). [CrossRef]   [PubMed]  

16. S. -H. Hong and B. Javidi, “Distortion-tolerant 3D recognition of occluded objects using computational integral imaging,” Opt. Express 14, 12085–12095 (2006). [CrossRef]   [PubMed]  

17. J.-S. Park, D.-C. Hwang, D.-H. Shin, and E.-S. Kim, “Resolution-enhanced three-dimensional image correlator using computationally reconstructed integral images,” Opt. Commun. 26, 72–79 (2007). [CrossRef]  
