Abstract

We introduce a projection-type light field display featuring effective light modulation. By combining a tomographic display with integral imaging (InIm) technology, we propose a novel optical design that realizes an autostereoscopic light field projector. Here, the tomographic approach generates a high-resolution volumetric scene, and InIm enables the volumetric scene to be reconstructed on a large screen through projection. Since all the processes are realized optically, without digital processing, our system can overcome the performance limitations associated with the number of pixels in conventional InIm displays. We built a prototype display and demonstrated that our optical design has the potential for massive resolution with full parallax in a single device.

© 2021 Optical Society of America

Interest in glasses-free three-dimensional (3D) displays over the past decades has led to the development of light field displays [1–8]. A light field display can modulate both the direction and intensity of light, which provides the capability to reconstruct 3D objects in free space. However, it is inherently restricted by the information capacity of a flat panel display, defined by its number of pixels. Typically, representing a single point requires a bundle of converging-diverging rays. Since rays and pixels are matched one-to-one, each reconstructed point consumes as many pixels as rays, reducing the information available for other points. This creates a fundamental trade-off between spatial and angular resolution.
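
To make the trade-off concrete, here is a minimal sketch with hypothetical panel and view counts (illustrative numbers of ours, not taken from any system in this Letter):

```python
# A flat panel's fixed pixel budget is split between spatial points and
# angular views: spending V views per point divides the spatial resolution.
panel_x, panel_y = 1920, 1080     # hypothetical FHD panel
views_x, views_y = 10, 10         # hypothetical views spent per 3D point

print(f"{panel_x // views_x} x {panel_y // views_y} spatial points, "
      f"{views_x * views_y} views each")    # 192 x 108 points, 100 views
```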

A critical issue in light field displays is the enormous amount of information required to achieve sufficient visual quality, full parallax, and depth. This may be the prime reason why light field displays have been difficult to put into practical use. To increase the information capacity and extend the 3D volume, spatially multiplexed systems that stack multiple two-dimensional (2D) displays in the depth direction have been proposed [1,2]. Furthermore, approaches that computationally optimize the light field have been described, which can condense the spatial-angular information effectively [3,4]. However, such stacked structures require a bulky volume and a large number of display panels to be implemented on a large scale.

In contrast, projection-based light field systems have the advantage of being scalable at will [5,6]. Using multiple projectors can easily increase the information capacity [7,8]. Integral imaging (InIm) is the most straightforward and suitable method for a projection-based light field. In InIm, angular information is recorded and reproduced in the form of spatial information called an elemental image (EI), as shown in Fig. 1(a). The problem is that the amount of information is limited by the spatial resolution of the pickup sensor or display device. Moreover, redundant information arises in expressing the angular information of a 3D scene, as shown in Fig. 1(b). As a result, InIm yields significantly lower image quality than the native resolution of the display panel. We believe the performance of InIm projection can be improved by removing this repeated and inefficiently used information.

Fig. 1. (a) The two basic processes of the InIm system: pickup and reconstruction. (b) The number of picked-up points increases with distance. (c) Simplified concept diagram of the optical structure. It consists of four steps, optically combining a multifocal display with integral imaging. Note that ${z_1}$ can be negative, which corresponds to pickup as a virtual image.

Here, we introduce a novel projection-type light field display that can effectively transfer spatial-angular information to a large screen. Figure 1(c) shows the principle of the light field optical transmission, which consists of four steps: light field generation, pickup, projection, and reconstruction. The main idea is to optically connect all the processes using a projection system. With this configuration, the automatically mapped EI plays a key role in avoiding the inefficient use of information that underlies the trade-off relationship in InIm. Furthermore, unlike previous InIm systems, the proposed design prevents hardware-related information reduction at the capture and display stages and allows the EI to be supersampled without discrete division.

Fig. 2. Ray-tracing results of the proposed method. (a)–(f) Two-dimensional light field simulation results for a single plane. The horizontal axis represents the $x$-direction position of the light field, and the vertical axis represents the tangent value of its angle. The light field emanating from a pixel has the same color. The center pixel is marked in black to trace the shape of the light field. Illustrations of the changes in spot size during the (g) pickup and (h) reconstruction processes. (i) Maximum resolution of the system corresponding to the Rayleigh criterion for a wavelength of 550 nm. Our prototype has a pickup area of $2.4\;{\rm{cm}}^2$ at $F/4$.

For the first step, we adopt the tomographic display to generate a light field. This method can produce a volumetric scene over a wide depth range by creating dozens of planes placed at different depths [9–12]. The multifocal planes (MFPs) are generated from a red/green/blue/depth (RGB-D) image by synchronizing a binary backlight with a focus-tunable lens (FTL). Here, we utilize the FTL as an aperture stop for the MFPs. While the FTL sets the floating position to ${z_1} = f_{{\rm{RL}}_2}^2/{f_{\rm{FTL}}}$, each focal plane has the same divergence angle and size owing to the telecentric relay [13]. Then, in the next step, the synthesized light field from the MFPs is picked up by a microlens array (MiLA). In the ${{\rm{EI}}_1}$ plane, the overlap between lenslets can be alleviated by matching the $F$-number ($F\#$) of the MFPs to that of the MiLA. Third, through the projection lens, the ${{\rm{EI}}_1}$ is enlarged by the magnification factor $M$ onto the ${{\rm{EI}}_2}$ plane, where we place the screen. As in previous projection InIm techniques, either a screen or a direct projection method can be used here; the difference between the two has been well described in terms of parameters such as fill factor and depth range [14,15]. As the final step, the light field after the screen is reconstructed as it passes through the macrolens array (MaLA), in the reverse of the pickup process.
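
As a numeric sketch of this step, using the prototype values reported below ($f_{{\rm{RL}}_2} = 40$ mm and the 10 mm FTL aperture); the FTL focal lengths are illustrative choices of ours, not values from the text:

```python
# The FTL is the aperture stop; with the telecentric relay, the floating
# position of each focal plane is z1 = f_RL2**2 / f_FTL.
f_rl2_mm = 40.0          # relay lens RL2 focal length (prototype value)
ftl_aperture_mm = 10.0   # FTL clear aperture (prototype value)

# F-number of the MFPs, matched to the F/4 MiLA:
print(f"MFP F-number: F/{f_rl2_mm / ftl_aperture_mm:.0f}")   # F/4

for f_ftl_mm in (200.0, 400.0, -200.0):     # illustrative FTL focal lengths
    z1_mm = f_rl2_mm**2 / f_ftl_mm
    # negative f_FTL gives negative z1, i.e., pickup as a virtual image
    print(f"f_FTL = {f_ftl_mm:+6.0f} mm -> z1 = {z1_mm:+.1f} mm")
```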

Generally, bringing a volumetric scene to a big screen is difficult because the wider the area after magnification, the narrower the angle of each display pixel [16]. However, we utilize InIm techniques while projecting the volumetric scene. This allows the angular information of the volumetric scene to be converted into spatial information during the pickup process. Accordingly, even though the divergence angle of each pixel is reduced when projected, it is restored when the EI is converted back into angular information during reconstruction. In other words, our approach not only avoids the information loss of InIm but also effectively solves the enlargement problem of the volumetric scene.

Figures 2(a)–2(f) show the light field analysis of the proposed system for a single focal plane. We consider only a single dimension $x$ for simplicity and analyze the light field as an ordered pair of position and angle. First, the light field has a rectangular shape with a spatial length of ${L_x}$ and a tangent value of $1/F{\#_1}$, where $F{\#_1}$ is the $F\#$ of the MiLA. After propagation to the MiLA, the light field is transformed into a parallelogram, as shown in Fig. 2(b). Then, the light field is divided spatially at the interval $D$, the MiLA pitch, and the maximum tangent value is doubled. Here, the MiLA optically arranges the pixels in the ${{\rm{EI}}_1}$ plane according to the spatial position of each lenslet. At the screen plane, the light field's divergence angle, which decreases during magnification, is expanded again by diffusion. Since the screen is placed at the focal length of the MaLA in the focused mode [17], the light field after the MaLA has a rectangular shape, as shown in the inset of Fig. 2(e). Then, after propagating a distance ${z_2}$, the MaLA reproduces the focal plane, where each display pixel appears in sampled form, as shown in Fig. 2(f). The distance ${z_2}$ is calculated as ${z_1}MF{\#_2}/F{\#_1}$ from the geometric relationship. Because the volumetric depth is proportional to $F{\#_2}/F{\#_1}$ and the viewing angle is proportional to $1/F{\#_2}$, these two parameters can be customized by adjusting the $F\#$ between pickup and reconstruction.
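
These geometric relations can be checked numerically with the prototype parameters reported below ($F{\#_1} = 4$, $F{\#_2} = 5$, $M = 16$); a minimal sketch:

```python
# Reconstruction distance z2 = z1 * M * F#2 / F#1, and the depth scaling
# of the reconstructed volume, which is proportional to M * F#2 / F#1.
f_num_1, f_num_2 = 4.0, 5.0    # MiLA (pickup) and MaLA (reconstruction) F#
M = 16.0                       # projection magnification

def z2_mm(z1_mm: float) -> float:
    return z1_mm * M * f_num_2 / f_num_1

for z1 in (-8.0, 8.0):                       # prototype pickup range
    print(f"z1 = {z1:+.0f} mm -> z2 = {z2_mm(z1) / 10:+.0f} cm")   # -16, +16

depth_scale = M * f_num_2 / f_num_1          # = 20
print(f"16 mm pickup depth -> {16.0 * depth_scale / 10:.0f} cm")   # 32 cm
```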

We evaluate the system performance by deriving the point size at the reconstruction plane based on ray optics. As shown in Fig. 2(g), the blur spot size $\rho$ at the ${{\rm{EI}}_1}$ plane is $D{f_1}/{z_1}$ when picking up a point at distance ${z_1}$. After projection and scattering, the blur spot reconstructs the point at distance ${z_2}$, as shown in Fig. 2(h). The lateral size of the reconstructed point is $MD + M\rho {z_2}/{f_2}$. Substituting $\rho$ and ${z_2}$, the reconstructed point size is a constant $2MD$, regardless of the pickup distance ${z_1}$. However, the blur spot cannot be smaller than the diffraction limit of the MiLA in the pickup process. Considering diffraction, the minimum spot ${\rho_m}$ formed by a plane wave is $2.44\lambda F{\#_1}$. According to the Rayleigh criterion, the maximum spatial resolution of the system is $4A/{\rho_m}^2$, where $A$ is the pickup area. By increasing the numerical aperture of the MiLA and the pickup area, our optical design enables a large-scale volumetric display equivalent to an InIm of megapixels or more, as shown in Fig. 2(i). For instance, using a typical projection lens with a pickup area of $24\;{\rm{mm}} \times 36\;{\rm{mm}}$ at $F/1.2$, a volumetric display with a resolution of up to 1.3 gigapixels is feasible. For the experiment, due to the lack of a suitable off-the-shelf MaLA, the pickup area of the prototype was set to $15.4\;{\rm{mm}} \times 15.4\;{\rm{mm}}$.
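
The quoted resolution bounds follow directly from these formulas; a short sketch reproducing both figures:

```python
# Maximum spatial resolution 4A / rho_m**2, where rho_m = 2.44 * lambda * F#1
# is the diffraction-limited spot (Rayleigh criterion) of the pickup MiLA.
WAVELENGTH_M = 550e-9          # wavelength used in Fig. 2(i)

def max_resolution(area_m2: float, f_number: float) -> float:
    rho_m = 2.44 * WAVELENGTH_M * f_number
    return 4 * area_m2 / rho_m**2

# Typical projection lens: 24 mm x 36 mm pickup area at F/1.2
print(f"{max_resolution(0.024 * 0.036, 1.2) / 1e9:.1f} gigapixels")   # 1.3

# Prototype: 15.4 mm x 15.4 mm pickup area at F/4
print(f"{max_resolution(0.0154 * 0.0154, 4.0) / 1e6:.0f} megapixels") # ~33
```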

We implemented the light field generation system using a digital micromirror device (DMD, DLP9500, Texas Instruments) as the binary backlight, which offers full high-definition (FHD) resolution and a 16 kHz operation speed over a $20.7\;{\rm mm} \times 11.7\;{\rm mm}$ area. Because the DMD micromirrors are rotated by 45°, we use only a $763 \times 763$ region as the modulation area. The backlight image projected from the DMD is relayed to a transparent LCD with a magnification of $\times 2$ using camera lenses. The LCD is a Sharp LS029B3SX, and its effective resolution is $465 \times 465$ (0.22 megapixels).
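
The $763 \times 763$ figure is consistent with inscribing an upright square modulation area in the 45°-rotated FHD mirror array; the $1080/\sqrt{2}$ reading below is our assumption about the geometry, not stated in the text:

```python
import math

# DLP9500 micromirror array is 1920 x 1080 (FHD); with the mirrors rotated
# by 45 deg, the largest upright square is limited by the short side / sqrt(2).
dmd_rows = 1080
print(math.floor(dmd_rows / math.sqrt(2)))    # 763 -> 763 x 763 area

# Effective LCD pixel count quoted in the text:
print(f"{465 * 465 / 1e6:.2f} megapixels")    # 0.22
```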

For the focus-tunable lens, we selected the EL-10-30-TC from Optotune, which has a 10 mm aperture. A MiLA from RPC Photonics was used, with an $F$-number of 4 and a pitch of 100 µm. The focal length of ${{\rm{RL}}_2}$ is set to 40 mm to match the $F\#$ between the MFPs and the MiLA. For synchronization of the FTL and DMD, a data acquisition (DAQ) board from National Instruments is used. In the experiments, we generated 60 planes for a volume of $13.4\;{\rm{mm}} \times 13.4\;{\rm{mm}} \times 16\;{\rm{mm}}$ in real-time operation. The effective number of MiLA lenslets is $154 \times 154$. Because of the limitation of the experimental space, the magnification $M$ is set to 16; however, the system scale can be expanded without restriction. The MaLA has a pitch of 1.6 mm at $F/5$. With this configuration, the system reconstructs a volumetric scene of $21.4\;{\rm{cm}} \times 21.4\;{\rm{cm}} \times 32\;{\rm{cm}}$. More system details are discussed in Supplement 1.
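
As a consistency check, the reconstructed volume follows from these parameters and the depth-scaling relation derived above; a minimal sketch:

```python
# Prototype parameters for the reconstructed volume.
M = 16.0                                   # projection magnification
f_num_mila, f_num_mala = 4.0, 5.0          # F/4 pickup, F/5 reconstruction
lateral_mm, depth_mm = 13.4, 16.0          # generated volumetric scene

lateral_cm = lateral_mm * M / 10                               # 21.4 cm
depth_cm = depth_mm * M * f_num_mala / f_num_mila / 10         # 32 cm
print(f"{lateral_cm:.1f} cm x {lateral_cm:.1f} cm x {depth_cm:.0f} cm")

# Effective lenslet count per side: 15.4 mm pickup width / 0.1 mm MiLA pitch
print(round(15.4 / 0.1))   # 154
```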

Figure 3(a) shows the cropped ${{\rm{EI}}_2}$ for pickup distances ${z_1}$ varying from ${-}8$ to 8 mm. As the $F\#$ is matched, the EI is confined within the boundary of each lenslet. The number $N$ of lenslets required to represent a pixel is $|{z_1}/{f_1}|$. Even though the scene created by the tomographic display contains information matched one-to-one with the RGB-D image, the optical pickup process generates a light field mapped in a one-to-$N$ ratio, as in conventional InIm. As such, the proposed design can effectively perform InIm projection with $N$ times higher resolution.
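
For example, assuming the lenslet focal length follows ${f_1} = F{\#_1} \cdot D$ (our derivation from the F/4, 100 µm MiLA; not stated explicitly in the text):

```python
# Number of lenslets N = |z1 / f1| used to represent one picked-up point.
pitch_mm = 0.1                      # MiLA pitch (prototype value)
f_number = 4.0                      # MiLA F-number (prototype value)
f1_mm = f_number * pitch_mm         # lenslet focal length, 0.4 mm (assumed)

for z1_mm in (2.0, 8.0, -8.0):
    print(f"z1 = {z1_mm:+.0f} mm -> N = {abs(z1_mm / f1_mm):.0f}")  # 5, 20, 20
```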

Fig. 3. Experimental results of EI generation. (a) Captured EI at the screen plane. The white wheel image is sampled differently according to the pickup distance ${z_1}$ (Visualization 1). (b) Experimental MTF results when ${z_1}$ is ${\pm}8\;{\rm{mm}}$. Binary gratings are captured using a CCD camera with no lens mounted.

We measured the modulation transfer function (MTF) to analyze the resolution of the ${{\rm{EI}}_2}$, as shown in Fig. 3(b). Binary gratings were displayed for the two cases of ${z_1} = {\pm}8\;{\rm{mm}}$. The measurements fit the simulation result from the angular spectrum method quite well, except for a small mismatch due to lenslet aberration. At 17% MTF, the prototype can generate an EI with a resolution of up to $5347 \times 5347$ (28.6 megapixels). Even though we use a relatively low-resolution LCD (0.22 megapixels) and DMD (0.58 megapixels), 36 times higher resolution is obtained. Consequently, our method realizes projection-type InIm with an ultrahigh resolution that could not previously be achieved with a display panel.
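
A note on the arithmetic: one reading of the 36x figure, which we assume rather than take from the text, compares the EI pixel count with the combined LCD and DMD pixel budgets:

```python
ei_side = 5347                        # EI resolution per side at 17% MTF
ei_mp = ei_side**2 / 1e6
print(f"EI: {ei_mp:.1f} megapixels")  # 28.6

lcd_mp, dmd_mp = 0.22, 0.58           # panel pixel budgets from the text
print(f"gain: {ei_mp / (lcd_mp + dmd_mp):.0f}x")   # ~36 (our assumed basis)
```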

We constructed the MFPs to verify the support of full parallax and adequate focus cues. The results were captured at a distance of 1.5 m using a 50 mm focal length camera lens at $F/1.4$. The duty cycle of the DMD projection for each depth is set to 0.1 [11]. Detailed image sequences are described in Supplement 1. As shown in Fig. 4(a), the proposed method supports continuous parallax not only in the horizontal direction but also in the axial direction within the viewing zone. Figure 4(b) shows the experimental results for volumetric scenes. By changing the focal length of the camera lens, we confirmed that the depth information of the volumetric scene is well reconstructed (see Visualization 2). Since the EI is supersampled and relayed directly, it yields an antialiased image in the focus plane.

Fig. 4. Experimental 3D results. (a) Parallax according to horizontal and axial distance. Five circles were constructed from a given RGB-D image. To emphasize the parallax, horizontally sliced images are shown on the right. The results verify that the system supports full parallax. (b) Volumetric scenes of Pieta and Market [18] captured with front and back focus (source image "Pieta" courtesy of www.cgtrader.com). The magnified images demonstrate the validity of the 3D reconstruction (Visualization 2).

We use a Siemens star target to evaluate the contrast and imaging performance qualitatively, as shown in Fig. 5(a). Since each spoke is located at a different depth, out-of-focus spokes are gradually blurred with increasing distance from the focused spoke, marked with a red arrow. From these results, it is clear that the depth information from the MFPs is well transmitted via the EI. For a quantitative evaluation of the resolution, the experimental MTF curve for the reconstructed image is shown in Fig. 5(b). Binary gratings were used in the same manner as in Fig. 3(b). Owing to noise at the screen, we averaged the MTF near the center. The measured MTF is lower than the wave simulation result, which we attribute to aberrations in the projection lens and the MaLA. The results demonstrate that the proposed method can regenerate the light field on a large scale from a high-resolution EI.

Fig. 5. Experimental reconstruction results of the MFPs. (a) Qualitative results for a Siemens star target in the green channel. Each spoke's angle represents the reconstruction distance, and the radius from the center corresponds to the spatial frequency. The line trace along the arrowed line shows the contrast change. (b) Experimental MTF results for the reconstruction planes. Because of aliasing-induced inaccuracy near the MaLA, only the distances ${z_2} = {\pm}16\;{\rm{cm}}$ were tested.

In summary, the large amount of information demanded by glasses-free 3D displays has been challenging to handle. In this Letter, we have proposed a new optical configuration that effectively brings spatial-angular information to a big screen. The method optically connects a multifocal display and an InIm display using a projection system. This offers a perspective beyond the fundamental trade-off relation rather than merely combining existing studies. With this novel design, our experimental results have demonstrated that optical pixel mapping can realize an EI of close to 28.6 megapixels at 17% MTF and mitigate the information loss associated with repetitive images in InIm. Consequently, we have improved the resolution by 36 times and verified that a full-parallax volumetric scene can be implemented on a large scale through projection. Our optical design could be widely integrated with existing light field displays, and high visual quality can be achieved based on previous projection-type InIm techniques. We hope that our perspective of optically manipulating spatial-angular information will inspire further developments in large-scale 3D displays.

Funding

Institute for Information and Communications Technology Planning and Evaluation (IITP) Grant funded by the Korea Government (MSIT) (2017-0-00787).

Acknowledgment

This work is supported by the Institute for Information and Communications Technology Planning and Evaluation (IITP) Grant funded by the Korea Government (MSIT) (development of a vision-assistant head-mounted display (HMD) and contents for the legally blind and people with low vision).

Disclosures

The authors declare no conflicts of interest.

Data Availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

REFERENCES

1. G. Wetzstein, D. Lanman, W. Heidrich, and R. Raskar, ACM Trans. Graph. 30, 95 (2011).

2. K. Osmanis, G. Valters, R. Zabels, U. Gertners, I. Osmanis, L. Kalnins, U. Kandere, and A. Ozols, Proc. SPIE 10555, 1055510 (2018).

3. F.-C. Huang, K. Chen, and G. Wetzstein, ACM Trans. Graph. 34, 60 (2015).

4. S. Lee, C. Jang, S. Moon, J. Cho, and B. Lee, ACM Trans. Graph. 35, 60 (2016).

5. M. Hirsch, G. Wetzstein, and R. Raskar, ACM Trans. Graph. 33, 58 (2014).

6. N. Efrat, P. Didyk, M. Foshey, W. Matusik, and A. Levin, ACM Trans. Graph. 35, 59 (2016).

7. M. A. Alam, G. Baasantseren, M.-U. Erdenebat, N. Kim, and J.-H. Park, J. Soc. Inf. Disp. 20, 221 (2012).

8. H. Watanabe, N. Okaichi, H. Sasaki, and M. Kawakita, Opt. Express 28, 24731 (2020).

9. S. Lee, Y. Jo, D. Yoo, J. Cho, D. Lee, and B. Lee, Nat. Commun. 10, 2497 (2019).

10. Y. Jo, S. Lee, D. Yoo, S. Choi, D. Kim, and B. Lee, ACM Trans. Graph. 38, 215 (2019).

11. S. Choi, S. Lee, Y. Jo, D. Yoo, D. Kim, and B. Lee, Opt. Express 27, 24362 (2019).

12. D. Yoo, S. Lee, Y. Jo, J. Cho, S. Choi, and B. Lee, "Volumetric head-mounted display with locally adaptive focal blocks," IEEE Trans. Vis. Comput. Graph., doi: 10.1109/TVCG.2020.3011468 (to be published).

13. M. Martínez-Corral and B. Javidi, Adv. Opt. Photon. 10, 512 (2018).

14. S.-G. Park, B.-S. Song, and S.-W. Min, J. Opt. Soc. Korea 14, 121 (2010).

15. M. S. Brennesholtz and E. H. Stupp, Projection Displays, 2nd ed. (Wiley, 2008).

16. B. Lee, J.-H. Park, and S.-W. Min, in Digital Holography and Three-Dimensional Display (Springer, 2006), p. 333.

17. D. J. Butler, J. Wulff, G. B. Stanley, and M. J. Black, in European Conference on Computer Vision (Springer, 2012), p. 611.

Supplementary Material

Supplement 1: Supplemental document for detailed description and additional results.
Visualization 1: Captured elemental image at the screen plane.
Visualization 2: Experimental results of volumetric scenes.
