In recent years, head-mounted display technologies have advanced greatly. To overcome the accommodation-convergence conflict, light field displays reconstruct three-dimensional (3D) images with focus cues but sacrifice resolution. In this paper, a hybrid head-mounted display system based on a liquid crystal microlens array is proposed. Using a time-multiplexed method, the display signals are divided into light field and two-dimensional (2D) modes to show comfortable 3D images whose resolution is compensated by the 2D image. According to the experimental results, the prototype supports a resolution of 12.28 ppd in the diagonal direction, which reaches 82% of that of a traditional virtual reality (VR) head-mounted display (HMD).
© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
Over the past decade, three-dimensional (3D) display technologies [1–3] have greatly advanced to improve the sense of reality in flat panel displays (FPDs). To further facilitate human life, 3D display technologies have been combined with head-mounted displays (HMDs) [5,6] for augmented reality (AR), mixed reality (MR), or virtual reality (VR) applications. However, most current HMD products generate 3D images through binocular parallax [10,11], which causes a mismatch between the accommodation distance and the convergence distance of the human eyes. This accommodation-convergence conflict (AC conflict) [12–14], as Figs. 1(a) and 1(b) show, makes observers dizzy and further produces visual fatigue. Consequently, to prevent a reduction of usage time and improve the observers' immersive feeling, the AC conflict issue must be addressed in HMDs.
To overcome the AC conflict issue, light field (LF) display technology [15–25] has been proposed and is considered a promising solution, as it displays 3D contents in a way that is more natural for the human visual system, as shown in Fig. 1(c). In a typical LF display system, a microlens array (MLA) is placed in front of a display panel with an appropriate gap, and each microlens covers an elemental image composed of a number of pixels. LF images can simultaneously provide stereo parallax and focus cues, making the accommodation of the users' eyes coincide with the depth of an object. In this manner, the AC conflict issue can be effectively solved. Moreover, compared with holographic displays, LF displays do not require coherent light sources because they work based on refraction, which makes them more suitable for practical applications.
However, in an LF display system, since the panel carries not only spatial but also angular information, the resolution of the reconstructed 3D images is severely sacrificed. Generally, the resolution may be affected by ray aberration, diffraction, defocusing, and the sampling rate [26–28]. To enhance the resolution of LF displays, diverse methods have been proposed, such as mechanically moving the MLA, using an MLA with an adjustable focal length [30,31], or reducing the pixel size. Unfortunately, current technologies cannot easily realize fast-response mechanical components, and the pixel size and lens pitch cannot be manufactured without limitations. Consequently, in this paper, a new hybrid HMD is proposed that combines LF display technology and conventional 2D images through time multiplexing to solve the AC conflict and compensate for the resolution simultaneously.
2. Principle and design of hybrid VR HMD
2.1 Optical system principle
Conceptually, the proposed hybrid HMD works by switching between an LF mode and a 2D mode, in which an LF image and a full-resolution 2D image are shown, respectively. As long as the switching is fast enough for time multiplexing [33,34], observers perceive a resolution-enhanced image with depth information via visual persistence. To realize such a system, a microdisplay panel, a liquid crystal (LC) MLA with a static LC alignment, a twisted nematic (TN) cell, a linear polarizer, and a main lens were adopted. The birefringent LC MLA [35,36] only deflects light polarized parallel to its alignment direction, while the TN cell acts as a fast polarization rotator. In addition, the frame-rate synchronization between the components should also be considered. Figure 2 illustrates the working principle and the driving method of each component.
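The alternation described above can be sketched as a simple frame schedule. The code below is an illustrative sketch only (the function name and timing bookkeeping are ours; the prototype uses a dedicated circuit board, and the 72 Hz rate is the panel's frame rate from Table 1): in the LF sub-frame the TN cell is driven (polarization kept, lens-deflected light passes the polarizer), and in the 2D sub-frame it is undriven (polarization rotated by 90°, undeflected light passes).

```python
def frame_schedule(n_frames, panel_hz=72.0):
    """Sketch of the time-multiplexed driving sequence: alternate LF and
    2D sub-frames; the TN cell voltage is ON in the LF sub-frame and OFF
    in the 2D sub-frame (hypothetical bookkeeping, not the real driver)."""
    period = 1.0 / panel_hz
    schedule = []
    for i in range(n_frames):
        mode = "LF" if i % 2 == 0 else "2D"
        schedule.append({
            "t": i * period,          # sub-frame start time in seconds
            "mode": mode,             # which image is shown on the panel
            "tn_on": mode == "LF",    # TN cell driven only in LF mode
        })
    return schedule

sched = frame_schedule(4)
```

Any real implementation must also respect the TN cell's rise/fall times, which is why the text notes that a 72 Hz panel can produce visible flicker.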
In the LF mode, as shown in Fig. 3, the elemental images, which are generated by inversely tracing chief rays from a 3D scene to the display panel, are shown on the panel. After the unpolarized light from the display passes through the LC MLA, the light deflected by the lenses reconstructs an LF image. Meanwhile, a voltage is applied to the TN cell to maintain the polarization state of the light passing through it. In this manner, the unnecessary light, which passes through the LC MLA without being deflected, is blocked by the linear polarizer. Finally, through the magnification of the main lens, an observer sees a low-resolution LF image with depth information in the LF mode.
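The inverse chief-ray tracing can be illustrated in one dimension: each panel pixel is associated with the chief ray through the center of the lenslet covering it, and sampling the 3D scene along that ray yields the elemental-image pixel value. The sketch below uses illustrative parameters (the lens pitch, gap, and pixel pitch are placeholders, not the Table 2 values) and only computes the ray slope, the geometric core of the algorithm.

```python
def chief_ray_direction(pixel_index, lens_pitch, gap, pixel_pitch):
    """Tangent of the chief-ray angle for a panel pixel (1-D sketch).

    The ray runs from the pixel through the center of the nearest
    lenslet; its slope after the lenslet is (lens_center - x) / gap,
    where x is the pixel position and gap is the panel-lenslet distance
    (all lengths in the same unit, e.g. mm).
    """
    x = pixel_index * pixel_pitch          # pixel position on the panel
    lens_index = round(x / lens_pitch)     # lenslet covering this pixel
    lens_center = lens_index * lens_pitch
    return (lens_center - x) / gap
```

In practice the renderer traces this ray into the scene (or through the main lens to the virtual LF plane) and writes the sampled color back to the pixel, repeating over every pixel under every lenslet.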
In the 2D mode, as shown in Fig. 4, full-resolution 2D images are directly displayed on the panel. Similarly, the unpolarized light from the panel passes through the LC MLA. However, no voltage is applied to the TN cell at this point, so the cell rotates the polarization direction of the incoming light by 90 degrees; thus, the light not deflected by the LC MLA is further refracted by the main lens to generate a full-resolution 2D image.
2.2 Optical components
An organic light-emitting diode (OLED) panel with a high pixel density and fast response, provided by AU Optronics (AUO), was adopted as the display panel of our system; its specifications are shown in Table 1. Since the panel size and pixel size differ in the vertical and horizontal directions, the field of view (FOV) and the resolution of our system will be discussed separately for the two directions. In our structure, since the frame rate of the OLED panel (72 Hz) is not sufficiently fast, the perceived hybrid image may exhibit flickering, which can be easily solved with an OLED panel with a higher frame rate.
The gradient-index LC MLA focuses light polarized parallel to the LC molecules and has no bending power for the orthogonal polarization; hence, it considerably affects the image quality of both modes in our proposed system. Owing to its low viscosity and common nematic phase, the E7 LC with positive dielectric anisotropy (ε// > ε⊥) was selected as the material of the LC MLA to achieve high fabrication consistency. Figure 5 and Table 2 show the structure and specifications of our fabricated LC MLA. To achieve a smooth gradient of the electric field distribution, i.e., better lens quality, a Nb2O5 high-resistance (Hi-R) [40–42] layer with a thickness of 20 nm was coated on top of the aluminum electrode to guide the electric field lines into the central part of the LC MLA. In addition, with this layer, the voltage applied to the LC MLA can be reduced, allowing a simpler driving method. Finally, a polyimide (PI) layer was spin-coated on both the Hi-R layer and the bottom electrode to simplify the alignment. The lens quality was examined through the interference fringes shown in Fig. 6. In the interference pattern, the intervals between adjacent fringes are almost the same, indicating that the index is distributed with a smooth gradient and hence good lens quality. In addition, the center of each fringe pattern coincides with the center of each lens, revealing that the image quality degradation caused by off-center lenses can be largely avoided.
A TN cell (model X-FPM(L) from Unice E-O Services Inc.) was adopted as the polarization switch, whose specifications are shown in Table 3. In particular, the response speed of the TN cell is fast enough for our requirement when the applied voltage is 24 V. Moreover, the polarization contrast in the visible spectrum reaches almost 700 in the polarization-altering state and 4000 in the non-altering state, as Fig. 7 shows. Finally, a lens from Google Cardboard ver. 2.0 was chosen as the main lens, and its specifications are shown in Table 4.
2.3 Hybrid VR HMD design
In this paper, the proposed binocular system can be regarded as two separate monocular systems. The design of the optical layout can be divided into three parts, and the structure of the monocular optical system is illustrated in Fig. 8.
First, the distance between the eye box and the LF virtual imaging plane was set to 1 m (1 diopter) for comfortable eye accommodation, and the eye relief between the eye box and the main lens was set to a common value of 18 mm. In this manner, the position of the reconstructed LF image plane, located 0.95 mm behind the OLED panel, can be determined using the Gaussian lens formula.
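The Gaussian-lens bookkeeping in this step can be sketched numerically. The snippet below is an illustration, not the paper's calculation: the 45 mm main-lens focal length is an assumed placeholder (the actual value is in Table 4), while the 982 mm virtual-image distance follows from the text (1 m minus the 18 mm eye relief).

```python
def object_distance_for_virtual_image(f, image_dist):
    """Solve the thin-lens magnifier relation 1/f = 1/do - 1/(-di),
    i.e. 1/do = 1/f + 1/di, for the object distance do that places a
    virtual image at distance image_dist on the object side of the
    lens (all distances in mm, measured from the lens)."""
    return 1.0 / (1.0 / f + 1.0 / image_dist)

# Virtual LF plane 982 mm from the main lens (1 m minus 18 mm eye relief);
# the 45 mm focal length is an assumption for illustration.
do = object_distance_for_virtual_image(45.0, 982.0)
```

For any focal length, the solved object distance comes out slightly inside the focal length, which is the usual magnifier configuration of a VR eyepiece: the panel (or the reconstructed LF plane) sits just inside the focal plane of the main lens.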
Second, to approach the typical depth of focus of the human eye, the interval between the 2D and LF virtual imaging planes was set to 0.6 diopter. Accordingly, the locations of the 2D virtual imaging plane and the display panel were determined to be 81.6 mm and 56.32 mm from the eye position, respectively.
Finally, depending on whether the panel-lens gap equals the focal length of the LC MLA, LF displays can be categorized into depth-priority integral imaging (DPII) and resolution-priority integral imaging (RPII) types. In our design, to achieve high resolution within a limited depth of field, the RPII type was adopted by creating an appropriate central depth plane (CDP) of the LC MLA and reconstructing LF images around the CDP. In addition, the CDP of the LC MLA should be located at the position of the reconstructed LF imaging plane, and the object distance needs to satisfy the paraxial imaging equation. However, owing to the restrictions of the LC MLA fabrication process, the designed focal length is constrained by three adjustable factors: the lens pitch, the cell gap, and the LC material. Generally, the lens pitch determines the number of pixels underneath a single microlens and consequently governs the tradeoff between angular and spatial information: with more covered pixels, the viewing zone is enlarged, but the image resolution is decreased. Therefore, the criteria for LF image formation constrain the lens pitch and focal length of the LC MLA. Consequently, the system parameters can only be generated when the LC MLA reconstructs a virtual LF image.
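The RPII relation between the panel-lenslet gap and the CDP follows from the paraxial imaging equation: with the gap smaller than the lenslet focal length, each lenslet forms a virtual image of the pixel plane, and that image plane is the CDP. The sketch below uses illustrative numbers, not the prototype's lenslet parameters.

```python
def cdp_distance(f_mla, gap):
    """Distance of the virtual CDP from the lenslet for an RPII system
    with gap < focal length, from 1/f = 1/gap - 1/v solved for v.
    Returns the magnitude of v on the panel side (same unit as inputs)."""
    assert gap < f_mla, "a virtual CDP requires gap < focal length"
    return (f_mla * gap) / (f_mla - gap)
```

Note the qualitative behavior the design exploits: as the gap approaches the focal length, the CDP recedes rapidly, so small gap adjustments tune the reconstructed LF plane over a large depth range.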
For simplicity, the whole design process can be summarized into an algorithm based on the thin-lens imaging formula, and the resulting ideal resolution performance is plotted in Fig. 9. Because the pixel size of the adopted panel differs in the horizontal and vertical directions, the resolution should be discussed in two dimensions. Moreover, the FOV of the proposed system is dominated by the specification of the main lens, 80°. In addition, the resolution of the 2D virtual image is independent of the LC MLA; therefore, the resolution in pixels per degree (ppd) of the hybrid HMD can be given by Eq. (3). Table 5 summarizes all the system parameters.
In the proposed system, the light efficiency needs to be considered since additional absorptive optical components are used. In the LC MLA layer, the aperture ratio of the top electrode is 90%, and the transmittances of the bottom electrode material (ITO) and the LC material (E7) are 95% and 90%, respectively. Therefore, the transmittance of the LC MLA is 77%, obtained by multiplying these three factors. Moreover, based on the specifications in Table 3, the transmittance of the TN cell with the polarizer is 43.5%. Compared with a traditional VR HMD, the light efficiency of our system is thus 33.5%. Such a sacrifice in light efficiency is necessary for the time multiplexing and can be easily overcome by using a display panel with higher brightness.
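The transmittance budget above is a straight product of the stated factors; reproduced as arithmetic (all values taken from the text, none assumed):

```python
# Transmittance factors quoted in the text.
aperture_ratio = 0.90   # top-electrode aperture ratio of the LC MLA
t_ito = 0.95            # bottom ITO electrode transmittance
t_e7 = 0.90             # E7 LC material transmittance
t_tn_polarizer = 0.435  # TN cell + polarizer (Table 3)

# LC MLA transmittance: product of its three factors (~77%).
t_lc_mla = aperture_ratio * t_ito * t_e7

# System light efficiency relative to a bare panel (~33.5%).
t_system = t_lc_mla * t_tn_polarizer
```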
3. Experiments and results
3.1 Experimental setup
After determining all the parameters and fabricating the optical components, a prototype of our hybrid VR HMD system was set up to verify the proposed design. Figure 10 illustrates the configuration of the hybrid VR HMD, which includes a movable and rotatable stage with six degrees of freedom. In the experiment, first, the OLED panel was placed on the bottom of the optical stage, which was fixed on an optical table. Second, the fabricated LC MLA, connected to a function generator, was placed on the OLED panel with the determined gap of 0.98 mm to cover the entire display area. Third, the TN cell and polarizer were stacked on the LC MLA. To synchronize the display images with the driving signals in both the LF and 2D modes, a flexible printed circuit (FPC) board provided by AUO was connected to the OLED panel and the TN cell separately. Finally, the main lens was fixed on a translation stage to adjust its position relative to the eye pupils of corresponding observers. Before the experiment, a beam-expanded He-Ne laser was used to determine the focal point of the main lens by finding the smallest light spot of the laser beam passing through the main lens.
To measure the FOV and resolution and to evaluate the image quality, an industrial camera on a high-precision mobile stage was used to imitate the human eye, because the entrance pupil of the camera (5.7 mm) is similar to that of the human eye pupil. Next, to measure the depth range, a Canon 5D Mark II camera with a 100-mm lens was adopted because it has a very shallow depth of focus (DOF), which sensitively reflects depth variation.
Moreover, the FOV, which significantly affects the sense of immersion, was verified using a testing pattern containing symmetrical lines. Based on the experimental results, the monocular FOV of the prototype was 76° in both the horizontal and vertical directions. With a 50° overlap, the binocular FOV can be calculated as 102° in the horizontal direction. Compared with the theoretical limit of the main lens, 80°, the FOV of our design is acceptable and sufficient to produce an immersive visual experience in the hybrid HMD.
3.2 Resolution assessment
To measure the visual resolution in ppd, the USAF 1951 test chart, resized to corresponding sizes at different depth positions according to the sizing criterion, was adopted to provide line pairs with different spatial frequencies, whose values can be calculated with Eq. (5).
In the experiment, the original USAF 1951 images were resized separately, considering the different pixel sizes in the two dimensions and the magnifications at different depths, to keep the same FOV. The experimental results of the resolution assessment, captured by the industrial camera and shown at normalized size, are presented in Fig. 11. According to the conversion equation and look-up table, the resolutions of the LF, 2D and hybrid virtual images are calculated in Table 6. Based on the results, the resolution of the hybrid virtual image was obviously enhanced in our system, reaching 12.28 ppd in the diagonal direction, which verifies the ideal resolution performance evaluated in Fig. 9.
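As a consistency check on the reported figures, the hybrid image's diagonal resolution can be compared against the 2D-only (traditional VR) and LF-only cases; the ratio below is the "82%" quoted in the abstract. All three values are taken directly from the text.

```python
# Measured diagonal resolutions (pixels per degree) reported in Table 6.
ppd_lf = 6.07       # LF virtual image
ppd_2d = 15.01      # full-resolution 2D virtual image (traditional VR HMD)
ppd_hybrid = 12.28  # time-multiplexed hybrid virtual image

# The hybrid image recovers most of the 2D resolution (~82%) while the
# LF-only image retains only ~40% of it.
ratio_hybrid = ppd_hybrid / ppd_2d
ratio_lf = ppd_lf / ppd_2d
```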
To further evaluate the image quality, a picture containing a 3D cube and a colorful diamond was used, as shown in Fig. 12, where the resolution of the captured hybrid virtual image is indeed enhanced. A video showing the hybrid virtual image is provided in Visualization 1. In the video, the depth information of the hybrid image can be perceived. In addition, slight flickering can be observed in the video, which can be easily eliminated with an OLED panel with a higher frame rate.
3.3 Depth evaluation
In the depth experiment, the monocular depth information of the perceived images confirmed that the proposed system can supply natural focus cues to solve the AC conflict via LF technology. To generate the experimental patterns, first, the magnification effect and the correct resizing of the 2D and LF virtual images for the corresponding depths need to be considered. In addition, the pattern for the LF virtual image should be further computed as an elemental image with a continuous depth map by the LF ray-tracing algorithm. For the hybrid virtual image, the input patterns of both modes are processed with the same method; however, the resizing ratio should be modified to match the hybrid depth distance for superposition. Based on the design, the depths of the 2D and LF virtual images are located at 62.5 cm and 100 cm, respectively, and the depth of the hybrid virtual image lies between these two planes.
To confirm the depth characteristics of the proposed system, the experiment was separated into two parts: the reconstructed planes of the virtual images, and the continuous depth range with focus cues. First, to verify the depth positions, the camera lens was set to focus at 62.5 cm, at 100 cm, and at a position between them, measured from the eye box. Figure 13 presents the captured 2D, hybrid, and LF virtual images. According to the results, once the camera focused at 100 cm, the LF virtual image was well reconstructed, but the 2D and hybrid virtual images still contained blur at the image edges. This phenomenon indicates that the depth position at 100 cm is the best imaging plane of the LF virtual image only. In contrast, the depth position at 62.5 cm can be verified as the 2D virtual image plane, since the 2D image there contained sharper edges than the others, whereas the LF and hybrid images were not the clearest under the same comparison. Furthermore, among the measured results, the hybrid virtual image at around 80 cm presents the best resolution compared with the others, a position almost at the middle of the LF and 2D virtual imaging planes.
Second, the continuous depth range with focus cues of the proposed system was verified experimentally. Owing to the depth-fused effect of time multiplexing in our approach, the depth difference in the hybrid virtual image is smaller than that in the LF virtual image. By using a narrow-DOF camera, the depth information of the hybrid virtual image could be captured. Based on the specifications of the camera, the DOF is 0.95 cm at a distance of 80 cm when the focal length of the lens is 100 mm and the f-number is f/2.8. A diamond pattern with depth information was designed, where the depth planes of the purple and red diamonds were set to 147 cm and 75 cm in the LF image, respectively, as shown in Fig. 14. When the camera lens focused on the purple diamond, the purple diamond was clearly captured but the red diamond was blurred, and vice versa. Consequently, the hybrid virtual images were verified to carry natural depth information, which eliminates the visual fatigue caused by the AC conflict.
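The quoted camera DOF can be reproduced with the standard thin-lens depth-of-field approximation. The circle of confusion below is an assumption (the text does not state it; ~0.03 mm is a common full-frame value, and ~0.027 mm lands near the quoted 0.95 cm), so this is a plausibility sketch rather than the authors' exact calculation.

```python
def depth_of_field_mm(f_mm, f_number, subject_mm, coc_mm):
    """Approximate total depth of field, 2*N*c*s^2 / f^2, valid when the
    subject distance s is much larger than the focal length f and the
    DOF is small compared with s. All lengths in mm."""
    return 2.0 * f_number * coc_mm * subject_mm ** 2 / f_mm ** 2

# f = 100 mm, f/2.8, subject at 80 cm; CoC of 0.027 mm is an assumption.
dof = depth_of_field_mm(100.0, 2.8, 800.0, 0.027)  # roughly 1 cm
```

The result, on the order of 1 cm, confirms that the 100-mm f/2.8 lens is narrow enough to discriminate the ~70 cm difference between the two diamond depth planes.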
After the above experiments, the FOV, resolution and depth information of the hybrid image in the proposed system were verified and measured. A comparison between a traditional VR HMD, a light field VR HMD, and the proposed hybrid VR HMD is shown in Table 7. The disadvantage of our structure is the hardware requirement: a high-response-speed panel and a TN cell are needed; otherwise, a flickering issue appears when the frame rate is not fast enough. Moreover, a circuit board synchronizing all the components with the signal contents is also necessary. However, in the proposed system, not only can the AC conflict be solved by light field technology, but the resolution and quality of the reconstructed virtual image can also be compensated by the 2D image. In the design of this paper, the monocular depth range of the hybrid image in the prototype is from 66 cm to 121 cm, which is blended from the depth positions of the 2D image, 62.5 cm, and the light field image, from 70 cm to 180 cm. With binocular parallax designed into the image contents, the depth range of the proposed system could be further extended. On the other hand, the resolution of the hybrid virtual image in the diagonal direction is 12.28 ppd, where the light field image, 6.07 ppd, is compensated by the 2D image, 15.01 ppd.
In the proposed system, the generation of the hybrid image is similar to that of a two-focal-plane depth-fused display (DFD) [49–52], which reconstructs 3D images by merging front and rear images. In a DFD, the perceived depth of the fused images can be considered a weighted sum of the depths of the two focal planes, with depth-weighted fusing functions of luminance. However, since the image generation principle of the proposed system is different (the integral imaging system considers the eye box size, whereas the DFD system does not), the depth of the hybrid virtual images cannot be calculated in the same manner. To our knowledge, this is the first time a system combining 2D and LF images has been proposed; thus, little literature provides an accurate equation to determine the depth of the hybrid image. Therefore, although the depth of the hybrid image was shown to lie between the 2D and LF planes through the photographs above, the accurate accommodation response to the hybrid virtual images requires more physiological experiments in future studies.
Over the past few years, the technologies for VR HMDs have been greatly advanced by several companies to give users high-quality VR visual perception in diverse fields, such as entertainment, education, medicine and military training. Based on the principle of displaying 3D images, VR HMD technologies can be divided into stereoscopic and LF systems. However, both technologies still have serious issues that need to be solved. For the stereoscopic system, since the accommodation and convergence distances of the human eyes are mismatched, the observer easily suffers visual fatigue from the AC conflict. In contrast, for LF technology, which eliminates the AC conflict, the resolution of the 3D image decreases because pixels of the original panel are sacrificed to record the angular information. Therefore, in this paper, a new type of HMD, called a hybrid HMD, was developed to solve both problems simultaneously using a time-multiplexed method.
In the proposed system, the hybrid HMD was designed to consist of LF and 2D modes, which compensate the resolution of the LF image with a full-resolution 2D image. The components of the hybrid HMD include an OLED panel, an electric circuit, an LC MLA, a TN cell, a polarizer, and a main lens. In this structure, the LC MLA functions as a lens only along the rubbing direction, and the TN cell and polarizer together act as a polarization switch that selects the light needed to reconstruct the virtual image in each mode. By switching the two images faster than human visual persistence, the observer sees a hybrid image with high resolution and depth information through the time-multiplexed method.
In this paper, a prototype of the hybrid HMD was built to verify the proposed structure. Based on the experimental results, the hybrid HMD supported a wide FOV of almost 76° in both directions. In addition, the effective resolution of the hybrid virtual image was compensated to 12.28 ppd in the diagonal direction, which is much higher than the resolution of the LF virtual image, 6.07 ppd, and reaches 82% of that of the traditional VR HMD, 15.01 ppd. Furthermore, the depth planes of the LF, 2D and hybrid images and the depth range of the natural focus cues were also verified in the experiments. Therefore, this paper presents a new structure, the hybrid HMD, which displays a high-resolution image with focus cues to solve the AC conflict via a time-multiplexed method.
Ministry of Science and Technology (MOST) in Taiwan (contract No. MOST 104-2628-E-009-012-MY3 and 107-2221-E-009-115-MY3).
The OLED panel and circuit board were generously provided by the small-size OLED product design department of AU Optronics Company (AUO), Taiwan.
1. N. Holliman, “3D display systems,” in Department of Computer Science (University of Durham, 2005).
3. L. Hill and A. Jacobs, “3-D liquid crystal displays and their applications,” Proc. IEEE 94(3), 575–590 (2006). [CrossRef]
4. S. W. Depp and W. E. Howard, “Flat-panel displays,” Sci. Am. 266(3), 90–97 (1993). [CrossRef]
5. I. E. Sutherland, “A head-mounted three dimensional display,” in Proceedings of the Fall Joint Computer Conference (1968).
6. K. Keller, A. State, and H. Fuchs, “Head mounted displays for medical use,” J. Disp. Technol. 4(4), 468–472 (2008). [CrossRef]
8. J. P. Rolland and H. Hua, “Head-mounted display systems,” in Encyclopedia of Optical Engineering (Dekker, 2005).
9. H. McLellan, “Virtual realities,” in Handbook of research for educational communications and technology, (1996).
10. C. Wheatstone, “Contributions to the physiology of vision.–Part the first. On some remarkable, and hitherto unobserved, phenomena of binocular vision,” in Philosophical Transactions of the Royal Society of London (1838).
12. R. Burke and L. Brickson, “Focus cue enabled head-mounted display via microlens array,” TOG 32, 220 (2013).
14. F.-C. Huang, D. P. Luebke, and G. Wetzstein, “The light field stereoscope,” ACM Transactions on Graphics 34, 60:1–60:12 (2015).
17. E. H. Adelson and J. R. Bergen, “The plenoptic function and the elements of early vision,” in Computational Models of Visual Processing (MIT Press, 1991), pp. 3–20.
18. S.-W. Min, J. Kim, and B. Lee, “New characteristic equation of three-dimensional integral imaging system and its applications,” Jpn. J. Appl. Phys. 44(2), 71–74 (2005). [CrossRef]
20. N. Balram and I. Tošić, “Light-field imaging and display systems,” Inf. Disp. 32(4), 6–13 (2016). [CrossRef]
22. M. Liu, C. Lu, H. Li, and X. Liu, “Bifocal computational near eye light field displays and Structure parameters determination scheme for bifocal computational display,” Opt. Express 26(4), 4060–4074 (2018). [CrossRef] [PubMed]
23. C. Yao, D. Cheng, T. Yang, and Y. Wang, “Design of an optical see-through light-field near-eye display using a discrete lenslet array,” Opt. Express 26(14), 18292–18301 (2018). [CrossRef] [PubMed]
25. Z. Xin, D. Wei, X. Xie, M. Chen, X. Zhang, J. Liao, H. Wang, and C. Xie, “Dual-polarized light-field imaging micro-system via a liquid-crystal microlens array for direct three-dimensional observation,” Opt. Express 26(4), 4035–4049 (2018). [CrossRef] [PubMed]
28. N. Viganò, H. Der Sarkissian, C. Herzog, O. de la Rochefoucauld, R. van Liere, and K. J. Batenburg, “Tomographic approach for the quantitative scene reconstruction from light field images,” Opt. Express 26(18), 22574–22602 (2018). [CrossRef] [PubMed]
29. V. Boominathan, K. Mitra, and A. Veeraraghavan, “Improving resolution and depth-of-field of light field cameras using a hybrid imaging system,” in Proceedings IEEE Conference on Computational Photography (IEEE, 2014), pp. 1–10. [CrossRef]
31. T.-H. Jen, X. Shen, G. Yao, Y.-P. Huang, H.-P. D. Shieh, and B. Javidi, “Dynamic integral imaging display with electrically moving array lenslet technique using liquid crystal lens,” Opt. Express 23(14), 18415–18421 (2015). [CrossRef] [PubMed]
32. H. Hoshino, F. Okano, H. Isono, and I. Yuyama, “Analysis of resolution limitation of integral photography,” J. Opt. Soc. Am. A 15(8), 2059–2065 (1998). [CrossRef]
33. G. Johansson, “Visual perception of biological motion and a model for its analysis,” Percept. Psychophys. 14(2), 201–211 (1973). [CrossRef]
35. C.-W. Chen, M. Cho, Y.-P. Huang, and B. Javidi, “Three-dimensional imaging with axially distributed sensing using electronically controlled liquid crystal lens,” Opt. Lett. 37(19), 4125–4127 (2012). [CrossRef] [PubMed]
36. M. Martinez-Corral, P.-Y. Hsieh, A. Doblas, E. Sanchez-Ortiga, G. Saavedra, and Y.-P. Huang, “Fast axial-scanning widefield microscopy with constant magnification and resolution,” J. Disp. Technol. 11(11), 913–920 (2015). [CrossRef]
37. M. Schadt and W. Helfrich, “Voltage‐dependent optical activity of a twisted nematic liquid crystal,” Appl. Phys. Lett. 18(4), 127–128 (1971). [CrossRef]
39. K. W. Arthur and F. P. Brooks, Jr., “Effects of field of view on performance with head-mounted displays,” (University of North Carolina at Chapel Hill, 2000).
40. P.-Y. Hsieh, P.-Y. Chou, H.-A. Lin, C.-Y. Chu, C.-T. Huang, C.-H. Chen, Z. Qin, M. M. Corral, B. Javidi, and Y.-P. Huang, “Long working range light field microscope with fast scanning multifocal liquid crystal microlens array,” Opt. Express 26(8), 10981–10996 (2018). [CrossRef] [PubMed]
41. Y.-C. Chang, T.-H. Jen, C.-H. Ting, and Y.-P. Huang, “High-resistance liquid-crystal lens array for rotatable 2D/3D autostereoscopic display,” Opt. Express 22(3), 2714–2724 (2014). [CrossRef] [PubMed]
43. Y. P. Huang, L. Y. Liao, and C. W. Chen, “2‐D/3‐D switchable autostereoscopic display with multi‐electrically driven liquid‐crystal (MeD‐LC) lenses,” J. Soc. Inf. Disp. 18(9), 642–646 (2010). [CrossRef]
44. D. MacIsaac, “Google Cardboard: A virtual reality headset for $10?” Phys. Teach. 53(2), 125 (2015). [CrossRef]
45. M. Cho, M. Daneshpanah, I. Moon, and B. Javidi, “Three-dimensional optical sensing and visualization using integral imaging,” Proc. IEEE 99(4), 556–575 (2011). [CrossRef]
47. Z. Qin, P.-J. Wong, W.-C. Chao, F.-C. Lin, Y.-P. Huang, and H.-P. D. Shieh, “Contrast-sensitivity-based evaluation method of a surveillance camera’s visual resolution: improvement from the conventional slanted-edge spatial frequency response method,” Appl. Opt. 56(5), 1464–1471 (2017). [CrossRef]
50. X. Hu and H. Hua, “Design and assessment of a depth-fused multi-focal-plane display prototype,” J. Disp. Technol. 10(4), 308–316 (2014). [CrossRef]