
Binocular dynamic holographic floating image display

Open Access

Abstract

This paper proposes a binocular holographic floating display. The device consists of two phase-modulation spatial light modulators (SLMs) and a dihedral corner reflector array (DCRA) element. The conjugate images of the SLMs formed by the DCRA serve as the system's exit pupils. These exit pupils are larger than the pupils of the human eye and are positioned at the observer's eyes. Therefore, the dimension of the SLM does not limit the viewing angle, although the pixel pitch of the SLM still limits the maximum field of view. With a laser light source, the image resolution reaches 3 arc minutes when the distance between the images and the DCRA is less than 20 cm. A full-color display function is also demonstrated with the proposed device.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Display technology has been developing for several decades. To provide information more completely and vividly, floating display and 3D display technologies have attracted growing attention in recent years. The advantage of a floating volumetric display is that the provided information can be located in midair, so human-machine interaction interfaces can be established without any physical contact. Such displays present information more vividly and, at the same time, have high potential to reduce the spread of viruses. Furthermore, the applicability of the interaction function can be enhanced further if the device can place images at arbitrary lateral and longitudinal positions. Currently, floating image systems are well realized with retroreflector arrays, such as aerial imaging by retro-reflection (AIRR) [1], or with a dihedral corner reflector array (DCRA) [2,3]. The former is suitable for large-area display, whereas the DCRA has the potential to achieve high-resolution images. For such devices, a volumetric display function can be achieved by utilizing a scanned display or a scanning mirror [4,5]. However, since high-speed synchronization between the display and the scanning device is necessary, a trade-off must be made among the volume of the display area, the resolution, and the frame rate.

In contrast, computer-generated holography (CGH) technology can form 3D images with different depth information simultaneously and dispenses with a high-speed scanning mechanism. The technique can calculate the wavefront of an arbitrary object even if the object does not physically exist [6–9]. The object can then be reconstructed if the wavefront is correctly formed experimentally. Among the algorithms, the point-based method and the convolution method are suitable for directly deciding the lateral and longitudinal absolute positions of images [6]; in these calculations, the pixel sizes of the image plane and the hologram plane are equal. The iterative Fourier-transform algorithm (IFTA) generates images at the Fourier plane, and the longitudinal position is then decided by an additional lens or a spherical wave phase; the pixel size of the image plane is proportional to the image distance [8,9]. To achieve a dynamic display, a coherent light source and a spatial light modulator (SLM) are necessary [10,11]. Since current displays can hardly achieve complex-amplitude modulation, a CGH display generally modulates only the amplitude term or the phase term by employing a digital micro-mirror device (DMD) or a liquid-crystal device. Either an amplitude-modulation SLM or a phase-modulation SLM can be used for a CGH display. Most devices utilize a phase-modulation SLM to avoid the conjugate terms and keep high diffraction efficiency. With either type of SLM, the elimination of the amplitude or phase information degrades the image quality. To avoid this degradation, an iteration algorithm [12] or a complex-amplitude encoding method [13] based on current SLM devices is necessary. Nowadays, the CGH technique has already been well developed for near-eye displays [14]. Related algorithms have also been studied for diffraction waveguide elements to make the device more compact [15,16]. However, a CGH display with long eye relief is difficult to achieve because of the limitation of the display area. Commercial SLMs are designed with tiny pixel sizes to enhance the field of view (FOV), which makes the active area very small, so CGH displays always have a small exit pupil and short eye relief. Therefore, an additional optical element that generates a new exit pupil with longer eye relief is necessary to achieve the floating display function. The general method is to employ a geometric lens or a holographic optical lens to form the exit pupil [17,18]; the magnification, aberration, or dispersion of this lens must then be considered. In contrast, the DCRA element can achieve the same function without these problems.

In this paper, a DCRA element is utilized to achieve a floating CGH display. The DCRA element generates conjugate images of the SLMs, and these become the system's exit pupils for a specific observer. When the phase distribution is constructed, the system is approximately equivalent to locating a calculated hologram in front of the observer, so a binocular holographic floating image can be obtained. The optical equivalent window for a single eye in the proposed system is exactly the image of the SLM, and therefore the dimension of the SLM no longer limits the viewing angle. Consequently, the available FOV in the proposed system is limited only by the pixel pitch of the SLM. In the binocular holographic display systems proposed by Su et al. [19–21], by comparison, the exit pupils still depend on the specifications of the ocular lens. The pixel pitch of the SLM affects the FOV in their systems, and the dimension and location of the generated holographic image and the eye relief are additional factors affecting the FOV; however, a theoretical discussion of the FOV is not reported in their publications. In this paper, we discuss the maximum achievable image dimension in the proposed system with a fixed eye relief. In addition, since the DCRA element works purely by reflection, color dispersion does not exist in our system, and the full-color display function can therefore be achieved easily. In the proposed system, a convergent spherical wave is employed as the light source, and a spatial filter located at the focal plane removes the DC noise [22].

2. Principle

The proposed system is arranged as shown in Fig. 1. A converging lens and a collimated light source provide the convergent probe beam. Beam splitters (BS1 and BS2) guide the probe beam onto the phase-modulation SLMs (JD955B, Jasper Display). Since the probe beams are convergent, the zero order and the surface reflections of the SLMs can be blocked by high-pass filters (HPF) located at the focal plane. When the probe beams are modulated by the SLMs, the first-order diffracted light forms intermediate images near the DCRA element. An absorber with a square aperture blocks the higher-order noise, so the device provides holographic images without zero-order and higher-order noise. In this configuration, the DCRA element produces conjugate images of the SLMs and of the intermediate images. Since the SLMs are the aperture stops of the optical system, the conjugate SLMs are defined as the exit pupils (Exit pupil 1 and Exit pupil 2). The magnification of the DCRA element is 1; therefore, the eye relief is equal to the distance from the SLMs to the DCRA.

Fig. 1. Schematic diagram of the experimental configuration: (a) top view; (b) side view.

The dimension of the exit pupils is equal to the active area of the SLMs. In this configuration, the device places two small holograms in front of the observer's eyes, so it can provide different information to each eye of a specific observer. The information from SLM1 and SLM2 corresponds to the left eye and the right eye, respectively. In the prototype device, the distance from the eyes to the DCRA element is about 680 mm, equal to the distance between the SLMs and the DCRA. The dimension of the DCRA element is 10 cm by 10 cm, and the pitch of the unit reflectors is 300 μm by 300 μm. The pixel pitch of the employed SLMs is 6.4 μm. Considering the limited diffraction angle of the SLM with a 532 nm laser light source, the maximum dimension of the floating image is obtained at the center position of the DCRA and is about 56 mm by 56 mm. The dimension of each single exit pupil is 12.288 mm by 6.912 mm. Since the human eye is located at the exit pupil (the image plane of the SLM) of the system, the observer's viewing angle is not limited by the SLM window (the dimension of the SLM). However, the pixel pitch of the SLM still limits the maximum available field of view of the floating image.
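As a back-of-envelope check (our reading; the paper does not state the formula explicitly), if the maximum image width at the DCRA plane is set by the full diffraction angle λ/p of the SLM pixel grid over the 680 mm SLM-to-DCRA distance, then
$$W_{\max} \approx z\frac{\lambda}{p} = 680\;\textrm{mm} \times \frac{532\;\textrm{nm}}{6.4\;\mathrm{\mu}\textrm{m}} \approx 56.5\;\textrm{mm},$$
which is consistent with the quoted 56 mm by 56 mm.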

The phase distributions on SLM1 and SLM2 can be calculated independently in different coordinate systems. In the coordinates of both SLM1 and SLM2, the SLM lies on the x-y plane at z = 0, and the center of the SLM is the origin. The coordinate systems of the SLMs are shown schematically in Fig. 2.

Fig. 2. The coordinate systems utilized in the CGH algorithm for (a) SLM1; (b) SLM2.

The phase-type CGHs can then be calculated in these coordinate systems. The phase distribution of SLM1 carrying the 3D information, ${H_1}({x,y} )$, can be obtained with an arbitrary CGH algorithm. Since the probe beam is convergent, its spherical wavefront should be compensated. The phase distribution of the probe beam at SLM1 can be described as

$$\exp [{i{\phi _1}({x,y} )}] = \exp \left[ {i\pi \left( {\frac{{{x^2} + {y^2}}}{{\lambda z}}} \right)} \right]$$
where x and y are the pixel positions and z is the distance from SLM1 to the convergence point in free space. The phase modulation of SLM1 should then be
$$H_1^{\prime}({x,y} )= {H_1}({x,y} )\exp [{ - i{\phi _1}({x,y} )}]$$

Similarly, the phase modulation of SLM2 can be described as

$$H_2^{\prime}({X,Y} )= {H_2}({X,Y} )\exp [{ - i{\phi _2}({X,Y} )}]$$

The specific observer then obtains the information of ${H_1}({x,y} )$ for the left eye at exit pupil 1 and ${H_2}({X,Y} )$ for the right eye at exit pupil 2.
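The compensation of Eqs. (1)-(3) amounts to multiplying the computed phase hologram by the conjugate of the convergent probe phase. The Python sketch below illustrates this step under stated assumptions; the function name, the hologram array shape, and the default sampling values are ours, not the authors' code, and z must be the SLM-to-focus distance of the actual setup.

```python
import numpy as np

def compensate_convergent_probe(H1, z, pitch=6.4e-6, wavelength=532e-9):
    """Apply Eq. (2): H1' = H1 * exp(-i*phi1), with phi1 the convergent probe phase of Eq. (1)."""
    ny, nx = H1.shape
    x = (np.arange(nx) - nx / 2) * pitch          # pixel coordinates centered on the SLM
    y = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(x, y)
    phi1 = np.pi * (X**2 + Y**2) / (wavelength * z)   # Eq. (1)
    return H1 * np.exp(-1j * phi1)                    # Eq. (2)
```

The same operation with the (X, Y) coordinates and phi2 gives Eq. (3) for SLM2.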

3. Experimental results

Figure 3 shows the simulations and the experimental results with a 532 nm coherent light source when ${H_1}$ and ${H_2}$ contained 2D information of the characters 'left eye' and 'right eye', respectively, at 600 mm. The holograms were calculated with the convolution method, and the simulations in Figs. 3(a) and 3(b) are based on the same method. The experimental results shown in Figs. 3(c) and 3(d) were captured by a commercial camera (D90, Nikon) focused at 600 mm (80 mm in front of the DCRA). The image dimension is 29.8 mm by 23.7 mm in Figs. 3(a) and 3(c) and 38.3 mm by 23.7 mm in Figs. 3(b) and 3(d). Theoretically, the finest line width of the convolution propagation method is 6.4 μm. However, the DCRA element diffuses the image to about double the size of a single reflector. Therefore, the experimental results are similar to the simulations, but the image quality is slightly degraded. The resulting image at the left eye is brighter than the other because only one light source was utilized in the experiment. The energy can be balanced by adding a filter, employing an additional light source, or reducing the diffraction efficiency of SLM1. Since the images are binary characters, images with sharp edges can be obtained clearly. However, the speckle noise caused by the coherent light source leads to inhomogeneous brightness. The speckle noise can be suppressed by the temporal averaging method [12] or by changing the light source to a narrow-band incoherent source.
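For readers who wish to reproduce this kind of hologram, the sketch below shows one common form of the convolution (angular-spectrum) propagation with equal pixel pitch on the image and hologram planes, as described in Section 1. It is a minimal illustration, not the authors' code: the random initial phase, the 1920 by 1080 grid, and the rectangular target mask are our assumptions.

```python
import numpy as np

def angular_spectrum(field, d, wavelength=532e-9, pitch=6.4e-6):
    """Propagate a complex field over distance d with the angular-spectrum (convolution) method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kernel = np.exp(1j * 2 * np.pi * d / wavelength * np.sqrt(np.maximum(arg, 0.0)))
    kernel[arg < 0] = 0.0                               # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * kernel)

# Phase-only CGH of a 2D target placed 600 mm in front of the SLM plane:
# give the target a random phase, propagate it back to the SLM, and keep only the phase.
target = np.zeros((1080, 1920))
target[500:580, 800:1120] = 1.0                          # illustrative binary character mask
obj = target * np.exp(1j * 2 * np.pi * np.random.rand(*target.shape))
H1 = np.exp(1j * np.angle(angular_spectrum(obj, -0.6)))  # hologram for an image at 600 mm
```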

Fig. 3. The simulation images of (a) ${H_1}({x,y} )$; (b) ${H_2}({X,Y} )$ and the experimental results of (c) the left view; (d) the right view.

In order to avoid failures of binocular fusion, the positions of the images must be arranged strategically; the system must therefore be able to modulate the image location accurately. The same target images shown in the previous figure were used to verify this capability. Figure 4 shows the resulting images when the characters "EYE" in the two images were arranged at the same position. The resulting images of the left view and the right view are shown in Figs. 4(a) and 4(b), respectively. The white label and the images are located on the same plane, and both are obtained clearly at the same time.

Fig. 4. The experimental images in a bright room: (a) left view; (b) right view; (c) the overlaid result.

The image shown in Fig. 4(c) is the stacked result of the left-view and right-view images. When the labels are well overlapped, the characters "EYE" almost coincide with each other, which proves that the images are located at the same position in 3D space.

Figure 5 shows the images when ${H_1}({x,y} )$ contained information of images at different depths. The CGH contained a 2D character "NCUE" located at 600 mm and "3D" at 760 mm. The simulation images are shown in Figs. 5(a) and 5(b), and the experimental results are shown in Figs. 5(c) and 5(d). The resulting image shown in Fig. 5(c) was captured when the camera focused at 600 mm: the character "NCUE" is obtained clearly while the character "3D" is defocused. Conversely, only the character "3D" is obtained clearly when the camera focuses at 760 mm. These results prove that the 3D display function is available on the proposed device.
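A hologram carrying two depth planes, as used for Fig. 5, can be sketched by back-propagating each plane to the SLM and summing the fields before the phase-only step. This reuses angular_spectrum() from the earlier sketch; the distances follow the text, while the target masks are illustrative placeholders rather than the authors' data.

```python
def two_plane_hologram(target_near, target_far, d_near=0.60, d_far=0.76):
    """Phase-only CGH whose reconstruction shows target_near at 600 mm and target_far at 760 mm."""
    randomize = lambda t: t * np.exp(1j * 2 * np.pi * np.random.rand(*t.shape))
    field = (angular_spectrum(randomize(target_near), -d_near)
             + angular_spectrum(randomize(target_far), -d_far))
    return np.exp(1j * np.angle(field))   # keep only the phase of the superposed field
```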

Fig. 5. The resulting images with 3D information compared in (a), (b) simulation and (c), (d) experiment; (a), (c) focused at 600 mm; (b), (d) focused at 760 mm.

The image quality is good when the images are located near the DCRA element; however, the image quality of the proposed device is actually limited by the DCRA element. According to previous literature, the diffraction caused by the periodic structure degrades the image quality as the distance between the image and the DCRA increases. In order to determine the appropriate display depth of the proposed device, the image quality at different imaging distances was inspected. In this part, the holograms were calculated with the IFTA method, and an additional spherical phase distribution determined the distance. Therefore, the viewing angle of this pattern is fixed and independent of the image distance. The target image is shown in Fig. 6(a). The resolution of the upper pattern is 3 arc minutes (10 line pairs per degree) and the resolution of the lower pattern is 1.46 arc minutes (20.5 line pairs per degree). The red arrow indicates a length of 29 times the line width of the upper pattern. The total calculated dimension in the IFTA algorithm is 2048 by 2048 pixels. Figures 6(b)-6(g) show the resulting images when the distance between the target image and ${H_1}({x,y} )$ is 480 mm (200 mm in front of the DCRA), 580 mm (100 mm in front of the DCRA), 630 mm (50 mm in front of the DCRA), 730 mm (50 mm behind the DCRA), 780 mm (100 mm behind the DCRA), and 880 mm (200 mm behind the DCRA), respectively. The wider lines become almost indistinguishable when the image is located at 480 mm, and the thinner lines become indistinguishable at 580 mm and 880 mm. Even though the distances between the resulting images and the DCRA are the same, the image quality of Figs. 6(g) and 6(f) is better than that of Figs. 6(b) and 6(c). This is because the spatial frequency (line pairs per mm) is inversely proportional to the image distance when the viewing angle is fixed. Ultimately, the resolution can reach 3 arc minutes per line when the images are located between 200 mm in front of and 200 mm behind the DCRA, which is sufficiently fine for human eyes.
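As a rough illustration of the procedure described above (not the authors' implementation), a Gerchberg-Saxton style IFTA can generate the phase hologram of the 2048 by 2048 pattern, and an added spherical (lens) phase then sets the finite image distance. The iteration count and the focal length used for the lens phase are our assumptions.

```python
import numpy as np

def ifta_hologram(target, iterations=50, wavelength=532e-9, pitch=6.4e-6, f=0.68):
    """IFTA phase hologram; the appended lens phase moves the Fourier-plane image to a finite distance."""
    ny, nx = target.shape
    field = np.exp(1j * 2 * np.pi * np.random.rand(ny, nx))      # random initial phase
    for _ in range(iterations):
        img = np.fft.fftshift(np.fft.fft2(field))
        img = target * np.exp(1j * np.angle(img))                # impose the target amplitude
        field = np.exp(1j * np.angle(np.fft.ifft2(np.fft.ifftshift(img))))  # keep phase only
    x = (np.arange(nx) - nx / 2) * pitch
    y = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(x, y)
    lens = np.exp(-1j * np.pi * (X**2 + Y**2) / (wavelength * f))  # additional spherical phase
    return field * lens
```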

Fig. 6. (a) The original pattern utilized to determine the display performance when the image is located at (b) 480 mm; (c) 580 mm; (d) 630 mm; (e) 730 mm; (f) 780 mm; (g) 880 mm.

Figure 7 shows the display area of the proposed device. The blue regions in Fig. 7 show the schematic diffraction ranges of ${H_1}({x,y} )$ and ${H_2}({X,Y} )$. In order to let the images from the two SLMs overlap, the final images should be arranged within the red area shown in Fig. 7, which is therefore the final display area. The maximum protrusion distance is about 30 cm, and the far end of the display area is about 287 cm behind the DCRA. The image reaches its maximum size (56 mm by 56 mm) when it is located at the position of the DCRA. However, the distance between the images and the DCRA should be less than 20 cm if a resolution of 3 arc minutes is desired.

Fig. 7. The schematic diagram of the spatial relationship between exit pupils, DCRA, and images.

Furthermore, the proposed device can achieve a full-color display function without dispersion. Here, we utilized a white LED and band-pass filters with central wavelengths of 470 nm, 520 nm, and 630 nm and a full width at half maximum (FWHM) of 10 nm as the light source. The configuration is arranged as shown in Fig. 8. A pinhole attached to the LED enhances the spatial coherence, and a polarizer sets the polarization state to fit the liquid-crystal alignment of the SLM. In our case, the JD955B SLM cannot achieve 2π modulation for every wavelength (about 1.67π at 632.8 nm and 2π at 532 nm). This slightly reduces the efficiency and image quality; however, the problem can be ignored here, and the full-color image can still be reconstructed. The monochrome holograms were calculated for the different wavelengths and displayed sequentially, and the filters were switched in step with the corresponding hologram. The monochrome images were thus obtained sequentially, and the specific observer finally perceives a full-color image. In the prototype device, which has no scanning mechanism, the filters and SLMs could not be synchronized quickly; therefore, the monochrome images were captured separately and accumulated digitally. The experimental result is shown in Fig. 9(b) and the simulation image in Fig. 9(a). The incoherent light source suppresses the speckle noise well, and the full-color display function is achieved without any additional dispersion correction.
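The time-sequential scheme can be summarized as computing one monochrome hologram per filter wavelength and cycling them on the SLM in step with the filters. The sketch below assumes the angular_spectrum() helper from the earlier sketch; load_channel() and the loop structure are hypothetical placeholders, not the authors' code.

```python
# One phase-only hologram per band-pass filter wavelength (470, 520, 630 nm).
wavelengths = {"blue": 470e-9, "green": 520e-9, "red": 630e-9}
holograms = {}
for name, wl in wavelengths.items():
    channel = load_channel(name)                 # hypothetical loader: 2D target of this color channel
    obj = channel * np.exp(1j * 2 * np.pi * np.random.rand(*channel.shape))
    holograms[name] = np.exp(1j * np.angle(angular_spectrum(obj, -0.6, wavelength=wl)))
# In operation the three holograms are displayed sequentially while the matching filter is inserted;
# in the prototype the monochrome captures were instead accumulated digitally.
```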

Fig. 8. The colorful collimated light source employed to reconstruct the full-color image.

Fig. 9. The full-color image located at 600 mm was obtained in (a) simulation; (b) experiment.

4. Discussions

Since the prototype device in this paper places the SLMs 68 cm away from the DCRA, the eye relief is also 68 cm. This value was chosen roughly according to the operating distance of a desktop device. The maximum image size becomes larger if a design with longer eye relief is required, in which case the total dimension of the device also becomes larger, and vice versa. The proposed device can provide 3D floating images for a specific observer, and in the prototype the position of the observer must be arranged rigorously. In order to enhance the applicability, an eye-tracking module and a rotating mirror could be utilized to shift the eye box. On the other hand, the speckle noise can be suppressed by the temporal averaging method, at the cost of a reduced frame rate of the holographic video. Another method is to probe the SLMs with an incoherent light source. As shown in Fig. 9, the resulting image proves that an incoherent light source can be used to suppress speckle; however, a wide spectrum also blurs the images. Experimental observation shows that incoherent light with a 10 nm FWHM linewidth is a relatively good compromise that suppresses speckle while keeping the image clear without serious blurring. Under this condition, a superluminescent light-emitting diode (sLED) is a good light source for the proposed device [23].

5. Conclusions

In summary, a holographic floating image display based on a DCRA was proposed. The eye relief of the proposed device is 68 cm. In the demonstrated system, the maximum image size is about 56 mm by 56 mm for a 532 nm laser light source and an SLM with a 6.4 μm pixel pitch. The dimension of each single exit pupil is 12.288 mm by 6.912 mm. When the distance between the floating images and the DCRA is less than 20 cm, the resolution of the images can reach 3 arc minutes. Since the DCRA element works on the reflection principle, the device achieves a full-color display function without any dispersion correction. This study utilized a white LED and three color filters as the full-color light source, and the full-color image was obtained by switching the monochrome holograms and filters sequentially.

Funding

Ministry of Science and Technology, Taiwan (108-2221-E-018-018-MY3).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. H. Yamamoto, Y. Tomiyama, and S. Suyama, “Floating aerial LED signage based on aerial imaging by retro-reflection (AIRR),” Opt. Express 22(22), 26919–26924 (2014). [CrossRef]  

2. S. Maekawa, K. Nitta, and O. Matoba, “Transmissive optical imaging device with micromirror array,” Proc. SPIE 6392, 63920E (2006). [CrossRef]  

3. S. Maekawa, K. Nitta, and O. Matoba, “Advances in passive imaging elements with micromirror array,” Proc. SPIE 6803, 68030B (2008). [CrossRef]  

4. Y. Maeda, D. Miyazaki, T. Mukai, and S. Maekawa, “Volumetric display using rotating prism sheets arranged in a symmetrical configuration,” Opt. Express 21(22), 27074–27086 (2013). [CrossRef]  

5. D. Miyazaki, N. Hirano, Y. Maeda, S. Yamamoto, T. Mukai, and S. Maekawa, “Floating volumetric image formation using a dihedral corner reflector array device,” Appl. Opt. 52(1), A281–A289 (2013). [CrossRef]  

6. Y. Ogihara and Y. Sakamoto, “Fast calculation method of a CGH for a patch model using a point-based method,” Appl. Opt. 54(1), A76–A83 (2015). [CrossRef]  

7. K. Matsushima, H. Schimmel, and F. Wyrowski, “Fast calculation method for optical diffraction on tilted planes by use of the angular spectrum of plane waves,” J. Opt. Soc. Am. A 20(9), 1755–1762 (2003). [CrossRef]  

8. J. Liu, A. Caley, and M. Taghizadeh, “Symmetrical iterative Fourier-transform algorithm using both phase and amplitude freedoms,” Opt. Commun. 267(2), 347–355 (2006). [CrossRef]  

9. F. Wyrowski and O. Bryngdahl, “Iterative Fourier-transform algorithm applied to computer holography,” J. Opt. Soc. Am. A 5(7), 1058–1065 (1988). [CrossRef]  

10. T. Shimobaba, M. Makowski, T. Kakue, M. Oikawa, N. Okada, Y. Endo, R. Hirayama, and T. Ito, “Lensless zoomable holographic projection using scaled Fresnel diffraction,” Opt. Express 21(21), 25285–25290 (2013). [CrossRef]  

11. I. Ducin, T. Shimobaba, M. Makowski, K. Kakarenko, A. Kowalczyk, J. Suszek, M. Bieda, A. Kolodziejczyk, and M. Sypek, “Holographic projection of images with step-less zoom and noise suppression by pixel separation,” Opt. Commun. 340, 131–135 (2015). [CrossRef]  

12. K. Masuda, Y. Saita, R. Toritani, P. Xia, K. Nitta, and O. Matoba, “Improvement of image quality of 3D display by using optimized binary phase modulation and intensity accumulation,” J. Disp. Technol. 12(5), 472–477 (2016). [CrossRef]  

13. O. Mendoza-Yero, G. Mínguez-Vega, and J. Lancis, “Encoding complex fields by using a phase-only optical element,” Opt. Lett. 39(7), 1740–1743 (2014). [CrossRef]  

14. Q. Gao, J. Liu, X. Duan, T. Zhao, X. Li, and P. Liu, “Compact see-through 3D head-mounted display based on wavefront modulation with holographic grating filter,” Opt. Express 25(7), 8412–8424 (2017). [CrossRef]  

15. W.-K. Lin, O. Matoba, B.-S. Lin, and W.-C. Su, “Astigmatism correction and quality optimization of computer-generated holograms for holographic waveguide displays,” Opt. Express 28(4), 5519–5527 (2020). [CrossRef]  

16. H.-J. Yeom, H.-J. Kim, S.-B. Kim, H. Zhang, B. Li, Y.-M. Ji, S.-H. Kim, and J.-H. Park, “3D holographic head mounted display using holographic optical elements with astigmatism aberration compensation,” Opt. Express 23(25), 32025–32034 (2015). [CrossRef]  

17. H. Amano, Y. Ichihashi, T. Kakue, K. Wakunami, H. Hashimoto, R. Miura, T. Shimobaba, and T. Ito, “Reconstruction of a three-dimensional color-video of a point-cloud object using the projection-type holographic display with a holographic optical element,” Opt. Express 28(4), 5692–5705 (2020). [CrossRef]  

18. Y. Sando, D. Barada, and T. Yatagai, “Aerial holographic 3D display with an enlarged field of view by the time-division method,” Appl. Opt. 60(17), 5044–5048 (2021). [CrossRef]  

19. Y. Su, Z. Cai, Q. Liu, L. Shi, F. Zhou, and J. Wu, “Binocular holographic three-dimensional display using a single spatial light modulator and a grating,” J. Opt. Soc. Am. A 35(8), 1477–1486 (2018). [CrossRef]  

20. Y. Su, Z. Cai, Q. Liu, L. Shi, F. Zhou, S. Huang, P. Guo, and J. Wu, “Projection-type dual-view holographic three-dimensional display and its augmented reality applications,” Opt. Commun. 428, 216–226 (2018). [CrossRef]  

21. Y. Su, X. Tang, Z. Zhou, Z. Cai, Y. Chen, J. Wu, and W. Wan, “Binocular dynamic holographic three-dimensional display for optical see-through augmented reality using two spatial light modulators,” Optik 217, 164918 (2020). [CrossRef]  

22. E. Murakami, Y. Oguro, and Y. Sakamoto, “Study on Compact Head-Mounted Display System Using Electro-Holography for Augmented Reality,” IEICE Trans. Electron. E100.C(11), 965–971 (2017). [CrossRef]  

23. Y. Deng and D. Chu, “Coherence properties of different light sources and their effect on the image sharpness and speckle of holographic displays,” Sci. Rep. 7(1), 1–12 (2017). [CrossRef]  
