Abstract

We propose a see-through display with a wavefront coding (WFC) technique that uses a cubic phase plate to extend the depth of field (DOF) of the projected virtual image. The image projected by the WFC see-through display allows the eyes to see an object within the range of accommodation and a clear projected image simultaneously, without refocusing. The DOF of the clear projected image can be extended from 250 mm to 10 m, effectively optical infinity.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

See-through displays project digital images into the forward field of view, eliminating the inconvenience of the eyes refocusing between the scene and a device. With this advantage, wearable smart glasses such as Google Glass [1] are now being considered for applications in next-generation optics [2]. Smart glasses with touch-gesture operation [3], high-speed cellular networks, and software applications may replace mobile phones in the future. Such wearable see-through displays let the user simultaneously see an image with private information, such as the login of a bank account or a personal social website, as well as the scene ahead, thereby increasing safety.

However, the constrained depth of field (DOF) of the projected image of a see-through display is an important issue that cannot be ignored. A see-through display with a fixed-focus image triggers an accommodation reflex between the projected image and the scene, which decreases safety and efficiency and takes a few milliseconds each time. Earlier research [4] extended the DOF of a projected image by fast focal-sweep projection with a tunable lens, while another study [5] proposed smart eyeglasses with tunable autofocusing eyepieces. However, such technologies for adjusting the optical power of a see-through display in smart glasses may consume extra power, shortening battery operation hours, and may reduce the system speed.

The extension of the DOF is difficult to achieve with a traditional optical system because the amount of defocus is unknown [6,7]. As the defocus becomes more severe, the point spread function grows, causing regions of zero values to appear in the optical transfer function (OTF) and modulation transfer function (MTF). This results in a loss of spatial frequency information in the image. Fortunately, Dowski et al. proposed a wavefront coding (WFC) technique to extend the DOF of an incoherent imaging system [8]. The wavefront can be modulated by a cubic phase mask (CPM), a cubic phase plate (CPP), or various CPM designs to form a uniform bundle of rays [6,9–15]. The WFC technique yields an OTF with no zeros below the cut-off spatial frequency, so the intermediate image generated by a WFC system can be correctly restored by applying a digital filter [8,16], thereby obtaining a clear image. The WFC technique also allows the DOF to be extended to up to 30 times the traditional criterion [8].

In this study, we developed a focus-free WFC see-through display that extends the DOF by the inverted WFC process shown in Fig. 1. The intermediate image is coded by applying a simple digital filter to the original input image in digital image processing. The system projects this intermediate image and forms a nearly uniform wavefront distribution over a specific region; a concave mirror with phase modulation then decodes the intermediate image. With the clear projected image and the extended DOF, the WFC see-through display can be seen clearly while the eyes focus on any object within the range of accommodation.

Fig. 1. Projection optics with the inverted WFC technique process for the extension of the DOF.

2. Design of the WFC see-through display

The optical system, created using the optical design software CODE V, is shown in Fig. 2. The system contains the eye model and the WFC see-through display with the object plane, reflector, concave mirror, and beam splitter. The object plane is simulated as a 16 × 16 mm² microdisplay with a pixel size of 10 μm and a resolution of 1600 × 1600 pixels, so the WFC see-through display projects a virtual image of 306 × 306 mm², subtending 23.6° in the horizontal and vertical directions, at 700 mm in front of the eyes. The F-number of the system is 4.8, and the entrance pupil diameter is 10 mm, which accommodates eye movement and allows different users to see the virtual image.

Fig. 2. Layout of the WFC see-through display and eye model.

Considering the position of a CPP, we instead apply a third-order coefficient of the xy-polynomial surface (cubic coefficient) to the concave mirror, which has a radius of curvature of 161.63 mm. This avoids obscuring and distorting the view of the real world, as a CPP placed in the view path would, and it also eliminates one optical element. The system therefore generates the two-dimensional phase function given by Eq. (1),

$$p({x,y} )= \left\{ \begin{array}{rl} {exp[{ - i2kC({{x^3} + {y^3}} )} ],}\; \; &|x |\le \frac{D}{2},\; |y |\le \frac{D}{2}\\ {0,\; \;} &{otherwise} \end{array} \right.$$
where k is the wavenumber, D is the diameter of the aperture, and C is the cubic coefficient for the phase modulation on the reflector.
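As a quick numerical illustration, Eq. (1) can be sampled on a grid. In the sketch below, the wavelength, aperture diameter, and grid size are illustrative assumptions we chose; only the cubic coefficient C = 0.0008 comes from the design.

```python
import numpy as np

# Sample the pupil phase function of Eq. (1) on a grid. Wavelength,
# aperture diameter, and grid size are illustrative assumptions;
# C = 0.0008 is the cubic coefficient used in the design.
wavelength = 550e-6                  # mm (green light, assumed)
k = 2 * np.pi / wavelength           # wavenumber, rad/mm
D = 10.0                             # aperture diameter, mm (assumed)
C = 0.0008                           # cubic coefficient

n = 256
x = np.linspace(-D / 2, D / 2, n)    # sample the aperture |x|, |y| <= D/2
X, Y = np.meshgrid(x, x)

# Phase-only pupil function: cubic phase inside the aperture
# (the factor of 2 accounts for reflection off the mirror).
p = np.exp(-1j * 2 * k * C * (X**3 + Y**3))

# The modulation changes only the phase: the amplitude stays 1,
# so light throughput is preserved while the PSF is reshaped.
assert np.allclose(np.abs(p), 1.0)
```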

The larger the cubic coefficient, the more the rays deviate; the smaller the cubic coefficient, the more sensitive the system is to defocus [17]. Different cubic coefficients assigned to the concave mirror produce different MTFs; their RMS values, listed in Table 1, increase with the cubic coefficient. The MTF plots for C = 0.0004, 0.0008, and 0.0012 are shown in Fig. 3, where the cut-off frequency corresponds to 21 cycles/mm. The cubic coefficient for the phase modulation is therefore taken as C = 0.0008, balancing defocus sensitivity against MTF suppression. The marginal ray height is 1.67 mm (the equivalent coding strength at the pupil of the eye is C = 0.00085, with a marginal ray height of 2.97 mm).
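The trade-off summarized above can also be checked numerically: the MTF is the magnitude of the pupil autocorrelation, and a sufficiently strong cubic term keeps it away from zero under defocus. The sketch below uses normalized pupil coordinates and peak phase values (`alpha`, `defocus`, in radians) that we chose for demonstration; it is not the paper's exact design.

```python
import numpy as np

def mtf_slice(alpha, defocus, n=256, pad=4):
    # Square pupil with cubic phase (peak alpha radians per axis)
    # and defocus (peak 'defocus' radians per axis).
    x = np.linspace(-1, 1, n)
    X, Y = np.meshgrid(x, x)
    pupil = np.exp(1j * (alpha * (X**3 + Y**3) + defocus * (X**2 + Y**2)))
    P = np.zeros((pad * n, pad * n), dtype=complex)
    P[:n, :n] = pupil                    # zero-pad to avoid wrap-around
    psf = np.abs(np.fft.fft2(P))**2      # incoherent PSF
    otf = np.fft.fft2(psf)               # autocorrelation of the pupil
    mtf = np.abs(otf) / np.abs(otf[0, 0])
    # 1D slice along fx (fy = 0), up to 85% of the cut-off frequency.
    return mtf[0, :int(0.85 * n)]

plain = mtf_slice(alpha=0.0, defocus=15.0)   # conventional system, defocused
coded = mtf_slice(alpha=60.0, defocus=15.0)  # cubic phase added, same defocus

# The defocused conventional MTF dips to near zero below cut-off, while
# the coded MTF stays bounded away from zero (the basis for restoration).
assert plain.min() < 0.01 < coded.min()
```

The nulls in the conventional defocused slice are exactly the lost spatial frequencies that no digital filter can recover, which is why the coded MTF, although lower overall, is preferable.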

Fig. 3. Magnitude of the modulation transfer function of the WFC see-through display.

Table 1. RMS values of the MTFs generated by different cubic coefficients assigned on the concave mirror.

The MTF of the WFC see-through display shown in Fig. 3 has no zero values below the cut-off spatial frequency, which allows the intermediate image to be restored without a loss of spatial frequency information. The intermediate image, with a nearly uniform wavefront distribution over a certain range, is completely projected and decoded by the WFC see-through display; the DOF of the projected image is thereby extended, and a clear projected image can be seen even if the eyes are focused on nearby or distant objects. Although the MTF shown in Fig. 3 is not what would be expected of a traditional optical system, it is this MTF that decodes the intermediate image. The intermediate image for projection is obtained by applying a simple digital filter to the original input image, and the underlying digital image processing in the spatial frequency domain can be written symbolically as

$${\mathfrak{F}}[{i({x,y} )} ]= {\mathfrak{F}}[{o({x,y} )} ]\cdot {[|{H({{f_x},{f_y}} )} |]^{ - 1}} + {\mathfrak{F}}[{n({x,y} )} ]$$
where ${\mathfrak{F}}[]$ denotes the Fourier transform operation, and $i({x,y} )$ and $o({x,y} )$ are the intensities of the intermediate image and the original input image, respectively. Here, $n({x,y} )$ is the noise, which is ignored to simplify the simulation. $H({{f_x},{f_y}} )$ is the OTF, the autocorrelation function related to the phase function of Eq. (1). The OTF, obtained by the stationary-phase approximation method [8], is
$$H({{f_x},{f_y}} )\approx \left\{ {\begin{array}{rl} {\left( {\frac{\pi }{{48kC\sqrt {|{{f_x}{f_y}} |} }}} \right)exp[{ - i4k{D^3}C({f_x^3 + f_y^3} )} ],\; \; {f_x} \ne 0\textrm{,}{f_y} \ne 0}\\ {1,\; \; {f_x} = 0\textrm{,}{f_y} = 0} \end{array}} \right.$$
where ${f_x}$ and ${f_y}$ are the spatial frequencies. Because the digital filter is matched to the system OTF, the intermediate image is decoded as it is projected. Figure 4 shows the situation where the eyes are focused on objects at different distances: the projected image can be seen clearly, with the same quality, without the accommodation reflex.
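Eq. (2) amounts to inverse filtering: the input spectrum is divided by |H| before projection, so the optics re-apply |H| and cancel it. The helper below, with a smooth stand-in MTF that is nonzero everywhere below cut-off (as the coded system's MTF is), is our own illustrative construction rather than the paper's filter.

```python
import numpy as np

def intermediate_image(o, H_mag, eps=1e-3):
    """Inverse-filter the input image o by the MTF H_mag (same shape,
    in FFT layout), per Eq. (2) with the noise term ignored."""
    O = np.fft.fft2(o)
    I = O / np.maximum(H_mag, eps)    # eps guards against division by ~0
    return np.real(np.fft.ifft2(I))

# Illustrative stand-in MTF: smooth, radially decaying, and nonzero
# everywhere, dropping from 1 at DC (not the system's actual MTF).
n = 64
f = np.fft.fftfreq(n)
FX, FY = np.meshgrid(f, f)
H = 1.0 / (1.0 + 40.0 * np.sqrt(FX**2 + FY**2))

o = np.zeros((n, n)); o[24:40, 24:40] = 1.0   # simple test pattern
i_img = intermediate_image(o, H)

# Re-applying the MTF to the intermediate image recovers the original,
# mimicking projection through optics whose OTF magnitude is H.
o_back = np.fft.ifft2(np.fft.fft2(i_img) * H).real
assert np.allclose(o_back, o, atol=1e-6)
```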

3. Simulation results

The simulation illustrates that the eyes can simultaneously see both an object within the range of accommodation and the projected image clearly, without refocusing. The original input image is shown in Fig. 5(a), and the intermediate image obtained after applying the digital filter is shown in Fig. 5(b). A comparison of the see-through display with and without WFC is shown in Fig. 6. The projected images with and without WFC as seen by the eye are shown in Figs. 6(a) and 6(b) while focusing on a sidewalk sign at 700 mm. As Fig. 6(c) shows, when the eyes are focused on a distant sign at 10 m, they can clearly see both the sign and the projected image with WFC; however, the projected image without WFC, shown in Fig. 6(d), is blurred, meaning the eyes must refocus to 700 mm, the position of the projected image, at the cost of the accommodation reflex time. When the eyes are focused on a nearer object at 250 mm, the projected image with WFC, shown in Fig. 6(e), is clearer than that without WFC, shown in Fig. 6(f). In the simulation results, projected images with higher peak signal-to-noise ratio (PSNR) values are clearer and sharper. The PSNR is defined as

$$PSNR = 10{\log _{10}}\left( {\frac{{{M^2}}}{{MSE}}} \right)$$
where $M$ is the maximum intensity of the image, and $MSE$ is the mean squared error. PSNR does not capture the image quality perceived by the human eye, but it provides a quantitative measure of image quality for the simulation.
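Eq. (4) is straightforward to compute; in this sketch M is taken as 255, assuming 8-bit images, and the test arrays are illustrative.

```python
import numpy as np

# PSNR per Eq. (4): M is the maximum intensity (here 255, assuming
# 8-bit images) and MSE the mean squared error between the two images.
def psnr(ref, img, max_val=255.0):
    mse = np.mean((ref.astype(float) - img.astype(float))**2)
    if mse == 0:
        return float('inf')          # identical images
    return 10.0 * np.log10(max_val**2 / mse)

a = np.full((8, 8), 100.0)
b = a + 10.0                         # uniform error of 10 -> MSE = 100
print(round(psnr(a, b), 2))          # 10*log10(255^2/100) = 28.13
```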

Fig. 4. Configuration in which the eye sees both the scene and the projected image through the WFC see-through display.

Fig. 5. Images used for the simulation. (a) Original input image. (b) Intermediate image.

Fig. 6. Comparison of the projected images with and without WFC seen by the eye. Projected images (a) with WFC and (b) without WFC seen by the eye, while focusing on the sidewalk sign at 700 mm. Projected images (c) with WFC and (d) without WFC seen by the eye, while focusing on the distant sign at 10 m. Projected images (e) with WFC and (f) without WFC seen by the eye, while focusing on the nearer object at 250 mm.

To illustrate that the WFC see-through display projects near-identical images, we use the structural similarity index measure (SSIM) to compare the projected image with the original input image while the eyes focus on objects at different distances, as shown in Fig. 7. The projected image with WFC has a nearly identical, high SSIM value at every distance, indicating high similarity. For the see-through display without WFC, the SSIM is highest only when the eye focuses at the 700 mm projection position.
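As a sketch of the metric, a single-window (global) SSIM can be computed with the standard constants; the published SSIM averages this statistic over small local windows, so this coarse global version is for illustration only, not the exact computation behind Fig. 7.

```python
import numpy as np

# Simplified global SSIM: the standard luminance/contrast/structure
# statistic evaluated once over the whole image (published SSIM uses
# local windows). L is the dynamic range, assumed 255 for 8-bit images.
def global_ssim(x, y, L=255.0, k1=0.01, k2=0.03):
    c1, c2 = (k1 * L)**2, (k2 * L)**2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2*mx*my + c1) * (2*cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

img = np.linspace(0, 255, 64).reshape(8, 8)
assert abs(global_ssim(img, img) - 1.0) < 1e-9   # identical -> SSIM = 1
assert global_ssim(img, 255 - img) < 1.0          # dissimilar -> lower
```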

Fig. 7. SSIM values between the original input image and the projected image seen by the eye, while focusing on objects at different distances.

4. Conclusion

In this study, we presented a WFC see-through display that can be applied to smart glasses. The simulation results show that the DOF of the projected image of the WFC see-through display is extended, allowing the projected image to be seen clearly when the eyes focus on objects at any distance. The projected images with WFC are clear and highly similar to the input, and their PSNR values remain high, whereas the PSNR values of the projected images without WFC decrease. In addition, because the see-through display needs no autofocusing technique, the eyes spend no time on the accommodation reflex, which increases efficiency and safety.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon request.

References

1. https://en.wikipedia.org/wiki/Google_Glass

2. L.-H. Lee and P. Hui, “Interaction methods for smart glasses: a survey,” IEEE Access 6, 28712–28732 (2018).

3. S. Rauh, D. Zsebedits, E. Tamplon, S. Bolch, and G. Meixner, “Using Google Glass for mobile maintenance and calibration tasks in the AUDI A8 production line,” in 2015 IEEE 20th Conference on Emerging Technologies & Factory Automation (ETFA) (IEEE, 2015), pp. 1–4.

4. D. Iwai, S. Mihara, and K. Sato, “Extended depth-of-field projector by fast focal sweep projection,” IEEE Trans. Visual. Comput. Graphics 21(4), 462 (2015).

5. N. Hasan, M. Karkhanis, C. Ghosh, F. Khan, T. Ghosh, H. Kim, and C. H. Mastrangelo, “Lightweight smart autofocusing eyeglasses,” Proc. SPIE 10545, 1054507 (2018), https://doi.org/10.1117/12.2300737.

6. E. R. Dowski Jr. and G. E. Johnson, “Wavefront coding: a modern method of achieving high-performance and/or low-cost imaging systems,” Proc. SPIE 3779, 137–145 (1999).

7. H. Haim, A. Bronstein, and E. Marom, “Computational multi-focus imaging combining sparse model with color dependent phase mask,” Opt. Express 23(19), 24547–24556 (2015).

8. E. R. Dowski Jr. and W. T. Cathey, “Extended depth of field through wave-front coding,” Appl. Opt. 34(11), 1859 (1995).

9. W. T. Cathey and E. R. Dowski, “New paradigm for imaging systems,” Appl. Opt. 41(29), 6080–6092 (2002).

10. A. Castro, J. Ojeda-Castañeda, and A. W. Lohmann, “Bow-tie effect: differential operator,” Appl. Opt. 45(30), 7878–7884 (2006).

11. T. Zhao, Z. Ye, W. Zhang, W. Huang, and F. Yu, “Design of objective lenses to extend the depth of field based on wavefront coding,” Proc. SPIE 6834, 683414 (2007), https://doi.org/10.1117/12.756051.

12. H. Zhao and Y. Li, “Performance of an improved logarithmic phase mask with optimized parameters in a wavefront-coding system,” Appl. Opt. 49(2), 229–238 (2010).

13. Y. Takahashi and S. Komatsu, “Optimized free-form phase mask for extension of depth of field in wavefront-coded imaging,” Opt. Lett. 33(13), 1515–1517 (2008).

14. M. Demenikov, G. Muyo, and A. R. Harvey, “Experimental demonstration of continuously variable optical encoding in a hybrid imaging system,” Opt. Lett. 35(12), 2100–2102 (2010).

15. P. Török and F.-J. Kao, Optical Imaging and Microscopy: Techniques and Advanced Systems (Springer, 2007).

16. M. Demenikov, E. Findlay, and A. R. Harvey, “Miniaturization of zoom lenses with a single moving element,” Opt. Express 17(8), 6118–6127 (2009).

17. C.-F. Lee and C.-C. Lee, “Application of a cubic phase plate to a reflecting telescope for extension of depth of field,” Appl. Opt. 59(14), 4410–4415 (2020).
