Abstract

Foveation and (de)focus are two important visual factors in designing near-eye displays. Foveation can reduce computational load by lowering display detail toward the visual periphery, while focal cues can reduce the vergence-accommodation conflict, lessening visual discomfort when using near-eye displays. We performed two psychophysical experiments to investigate the relationship between foveation and focus cues. The first study measured blur discrimination sensitivity as a function of visual eccentricity, where we found discrimination thresholds significantly lower than previously reported. The second study measured the depth discrimination threshold, where we found a clear dependency on visual eccentricity. We discuss the study results and suggest further investigation.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

The density of retinal ganglion cells is highest in the fovea and monotonically decreases toward the periphery, so spatial resolution is high only in the center of the visual field [1]. This effect has been harnessed by gaze-contingent (foveated) computer rendering [2,3] and imaging [4] techniques. These foveation methods are designed for displays with fixed focal distances, such as desktop monitors or head-mounted displays (HMDs) that offer binocular depth cues. In the natural world, however, objects can appear at different optical distances. They create sharp or blurry images on the retina depending on the focal state of the eye, and the visual system is known to infer depth from the amount and chromaticity of blur [5]. The lack of an accommodative cue can cause vergence-accommodation conflict in binocular viewing systems such as VR/AR head-mounted displays. The conflict can be reduced by a variety of optical, display, and computational systems [6,7], such as a single movable focal plane, multiple fixed focal planes, or light fields [8]. These systems may require mechanical movement (e.g., movable focal plane(s) [9]), additional optical components (e.g., multiple focal planes [10]), or extra computation (e.g., light field rendering).

Since defocus blur is a key cue for monocular depth perception [11] and influences the vergence-accommodation conflict in binocular vision, in this paper we ask: under monocular viewing conditions, do optical blur and depth perception show a similar eccentricity effect as spatial resolution across the visual field? To help answer this question, we conducted psychophysical experiments with two display systems. First, we measured the sensitivity of human viewers to changes in optical blur. We found that the discrimination threshold varied significantly across individuals, and that the measured thresholds were significantly lower than previously reported. Second, we used a light field display (which can generate accurate and natural retinal blur without an extra lens) [8] to measure the depth discrimination threshold, which increases as a function of visual eccentricity. Interestingly, the depth discrimination threshold changes more systematically with eccentricity than the blur discrimination threshold does. We discuss the discrepancy between blur and depth discrimination and suggest future research.

2. Blur discrimination and visual eccentricity

Wang, Ciuffreda, and Irish [12] measured the thresholds of blur detection (noticing blur) and discrimination (differentiating between blur sizes) at various visual eccentricities from the fovea to $8\deg$. The thresholds increased monotonically with eccentricity for both detection ($0.53{D} \rightarrow 1.25{D}$) and discrimination ($0.29{D} \rightarrow 0.72{D}$). Ronchi and Molesini [13] also measured the blur detection threshold at farther eccentricities, where it increased from $2$-$5{D}$ at $7\deg$ of eccentricity to $7$-$12{D}$ at $60\deg$. The monotonic increase suggests that we may only weakly perceive blur and depth in the far periphery. We note the wide disagreement in measured thresholds at $7$-$8\deg$ of eccentricity between the two studies, and measure the blur detection and discrimination thresholds from fovea to periphery (at $0$, $5$, $10$, and $15 \deg$).

Setup The setup is photographed in Fig. 1. Blur pedestals (the baselines for discrimination tasks) of $-2, -1, 0, 1,$ and $2{D}$ were tested across eccentricities. The stimulus was presented on an LCD display (Acer XB270HU, $2560 \times 1440$ resolution, $144Hz$ refresh rate) located $80$ cm from the viewer, where the central pixel subtended 1 arcmin at the eye. A focus-tunable lens (Optotune EL-16-40-TC, response time $30ms$), placed before one eye of the subject (the other covered by a mask), controlled the focal distance of the stimulus. The field of view (FoV) provided by the lens subtended $25\deg$ in diameter. A bite bar was used to precisely position the viewer at the desired location.
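The stated viewing geometry can be sanity-checked with the small-angle relation between pixel pitch and viewing distance. The 27-inch panel size assumed below is our inference from the monitor model, not stated in the text.

```python
import math

VIEW_DIST_CM = 80.0  # viewer-to-display distance from the setup


def pixel_arcmin(pitch_mm: float, dist_cm: float = VIEW_DIST_CM) -> float:
    """Angular size of one pixel: theta = 2 * atan(pitch / (2 * d))."""
    theta_rad = 2 * math.atan((pitch_mm / 10.0) / (2 * dist_cm))
    return math.degrees(theta_rad) * 60.0


# A 27-inch 2560x1440 panel has a pixel pitch of about 0.233 mm (assumed).
print(round(pixel_arcmin(0.233), 2))  # -> 1.0, i.e. ~1 arcmin as stated
```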

Fig. 1. Blur perception study setup and a sample stimulus (see Visualization 1).

Calibration Every subject went through a calibration procedure before starting the experiment. We first measured the far point of accommodation by using a tumbling E test [14] and a staircase procedure [15]. Second, we quantified scaling and translation caused by the change in lens focal power using a between-image-plane vernier alignment task [16]. These two calibration steps let us present stimuli accurately in terms of focal power, visual size, and location in the visual field.

Stimuli As shown in the bottom-left inset of Fig. 1, the visual stimulus for blur detection/discrimination was a bright rectangle ($100 {cd}/{m^{2}}$) drawn on a dark background ($20 {cd}/{m^{2}}$). The size of the foveal rectangle was $0.16$ (W) $\times$ $0.8$ (H) $\deg$. The peripheral rectangles scaled linearly with visual eccentricity: $0.04E$ (W) $\times$ $0.2E$ (H) $\deg$, where $E$ is the visual eccentricity in degrees. The focus-tunable lens operated together with the display to introduce defocus blur to the rectangles. The presentation time for each rectangle was $0.3\sec$, short enough to prevent refocusing (accommodation takes $0.3-0.4\sec$ according to [17]). The rectangle appeared twice in a random sequential order with different amounts of blur: one with the pedestal blur serving as a reference (or no blur for detection) and the other with blur greater or less than the pedestal serving as the test signal. We define the difference between the pedestal and test blurs as the differential blur. The magnitude of the differential blur was adapted with the 1-up 2-down staircase method [15] with a minimum step size of $0.1{D}$. The two stimuli were separated by $0.5\sec$ in time. A fixation target (the letter “E” in a circle, as marked in Fig. 1) was presented for $0.5\sec$ before, between, and after the two blur discrimination stimuli to fix the subjects’ accommodation. The task was a 2-alternative forced choice: subjects had to choose the stimulus that appeared blurrier, even if they were unsure.
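The 1-up 2-down rule can be sketched as a short simulation. The observer model below (correct-response probability rising linearly with differential blur, saturating at 0.8 D) is hypothetical; only the adaptive rule and the 0.1 D minimum step come from the procedure above.

```python
import random


def staircase_1up_2down(respond, start=1.0, step=0.2, min_step=0.1, trials=100):
    """1-up 2-down staircase (Levitt [15]): two consecutive correct
    responses lower the level, one incorrect response raises it, so the
    track converges near the 70.7%-correct point. (Real procedures also
    shrink the step size over reversals; omitted here for brevity.)"""
    level, streak, track = start, 0, []
    for _ in range(trials):
        track.append(level)
        if respond(level):            # correct response
            streak += 1
            if streak == 2:           # 2 correct in a row -> make it harder
                level = max(min_step, level - step)
                streak = 0
        else:                         # incorrect -> make it easier
            level += step
            streak = 0
    return track


# Hypothetical observer: the chance of a correct answer grows linearly
# with differential blur (the 0.8 D saturation point is invented).
random.seed(0)
observer = lambda d: random.random() < 0.5 + 0.5 * min(d / 0.8, 1.0)
track = staircase_1up_2down(observer)
print(round(sum(track[-20:]) / 20, 2))  # late trials hover near the threshold
```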

More than 100 trials were executed per each combination of blur pedestal and visual eccentricity. The total duration, including calibration, training, and breaks, was about 6 hours. A capture of the study can be seen from Visualization 1.

Subjects Four subjects, aged 31 to 48, participated. All subjects had normal or corrected-to-normal visual acuity. One was an author. The other three were unaware of the experimental hypothesis. All subjects provided written consent, and the experiment was conducted in accordance with the Declaration of Helsinki.

Results In Fig. 2, the Y-axis represents blur discrimination thresholds. Each symbol indicates the $75\%$ performance level centered on a $95\%$ confidence interval. We define the $75\%$ performance level as the threshold because it lies halfway between chance (there were two choices) and perfect performance. We estimated the threshold by fitting a cumulative Gaussian function to participants’ performance as a function of the differential blur magnitude [18].
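The threshold estimation can be sketched with a cumulative-Gaussian fit. The grid search below is a crude stand-in for the Bayesian estimation toolbox of [18], and the data points are invented for illustration.

```python
import math


def p_correct(x, mu, sigma):
    """2AFC psychometric function: chance level (0.5) plus a scaled
    cumulative Gaussian; it equals exactly 0.75 at x = mu, so mu is the
    75%-performance threshold."""
    phi = 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))
    return 0.5 + 0.5 * phi


def fit_threshold(xs, ps, mus, sigmas):
    """Least-squares grid search for (mu, sigma)."""
    best = min(
        (sum((p_correct(x, m, s) - p) ** 2 for x, p in zip(xs, ps)), m, s)
        for m in mus for s in sigmas
    )
    return best[1], best[2]


# Illustrative proportions correct vs. differential blur (D); these data
# points are made up for the example, not taken from the study.
xs = [0.05, 0.1, 0.2, 0.4, 0.8]
ps = [0.52, 0.60, 0.74, 0.93, 1.00]
mu, sigma = fit_threshold(
    xs, ps,
    mus=[i * 0.01 for i in range(1, 60)],
    sigmas=[i * 0.01 for i in range(1, 60)],
)
print(f"75% threshold ~ {mu:.2f} D")
```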

Fig. 2. Blur study results. We plot the blur detection/discrimination thresholds as a function of eccentricity and pedestal/baseline blur ($-2, -1, 0, 1, 2{D}$) for four subjects. All thresholds are computed as differences in diopters, i.e., $|D_a - D_b|$ for test case $a$ and control (pedestal/baseline) case $b$. For example, a threshold of $0.1{D}$ at a $2.0{D}$ baseline means the subject can perceive the blur caused by a $2.1{D}$ target while the eye focuses at $2.0{D}$. The X-axis represents retinal eccentricity in degrees; the Y-axis represents measured thresholds in diopters. Each vertical bar indicates the $75\%$ performance level centered on a $95\%$ confidence interval. See Data File 1 for underlying values.

The experimental results are plotted in Fig. 2, where the pedestal blur is color coded. Note that a $0{D}$ pedestal means detection. The dashed black line shows geometrically estimated detection thresholds, obtained by considering a cylindrical blur kernel, a $5mm$ pupil, and anatomical ganglion cell densities from [19]. The solid black line shows the prediction curve suggested by [12]. Both lines serve as theoretical comparisons with our data. The results yield three observations. First, the thresholds increased with eccentricity for some subjects but not all; the values for two subjects (U3 and U4) remained nearly constant and below the theoretical predictions at farther eccentricities. Second, correcting the astigmatism of one subject (U1, astigmatism $=-1.25{D}$ at $8\deg$) did not significantly improve, and may even have reduced, sensitivity to blur. Third, varying pedestals sometimes yielded large threshold differences at the same eccentricity, although the hypothesized relationship is a negative correlation [12]. This may be attributable to individual differences such as peripheral refractive state [20].
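Such a geometric estimate can be outlined as follows: under thin-lens geometry, a defocus of $\Delta D$ diopters through a pupil of diameter $p$ meters produces a blur circle of roughly $p\,\Delta D$ radians, and detection is predicted once this exceeds the local receptive-field spacing. The spacing values below are illustrative placeholders, not the densities of [19].

```python
import math


def blur_circle_arcmin(defocus_d: float, pupil_mm: float = 5.0) -> float:
    """Angular diameter of the defocus blur circle: beta ~ p * dD, with
    the pupil in meters and defocus in diopters giving beta in radians."""
    return math.degrees((pupil_mm / 1000.0) * defocus_d) * 60.0


def geometric_threshold(spacing_arcmin: float, pupil_mm: float = 5.0) -> float:
    """Smallest defocus (D) whose blur circle matches the given
    receptive-field spacing: invert beta = p * dD."""
    return math.radians(spacing_arcmin / 60.0) / (pupil_mm / 1000.0)


# Illustrative receptive-field spacings (arcmin) growing with eccentricity;
# a faithful prediction would use the density formula of Watson [19].
for ecc_deg, spacing in [(0, 0.5), (5, 1.0), (10, 1.7), (15, 2.5)]:
    print(ecc_deg, round(geometric_threshold(spacing), 3))
```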

3. Depth perception and visual eccentricity

In addition to optical blur, here we study depth perception with a light field display, which can vary the intensity of light not only spatially but also angularly [21,22].

Setup Figure 3 shows the study setup. We used a parallax-barrier-based [21] ($300 \mu m$ pitch size and $120 \mu m$ pinhole aperture) light field display prototype built with off-the-shelf components [8]. The display panel was built by horizontally tiling three 5.98-inch 2K panels (Topfoison TF60006A) in a 3D-printed housing. It supports an angular resolution of $3.2$ views$/\deg$ and a spatial resolution of $579\times 333$. Compared with designs suggested in previous literature [23,24], the display maintains high angular resolution to provide the precise accommodation cues needed to validate our study results.

Fig. 3. Depth perception study design. (a) shows simulated retinal images via DSLR camera photography [8]; the focus depth changes from far (left) to near (right). (b) shows the study setup. The bottom inset is a simulated retinal image of the stimuli; the green object is the fixation target and the other two are the test targets (see Visualization 2).

During the study, subjects remained seated at $30cm$ ($15\deg$ FoV) from the display with their non-dominant eye occluded. A chinrest was used to precisely control the viewing distance. In this configuration, each pixel in the light field display provides two views both horizontally and vertically for the smallest pupil size ($2 mm$ in diameter).

Stimuli The bottom inset in Fig. 3(b) shows a sample stimulus. The subjects were instructed to fixate on the fixation target, a small green square at the screen center at a depth of $3.2{D}$. To the side were two depth discrimination targets, vertically elongated rectangles textured with broadband binary Voronoi diagrams. The two targets were rendered side by side with a small $2.5mm$ gap to avoid occlusion cues while keeping their eccentricities close to the tested condition. To avoid size cues, we dynamically rescaled the test targets based on their depths so that they always subtended the same visual angle. Subjects were instructed to keep watching the fixation target during the entire study.
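The depth-dependent rescaling can be sketched as follows; the 1-degree target size is an illustrative placeholder, and only the constant-visual-angle rule comes from the text.

```python
import math


def physical_size_mm(angle_deg: float, distance_mm: float) -> float:
    """On-screen extent needed for a target at the given distance to
    subtend a fixed visual angle: s = 2 * d * tan(theta / 2)."""
    return 2 * distance_mm * math.tan(math.radians(angle_deg) / 2)


# As the rendered depth moves from the 3.2 D fixation depth toward the
# viewer, the distance shrinks, so the drawn size shrinks proportionally
# to keep the (illustrative) 1-degree angular size constant.
for diopters in (3.2, 3.6, 4.0):
    distance_mm = 1000.0 / diopters  # diopters -> distance in millimeters
    print(diopters, round(physical_size_mm(1.0, distance_mm), 2))
```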

The two targets appeared at $8$ eccentricities from the fovea to $15\deg$. We used the method of adjustment to measure the depth detection thresholds. At the beginning of each trial, the two test targets were positioned at the same depth as the fixation target ($3.2{D}$). Subjects then pressed up/down arrow buttons to increase/decrease the depth separation between the two targets until they could perceive a depth difference. A warning appeared when the depth separation reached $0$ or the hardware limit. We ran $4$ trials for each eccentricity, with all conditions randomized. A capture of the study can be seen in Visualization 2.
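A minimal sketch of the adjustment loop is below; the step size, limit value, and key-replay interface are hypothetical placeholders for the real arrow-key handling and hardware limit.

```python
STEP_D = 0.05   # illustrative adjustment step in diopters
MAX_SEP = 2.0   # illustrative hardware limit on depth separation (D)


def adjust(keys):
    """Replay a sequence of 'up'/'down'/'done' presses and return the
    final depth separation (D); warn and clamp at 0 or the limit."""
    sep = 0.0   # both targets start at the fixation depth
    for key in keys:
        if key == "done":
            break
        sep += STEP_D if key == "up" else -STEP_D
        if sep <= 0.0 or sep >= MAX_SEP:
            print("warning: depth separation at limit")
            sep = min(max(sep, 0.0), MAX_SEP)
    return sep


print(round(adjust(["up"] * 6 + ["down", "done"]), 2))  # -> 0.25
```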

Subjects Four subjects, aged 23 to 46, participated. One was an author. The other three were unaware of the experimental hypothesis.

Results Figure 4 plots the mean thresholds with standard deviations. All subjects showed a consistent trend of increasing depth detection thresholds as eccentricity grows. We observed smaller individual differences than in the first experiment on blur discrimination.

Fig. 4. Depth detection thresholds (Y) as a function of eccentricity (X). See Data File 2 for underlying values.

4. Discussion

We found that the blur discrimination threshold is significantly lower than previously reported. There are two notable differences between our methods and those reported previously. First, we used two-alternative forced-choice tasks, which might have improved subjects’ performance compared with previous studies using the method of adjustment. Second, we did not apply cyclopentolate to subjects, as done in [12], to dilate the pupil and paralyze the accommodation muscles. The former can help with blur detection by creating larger blur through a larger aperture, but the latter can have the opposite effect by interfering with the natural dynamics of accommodation, which can be useful for detecting defocus blur. Nonetheless, the blur discrimination thresholds reported in this study were surprisingly small: the geometric calculation suggests that subjects can discern differences smaller than the spacing between retinal ganglion cells. Further investigation to clarify the mechanisms of blur detection and discrimination will be beneficial.

Computational displays with depth cues and fast response times can benefit a variety of applications, such as VR/AR and automotive assistance. To save computation, as in foveated rendering, we could leverage the consistently reduced monocular depth perception at larger eccentricities (Fig. 4). However, we cannot yet do so based on the commonly considered root cause, defocus blur: the blur perception thresholds from optical stimuli (Fig. 2) show that some individuals retain high blur sensitivity as far out as $15\deg$, with the minimum threshold down to $0.2{D}$. The different trends discovered in our two studies suggest that future research should analyze the whole retina-lens-display pipeline, covering not only low-level optical/anatomical vision mechanisms but also high-level visual cortex processing.

Funding

National Science Foundation (CNS1650499, NRT1633299, OAC1919752).

Disclosures

The authors declare no conflicts of interest.

References

1. M. Kwon and R. Liu, “Linkage between retinal ganglion cell density and the nonuniform spatial integration across the visual field,” Proc. Natl. Acad. Sci. 116(9), 3827–3836 (2019). [CrossRef]  

2. B. Guenter, M. Finch, S. Drucker, D. Tan, and J. Snyder, “Foveated 3d graphics,” ACM Trans. Graph. 31(6), 1–164 (2012). [CrossRef]  

3. A. Patney, M. Salvi, J. Kim, A. Kaplanyan, C. Wyman, N. Benty, D. Luebke, and A. Lefohn, “Towards foveated rendering for gaze-tracked virtual reality,” ACM Trans. Graph. 35(6), 1–12 (2016). [CrossRef]  

4. G. Tan, Y.-H. Lee, T. Zhan, J. Yang, S. Liu, D. Zhao, and S.-T. Wu, “Foveated imaging for near-eye displays,” Opt. Express 26(19), 25076–25085 (2018). [CrossRef]  

5. S. A. Cholewiak, G. D. Love, and M. S. Banks, “Creating correct blur and its effect on accommodation,” J. Vis. 18(9), 1 (2018). [CrossRef]  

6. G. Tan, T. Zhan, Y.-H. Lee, J. Xiong, and S.-T. Wu, “Polarization-multiplexed multiplane display,” Opt. Lett. 43(22), 5651–5654 (2018). [CrossRef]  

7. T. Zhan, Y.-H. Lee, and S.-T. Wu, “High-resolution additive light field near-eye display by switchable Pancharatnam–Berry phase lenses,” Opt. Express 26(4), 4863–4872 (2018). [CrossRef]

8. Q. Sun, F.-C. Huang, J. Kim, L.-Y. Wei, D. Luebke, and A. Kaufman, “Perceptually-guided foveation for light field displays,” ACM Trans. Graph. 36(6), 1–13 (2017). [CrossRef]  

9. N. Padmanaban, R. Konrad, T. Stramer, E. A. Cooper, and G. Wetzstein, “Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays,” Proc. Natl. Acad. Sci. 114(9), 2183–2188 (2017). [CrossRef]  

10. T. Zhan, J. Zou, M. Lu, E. Chen, and S.-T. Wu, “Wavelength-multiplexed multi-focal-plane see-through near-eye displays,” Opt. Express 27(20), 27507–27513 (2019). [CrossRef]

11. J. Read, “Visual perception: Understanding visual cues to depth,” Curr. Biol. 22(5), R163–R165 (2012). [CrossRef]  

12. B. Wang, K. J. Ciuffreda, and T. Irish, “Equiblur zones at the fovea and near retinal periphery,” Vision Res. 46(21), 3690–3698 (2006). [CrossRef]  

13. L. Ronchi and G. Molesini, “Depth of focus in peripheral vision,” Ophthalmic Res. 7(3), 152–157 (1975). [CrossRef]  

14. H. R. Taylor, “Applying new design principles to the construction of an illiterate E chart,” Optom. Vis. Sci. 55(5), 348–351 (1978). [CrossRef]

15. H. Levitt, “Transformed up-down methods in psychoacoustics,” J. Acoust. Soc. Am. 49(2B), 467–477 (1971). [CrossRef]  

16. K. Akeley, S. J. Watt, A. R. Girshick, and M. S. Banks, “A stereo display prototype with multiple focal distances,” ACM Trans. Graph. 23(3), 804–813 (2004). [CrossRef]  

17. F. Campbell and G. Westheimer, “Dynamics of accommodation responses of the human eye,” J. Physiol. 151(2), 285–295 (1960). [CrossRef]

18. H. H. Schütt, S. Harmeling, J. H. Macke, and F. A. Wichmann, “Painfree and accurate Bayesian estimation of psychometric functions for (potentially) overdispersed data,” Vision Res. 122, 105–123 (2016). [CrossRef]

19. A. B. Watson, “A formula for human retinal ganglion cell receptive field density as a function of visual field location,” J. Vis. 14(7), 15 (2014). [CrossRef]  

20. A. Seidemann, F. Schaeffel, A. Guirao, N. Lopez-Gil, and P. Artal, “Peripheral refractive errors in myopic, emmetropic, and hyperopic young subjects,” J. Opt. Soc. Am. A 19(12), 2363–2373 (2002). [CrossRef]  

21. F. E. Ives, “A novel stereogram,” J. Franklin Inst. 153(1), 51–52 (1902). [CrossRef]  

22. D. Lanman and D. Luebke, “Near-eye light field displays,” ACM Trans. Graph. 32(6), 1–10 (2013). [CrossRef]  

23. Y. Takaki, “High-density directional display for generating natural three-dimensional images,” Proc. IEEE 94(3), 654–663 (2006). [CrossRef]  

24. F.-C. Huang, K. Chen, and G. Wetzstein, “The light field stereoscope: Immersive computer graphics via factored near-eye light field displays with focus cues,” ACM Trans. Graph. 34(4), 60 (2015). [CrossRef]  


Stramer, T.

N. Padmanaban, R. Konrad, T. Stramer, E. A. Cooper, and G. Wetzstein, “Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays,” Proc. Natl. Acad. Sci. 114(9), 2183–2188 (2017).
[Crossref]

Sun, Q.

Q. Sun, F.-C. Huang, J. Kim, L.-Y. Wei, D. Luebke, and A. Kaufman, “Perceptually-guided foveation for light field displays,” ACM Trans. Graph. 36(6), 1–13 (2017).
[Crossref]

Takaki, Y.

Y. Takaki, “High-density directional display for generating natural three-dimensional images,” Proc. IEEE 94(3), 654–663 (2006).
[Crossref]

Tan, D.

B. Guenter, M. Finch, S. Drucker, D. Tan, and J. Snyder, “Foveated 3d graphics,” ACM Trans. Graph. 31(6), 1–164 (2012).
[Crossref]

Tan, G.

Taylor, H. R.

H. R. Taylor, “Applying new design principles to the construction of an illiterate e chart,” Optom. & Vis. Sci. 55(5), 348–351 (1978).
[Crossref]

Wang, B.

B. Wang, K. J. Ciuffreda, and T. Irish, “Equiblur zones at the fovea and near retinal periphery,” Vision Res. 46(21), 3690–3698 (2006).
[Crossref]

Watson, A. B.

A. B. Watson, “A formula for human retinal ganglion cell receptive field density as a function of visual field location,” J. Vis. 14(7), 15 (2014).
[Crossref]

Watt, S. J.

K. Akeley, S. J. Watt, A. R. Girshick, and M. S. Banks, “A stereo display prototype with multiple focal distances,” ACM Trans. Graph. 23(3), 804–813 (2004).
[Crossref]

Wei, L.-Y.

Q. Sun, F.-C. Huang, J. Kim, L.-Y. Wei, D. Luebke, and A. Kaufman, “Perceptually-guided foveation for light field displays,” ACM Trans. Graph. 36(6), 1–13 (2017).
[Crossref]

Westheimer, G.

F. Campbell and G. Westheimer, “Dynamics of accommodation responses of the human eye,” The J. physiology 151(2), 285–295 (1960).
[Crossref]

Wetzstein, G.

N. Padmanaban, R. Konrad, T. Stramer, E. A. Cooper, and G. Wetzstein, “Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays,” Proc. Natl. Acad. Sci. 114(9), 2183–2188 (2017).
[Crossref]

F.-C. Huang, K. Chen, and G. Wetzstein, “The light field stereoscope: Immersive computer graphics via factored near-eye light field displays with focus cues,” ACM Trans. Graph. 34(4), 60 (2015).
[Crossref]

Wichmann, F. A.

H. H. Schütt, S. Harmeling, J. H. Macke, and F. A. Wichmann, “Painfree and accurate bayesian estimation of psychometric functions for (potentially) overdispersed data,” Vision Res. 122, 105–123 (2016).
[Crossref]

Wu, S.-T.

Wyman, C.

A. Patney, M. Salvi, J. Kim, A. Kaplanyan, C. Wyman, N. Benty, D. Luebke, and A. Lefohn, “Towards foveated rendering for gaze-tracked virtual reality,” ACM Trans. Graph. 35(6), 1–12 (2016).
[Crossref]

Xiong, J.

Yang, J.

Zhan, T.

Zhao, D.

Zou, J.

ACM Trans. Graph. (6)

B. Guenter, M. Finch, S. Drucker, D. Tan, and J. Snyder, “Foveated 3d graphics,” ACM Trans. Graph. 31(6), 1–164 (2012).
[Crossref]

A. Patney, M. Salvi, J. Kim, A. Kaplanyan, C. Wyman, N. Benty, D. Luebke, and A. Lefohn, “Towards foveated rendering for gaze-tracked virtual reality,” ACM Trans. Graph. 35(6), 1–12 (2016).
[Crossref]

Q. Sun, F.-C. Huang, J. Kim, L.-Y. Wei, D. Luebke, and A. Kaufman, “Perceptually-guided foveation for light field displays,” ACM Trans. Graph. 36(6), 1–13 (2017).
[Crossref]

K. Akeley, S. J. Watt, A. R. Girshick, and M. S. Banks, “A stereo display prototype with multiple focal distances,” ACM Trans. Graph. 23(3), 804–813 (2004).
[Crossref]

D. Lanman and D. Luebke, “Near-eye light field displays,” ACM Trans. Graph. 32(6), 1–10 (2013).
[Crossref]

F.-C. Huang, K. Chen, and G. Wetzstein, “The light field stereoscope: Immersive computer graphics via factored near-eye light field displays with focus cues,” ACM Trans. Graph. 34(4), 60 (2015).
[Crossref]

Curr. Biol. (1)

J. Read, “Visual perception: Understanding visual cues to depth,” Curr. Biol. 22(5), R163–R165 (2012).
[Crossref]

J. Acoust. Soc. Am. (1)

H. Levitt, “Transformed up-down methods in psychoacoustics,” J. Acoust. Soc. Am. 49(2B), 467–477 (1971).
[Crossref]

J. Franklin Inst. (1)

F. E. Ives, “A novel stereogram,” J. Franklin Inst. 153(1), 51–52 (1902).
[Crossref]

J. Opt. Soc. Am. A (1)

J. Vis. (2)

A. B. Watson, “A formula for human retinal ganglion cell receptive field density as a function of visual field location,” J. Vis. 14(7), 15 (2014).
[Crossref]

S. A. Cholewiak, G. D. Love, and M. S. Banks, “Creating correct blur and its effect on accommodation,” J. Vis. 18(9), 1 (2018).
[Crossref]

Ophthalmic Res. (1)

L. Ronchi and G. Molesini, “Depth of focus in peripheral vision,” Ophthalmic Res. 7(3), 152–157 (1975).
[Crossref]

Opt. Express (3)

Opt. Lett. (1)

Optom. & Vis. Sci. (1)

H. R. Taylor, “Applying new design principles to the construction of an illiterate e chart,” Optom. & Vis. Sci. 55(5), 348–351 (1978).
[Crossref]

Proc. IEEE (1)

Y. Takaki, “High-density directional display for generating natural three-dimensional images,” Proc. IEEE 94(3), 654–663 (2006).
[Crossref]

Proc. Natl. Acad. Sci. (2)

M. Kwon and R. Liu, “Linkage between retinal ganglion cell density and the nonuniform spatial integration across the visual field,” Proc. Natl. Acad. Sci. 116(9), 3827–3836 (2019).
[Crossref]

N. Padmanaban, R. Konrad, T. Stramer, E. A. Cooper, and G. Wetzstein, “Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays,” Proc. Natl. Acad. Sci. 114(9), 2183–2188 (2017).
[Crossref]

The J. physiology (1)

F. Campbell and G. Westheimer, “Dynamics of accommodation responses of the human eye,” The J. physiology 151(2), 285–295 (1960).
[Crossref]

Vision Res. (2)

H. H. Schütt, S. Harmeling, J. H. Macke, and F. A. Wichmann, “Painfree and accurate bayesian estimation of psychometric functions for (potentially) overdispersed data,” Vision Res. 122, 105–123 (2016).
[Crossref]

B. Wang, K. J. Ciuffreda, and T. Irish, “Equiblur zones at the fovea and near retinal periphery,” Vision Res. 46(21), 3690–3698 (2006).
[Crossref]

Supplementary Material (4)

» Data File 1: Raw data of the blur discrimination experiment
» Data File 2: Raw data of the depth discrimination experiment
» Visualization 1: Procedure and stimulus used in Experiment 1
» Visualization 2: Procedure and stimulus used in Experiment 2


Figures (4)

Fig. 1. Blur perception study setup and a sampled stimulus (see Visualization 1).
Fig. 2. Blur study results. We plot the blur detection/discrimination thresholds as a function of eccentricity and pedestal/baseline blur ($-2, -1, 0, 1, 2\,\mathrm{D}$) for four subjects. All thresholds are computed as differences in diopters, i.e., $|D_a - D_b|$ for test case $a$ and control (pedestal/baseline) case $b$. For example, a threshold of 0.1 D at a 2.0 D baseline means the subject can perceive the blur caused by a target at 2.1 D while the eye focuses at 2.0 D. The X-axis represents retinal eccentricity in degrees; the Y-axis represents measured thresholds in diopters. Each vertical bar indicates the $95\%$ confidence interval around the $75\%$ performance level. See Data File 1 for underlying values.
Fig. 3. Depth perception study design. (a) Simulated retinal images captured with a DSLR camera [8]. The focus depth changes from far (left) to near (right). (b) The study setup. The bottom inset is a simulated retinal image of the stimuli. The green object is the fixation target; the other two are the test targets (see Visualization 2).
Fig. 4. Depth discrimination thresholds (Y-axis) as a function of eccentricity (X-axis). See Data File 2 for underlying values.
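The dioptric threshold convention used in the figure captions can be sketched as follows (a minimal illustration with hypothetical helper names, not code from the paper): defocus is expressed in diopters, $D = 1/d$ for a distance $d$ in meters, and a threshold is the absolute dioptric difference between the test and the pedestal/baseline stimulus.

```python
def diopters(distance_m: float) -> float:
    """Convert a viewing distance in meters to defocus in diopters (D = 1/d)."""
    return 1.0 / distance_m

def blur_threshold(test_d: float, baseline_d: float) -> float:
    """Threshold as |D_a - D_b| for test case a and baseline (pedestal) case b."""
    return abs(test_d - baseline_d)

# Example from the Fig. 2 caption: with the eye focused at 2.0 D,
# a target at 2.1 D is just discriminable, i.e., a 0.1 D threshold.
print(round(blur_threshold(2.1, 2.0), 3))  # 0.1
```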
