
Diffuser-based computational imaging funduscope

Open Access

Abstract

Poor access to eye care is a major global challenge that could be ameliorated by low-cost, portable, and easy-to-use diagnostic technologies. Diffuser-based imaging has the potential to enable inexpensive, compact optical systems that can reconstruct a focused image of an object over a range of defocus errors. Here, we present a diffuser-based computational funduscope that reconstructs important clinical features of a model eye. Compared to existing diffuser-imager architectures, our system features an infinite-conjugate design by relaying the ocular lens onto the diffuser. This offers shift-invariance across a wide field-of-view (FOV) and an invariant magnification across an extended depth range. Experimentally, we demonstrate fundus image reconstruction over a 33° FOV and robustness to ±4D refractive error using a constant point-spread-function. Combined with diffuser-based wavefront sensing, this technology could enable combined ocular aberrometry and funduscopic screening through a single diffuser sensor.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Over a billion people worldwide currently suffer from poor vision that could be improved with prescription eyeglasses [1]. A major barrier to providing eyeglasses to this population is access to the trained personnel and expensive equipment required for a comprehensive eye examination. Economic restrictions, a lack of clinical providers, and distance to healthcare settings all limit access to effective ocular diagnosis and treatment [1–3]. To overcome these barriers, eye care providers, such as Aravind Eye Care System and L V Prasad Eye Institute, are implementing an approach that relies heavily on point-of-care screening provided by trained residents at the community level [4–6]. This strategy alleviates issues with transportation and reduces cultural barriers that prevent uptake of services [6]. For these minimally-trained workers to be effective, there is a need for low-cost, portable, and easy-to-use devices capable of the objective assessment of a wide variety of ophthalmic diseases.

A comprehensive eye examination includes both refraction to provide eyeglass prescriptions and inspection to screen for abnormalities. Numerous inexpensive, point-of-care tools for managing ophthalmic conditions have recently been developed. With the widespread adoption of mobile phones and rapid advancement of their camera technology, the potential of smartphone-based ophthalmic imaging has been recognized [7–11]. Other portable techniques have been demonstrated with an inexpensive point-and-shoot camera [12] and with a Raspberry Pi [13]. A computational single-pixel ophthalmoscope was recently demonstrated for increased visibility through opacities [14]. To improve efficiency and quality of prescribing eyeglasses, accurate, low-cost ocular aberrometry has been demonstrated with both handheld [15,16] and smartphone-based autorefractors [17]. It is likely that many such devices will be required to tackle the diverse causes of global visual impairment such as age-related macular degeneration, glaucoma, and uncorrected refractive error [18]. A simple device that combines funduscopy and aberrometry, two essential parts of a comprehensive eye exam, could reduce costs and training time while increasing efficiency.

Diffuser cameras have been explored as an attractive alternative to conventional approaches to measure lightfield information [19–25]. With a thorough characterization of the diffuser’s caustic point-spread-function (PSF) under incoherent illumination, plenoptic analysis [21] allows view synthesis and digital refocusing [19]. Lens-free diffuser-based imaging can also be implemented by formulating the reconstruction as an inverse problem, enabling single-shot volumetric photography [26] and microscopy [27]. Further, utilizing temporal multiplexing by the rolling shutter of a CMOS sensor, video reconstruction is possible from a single diffuser image [28]. Extended depth-of-field photography has been demonstrated by deconvolving with the invariant far-field diffuser PSF [29]. Diffuser-based phase imaging under spatially partially coherent illumination [30,31] is possible by solving the transport-of-intensity equation [32,33]. Wavefront sensing [34] and ocular aberrometry [35] have also recently been demonstrated with diffusers. In each instance, the diffuser can be treated as a randomly distributed microlens array with slightly varying foci [19]. However, compared to the spot pattern formed by a microlens array, the caustic pattern formed by the diffuser is more structured, producing a more evenly distributed transfer function. This in turn makes the deconvolution problem better conditioned and minimizes reconstruction artifacts [27].

Diffuser imaging is enabled by computational imaging, which synergistically combines optics and algorithms [36]. It belongs to a class of computational imaging architectures that reduce overall optical complexity through computation, including coded aperture imaging [37,38], lens-free holography [39], compound imagers [24,40], and lightfield/integral imagers [23,41]. Diffuser imaging is attractive because it uses a simple and cheap optical element, does not require any special alignment between the diffuser and the image sensor [19,35], and achieves imaging capability in a single shot under both spatially coherent [35] and incoherent illumination [19,27].

In this paper, we develop and demonstrate funduscopy of a model eye with a diffuser-based computational imager. Our diffuser-imager design is particularly inspired by the work of Antipa et al. [19]. In [19], the diffuser is used in a “finite-conjugate” configuration, in which each object point projects a spherical wavefront onto the diffuser, similar to a standard “single-lens” imager. In funduscopy, due to the presence of the ocular lens, an “infinite-conjugate” configuration is better suited, in which each in-focus object point on the retina projects a planar wavefront onto the diffuser. In this configuration, the diffuser can be treated as the second lens in a telecentric 4F system, which provides both shift-invariance across the field-of-view (FOV) and an invariant magnification under defocus. Conveniently, this configuration is identical to that used in diffuser-based ocular aberrometry [35]. Taken together, we believe this diffuser-based sensing framework has exciting potential advantages in ocular imaging, and could enable simultaneous funduscopy and ocular aberrometry on the same platform.

The diffuser-imager is integrated with an illumination module based on an annular ring of LEDs, which provides flood illumination across a 33$^{\circ }$ FOV. We demonstrate the imaging capability of the proposed device by reconstructing various incoherent objects through a simple model eye, including patterns on a self-emitting OLED screen and incoherently illuminated printed patterns, as well as the fundus of a commercial model eye. In addition, we assess image reconstruction quality and demonstrate robustness of the reconstruction algorithm to refractive error imparted on the object or on the PSF used for reconstruction. This work shows promise for the future development of a device that could perform funduscopy and aberrometry through a single diffuser camera.

2. Methods

We image the fundus with a “DiffuserCam”, which consists of an iris placed adjacent to a holographic diffuser, separated by a distance $f_d$ from the image sensor. To achieve large-FOV ocular imaging, we jointly designed the optical hardware, calibration procedures, acquisition methods, and reconstruction algorithms. Our workflow consists of three stages, summarized in Fig. 1: a one-time system PSF calibration, image acquisition, and computational reconstruction. In the PSF calibration stage, the system response is captured by displaying a point source on an OLED screen placed at the front focal plane of a simple eye model. In the acquisition stage, the image sensor captures a measurement of the flood-illuminated fundus through the diffuser. In the reconstruction stage, we solve a regularized deconvolution problem to recover a high-quality fundus image. In this section, we first describe the general optical design, then lay out the specific experimental implementation for optimizing system performance, and lastly outline the theoretical principles and formulation of our reconstruction algorithm.


Fig. 1. Overview of the proposed diffuser ocular imaging workflow, including: (a) PSF calibration, (b) diffuser-image acquisition, and (c) image reconstruction. The PSF is a highly structured caustic pattern. During acquisition, the sensor captures a 2D image resulting from the PSF convolved with the remitted light of the fundus. A high-quality retinal image is reconstructed by solving a regularized deconvolution problem.


2.1 Optical design

Existing fundus cameras use lens-based imaging, where high image quality is achieved by optimizing the lens design, typically resulting in multi-element, bulky, and expensive systems [13]. We propose a thin diffuser as an alternative to the imaging lens, which significantly reduces the size, weight, and cost of the whole system, in addition to being compatible with diffuser-based aberrometry. In this system (Fig. 1(b)), the eye is illuminated with an LED ring through a beam splitter. The light reflected from the fundus first passes through a 4F relay lens system. The diffuser is placed at the conjugate plane of the eye lens, where it captures parallel beams from an emmetropic eye. The image sensor is placed at the “back focal plane” of the diffuser, where sharp caustic patterns form [26]. To achieve large-FOV imaging and illumination, a carefully designed optical system is required. In the following, we describe the design of the imaging path and the illumination path, shown in Fig. 2(a) and (b), respectively.


Fig. 2. Zemax ray-tracing of the (a) imaging and (b) illumination paths of the diffuser imager. (a) The cornea image is relayed to the diffuser so that each point on the retina produces a shifted caustic PSF. (b) Ring LEDs are also relayed to the cornea to avoid central illumination and reduce specular reflection. The FOV of the system is currently limited by the numerical aperture of the illumination optics.


To simulate an eye in a Zemax model, we use a curved retinal surface ($R$ = 14 mm), a bi-convex lens ($f = 25$ mm, Thorlabs LB1761-A), and an iris ($d = 7.7$ mm) (Fig. 2(a)). This results in collimated light exiting the pupil from any spot on the fundus. A relay system images the cornea to the diffuser, so that each point on the fundus produces a parallel beam with a distinct angle impinging on the same region of the diffuser within the desired FOV. This increases the linear shift-invariance (LSI) of the system over a large FOV, which in turn simplifies the reconstruction algorithm. To further reduce large-angle distortions, we use a Double-Gauss design [42]. In our design, two lens groups (Lens 1 and Lens 2), each containing two doublets, are inserted. Lens 1 has a short focal length $f_1=37.5$ mm and collects a large angle of reflected light leaving the eye. The light is relayed by Lens 2 through the beam splitter (Thorlabs BSW16) and onto the diffuser. To avoid the beam splitter limiting the large field angle, Lens 2 should also have a short focal length $f_2=50$ mm to achieve a small magnification. However, the volume of the beam splitter limits the minimum length of $d_4+d_5$ to more than 40 mm. The relay system was designed with all of these constraints in mind. Lens 1 consists of two $f=75$ mm doublets (Thorlabs AC508-075-A) and Lens 2 consists of two $f=100$ mm doublets (Thorlabs AC508-100-A). The Zemax simulation, in which we modeled the system with the actual lenses used, shows that reflected angles within $\pm 15^{\circ }$ are imaged to the diffuser plane without shift. For reflected angles between $15^{\circ }$ and $25^{\circ }$, the collimated light exiting the pupil still fully illuminates the iris adjacent to the diffuser despite the aberration-induced beam shift, indicating that the LSI condition is approximately maintained. For field angles larger than $25^{\circ }$, the beam shift passes beyond the edge of the iris, which results in a different caustic PSF shape and a violation of the LSI condition. At a $30^{\circ }$ field angle, the shift is significant: only $6.3\%$ of the area of the entrance aperture of the DiffuserCam is illuminated by the collimated beam.

We model the thin diffuser as an array of randomly distributed microlenses with approximately the same focal length $f_d$, similar to [19]. By placing the camera $f_d$ away from the diffuser, the PSF contains high-contrast caustic patterns. Precise control of this distance is not required because the caustic patterns from the diffuser are intrinsically more robust to defocus than a standard lens: each diffuser feature can be treated as a low numerical-aperture (NA) lens that provides a large depth-of-field (DOF), removing the need for precise focus control. Under the LSI condition, the PSF size is set by the diameter $d$ of the iris placed in front of the diffuser. The extent of the diffuser-image is approximately $d+2s$, where the maximum displacement, $s$, is related to the imaging-path FOV $\theta _{\mathrm {imag}}$ by $s = (f_1/f_2)f_d\theta _{\mathrm {imag}}/2$. In practice, one needs to choose an image sensor of size $D > d+2s$. We found this condition is achievable with off-the-shelf components: a $0.5^{\circ }$ holographic diffuser (Edmund Optics 47-989, $f_d\approx 6$ mm) combined with the relay system described above provides a >$50^{\circ }$ FOV, which is within the range found in high-end commercial fundus cameras.
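
To make these sizing relations concrete, the following Python sketch (our illustration, not the authors' code) plugs in the values quoted in the text; the 3.2 mm iris diameter is taken from Sec. 2.2:

```python
import math

# Sensor sizing for the imaging path: the diffuser-image extent is
# d + 2s with s = (f1/f2) * f_d * theta_imag / 2, and the sensor
# must satisfy D > d + 2s.
f1, f2 = 37.5, 50.0              # mm, relay focal lengths
f_d = 6.0                        # mm, diffuser-to-sensor distance
d = 3.2                          # mm, iris diameter (Sec. 2.2)
theta_imag = math.radians(50.0)  # rad, target imaging-path FOV

s = (f1 / f2) * f_d * theta_imag / 2   # maximum caustic displacement
D_min = d + 2 * s                      # required sensor extent
print(f"s = {s:.2f} mm -> need sensor extent D > {D_min:.2f} mm")
# -> s = 1.96 mm, D > 7.13 mm. The Quantalux sensor (1920 x 1080 px
# at 5.04 um) spans about 9.7 mm x 5.4 mm, so the requirement fits
# within its long dimension.
```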

As shown in Fig. 2(b), the off-axis LED ring is conjugate to the eye lens through the same beam splitter and relay system. The achievable FOV is further limited by the illuminated area on the fundus, which approximately spans an angular FOV $\theta _{\mathrm {illum}} = (f_2/f_1)\theta _{\mathrm {LED}}$. The emitting angular range of the LED, $\theta _{\mathrm {LED}}$, needs to be optimized to maximize the measurement contrast. Our preliminary prototype uses an LED ring to minimize specular reflection; its limited $\theta _{\mathrm {LED}}$ achieves a $33^{\circ }$ FOV. The ring diameter was designed to be 8 mm to approximately match the pupil size after demagnification by the relay system, and was practically constrained by the coarse LED grid of the prototype. The illumination FOV can be further fine-tuned by adjusting the distance between the LED ring and Lens 2.
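
A similar back-of-the-envelope check for the illumination path (again a sketch of ours; the text does not quote $\theta _{\mathrm {LED}}$, so it is inferred here from the achieved FOV):

```python
# Illumination-path geometry, using values from the text.
f1, f2 = 37.5, 50.0       # mm, relay focal lengths
led_ring_diameter = 9.5   # mm, as built (Sec. 2.2)

# The relay demagnifies the LED ring onto the pupil by f1/f2:
print(f"ring at pupil ~ {(f1 / f2) * led_ring_diameter:.1f} mm")  # ~7.1 mm

# Since theta_illum = (f2/f1) * theta_LED, the achieved 33-deg FOV
# implies an effective LED angular range of:
theta_illum = 33.0        # deg, measured illumination-limited FOV
print(f"implied theta_LED ~ {(f1 / f2) * theta_illum:.1f} deg")   # ~24.8 deg
```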

2.2 Hardware design & implementation

An overview of the diffuser-based funduscope is shown in Fig. 3(a). The setup is compact, especially the illumination and DiffuserCam modules, as shown in Fig. 3(b). To increase contrast and reduce the stray-light background, a pair of crossed polarizers is placed in front of the LED and the diffuser imager. The LED ring is implemented with an off-the-shelf LED matrix (SparkFun LuMini $8\times 8$ LED Matrix, 64 APA102-2020 LEDs, 2 mm spacing). Due to the discrete grid of the array, the actual diameter of the ring is 9.5 mm in our prototype (Fig. 3(c)). In addition, the LED array is covered by a 3D-printed cap that limits the illumination angle (Fig. 3(c)) in order to block stray light from entering the sensor directly. For the diffuser imager, the diffuser and a 3D-printed iris of 3.2 mm diameter are placed adjacent to one another and $\sim$6 mm before a monochromatic sCMOS image sensor (Thorlabs Quantalux, 5.04 $\mu$m pixel size, 1920 $\times$ 1080 pixels, full-well capacity 23000 e$^-$, dynamic range 87 dB).


Fig. 3. Experimental setup of the diffuser-based funduscope. (a) Overhead view of the imaging and illumination paths, (b) details of compact components including the LED ring, a pair of polarizers crossed between the LED array and diffuser, and the diffuser camera, (c) a 3D-printed cap to confine the divergence angle of the illumination, and (d) diffuser camera components including a 3D printed iris placed in front of the diffuser to limit the size of the PSF.


2.3 Algorithm

The final fundus images are reconstructed following the general inverse problem framework that combines two complementary sources of information: the forward model describing the imaging process with the pre-calibrated PSF, and the prior describing the structural or statistical information about fundus images.

Specifically, we used a 2D LSI model that assumes the raw measurement is the convolution of the object with a single shift-invariant PSF, pre-captured with an on-axis point source. With this LSI model, our preliminary experiments show high-resolution reconstruction in the central FOV region of a diffuser image of a retinal object. However, direct deconvolution is inevitably sensitive to noise due to the poor conditioning of the inverse problem in this non-conjugate imaging geometry. We mitigated this poor conditioning by incorporating a prior into the deconvolution algorithm; the prior we used is an L2-norm regularizer. Accordingly, we formulate the regularized inverse problem as the minimization of:

$$\hat{\mathbf{x}}=\mathop{\textrm{argmin}}\limits_{\mathbf{x}}||\mathbf{y}-\mathbf{h}*\mathbf{x}||^{2}_{2}+\mu||\mathbf{x}||^{2}_{2}, $$
where $\mathbf {y}$ denotes the measurement, $\mathbf {x}$ the object, $\mathbf {h}$ the PSF, and $\mu ||\mathbf {x}||^{2}_{2}$ the L2 regularization term with weight $\mu$. This Tikhonov-regularized solution is conveniently calculated in closed form by Fourier transforming the measurement, filtering in the Fourier domain, and inverse Fourier transforming the result. The regularization parameter $\mu$ is chosen by picking the visually optimal reconstruction while varying $\mu$ over a predefined small range.
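
For illustration, a minimal Python sketch of this closed-form solution (the authors' implementation is in MATLAB; the function and variable names here are ours):

```python
import numpy as np

def tikhonov_deconvolve(y: np.ndarray, h: np.ndarray, mu: float = 1e-3) -> np.ndarray:
    """Closed-form Tikhonov-regularized deconvolution of Eq. (1).

    y  : raw diffuser measurement (2D array)
    h  : calibrated caustic PSF, same shape as y, centered in the frame
    mu : L2 regularization weight; as in the text, swept over a small
         range and chosen by visual inspection of the reconstruction.
    Assumes circular convolution; zero-pad y and h for a linear model.
    """
    H = np.fft.fft2(np.fft.ifftshift(h))  # move the PSF center to the origin
    Y = np.fft.fft2(y)
    # Fourier-domain filter: X = conj(H) * Y / (|H|^2 + mu)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + mu)
    return np.real(np.fft.ifft2(X))
```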

3. Results

In this section, we assess the diffuser funduscope in three scenarios. First, we conducted PSF calibration and FOV analysis of the imaging system using a simple model eye with a self-illuminated OLED screen as its retina. Second, we replaced the self-illuminated OLED screen with various objects printed on paper, and projected the system’s off-axis ring illumination onto the simple model eye for reconstruction. The first two experiments show the effect of the system’s illumination on the FOV. Last, we used a commercial model eye to assess the image quality of the diffuser funduscope with a physiologically realistic object.

3.1 Simple model eye with self-illuminating object

To analyze the FOV of the imaging system in the absence of illumination limitations, we measured a self-illuminated simple model eye (Fig. 4(a)). The simple model eye is composed of an object placed at the focal plane of a biconvex lens ($f = 25$ mm) and a 7.7 mm aperture, which provides a crude model of the human eye [35]. We used an OLED screen, placed at the focal plane, as a self-illuminated object. The PSF was measured first by displaying an on-axis point source (diameter 275 $\mu$m) on the screen, which produced the caustic pattern shown in Fig. 4(a). Next, the screen displayed an array of equally spaced point sources for calibrating the FOV, as shown in Fig. 4(b). Here, the green-outlined inset is the raw measurement and the right-hand side is the regularized reconstruction. We observe that in this setting the system is able to reconstruct over a 51.2$^\circ$ angular FOV. Figure 4(c) shows our reconstruction of an image of a retina with diabetic retinopathy [43] displayed on the screen, demonstrating a relatively wide FOV of 48.3$^\circ$. The directly reconstructed images are shown in Figs. 4(b)(i) and (c)(i). We observe an uneven intensity distribution in these images, possibly due to the mismatch between the curved model retina (Fig. 2) and the flat OLED screen used to display the object. To compensate for this effect, we applied an additional flat-field post-correction to the reconstructions, as shown in Figs. 4(b)(ii) and (c)(ii). Cutlines are taken across characteristic feature regions to compare the reconstruction with the original displayed object.
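
The text does not detail how the flat-field post-correction is implemented; for illustration, the sketch below assumes one common approach, dividing by a heavily blurred estimate of the shading (the blur width `sigma` is a hypothetical parameter, not from the paper):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def flat_field_correct(recon: np.ndarray, sigma: float = 80.0) -> np.ndarray:
    """Divide out a low-frequency shading estimate.

    Assumes the slowly varying intensity envelope can be estimated by
    a heavy Gaussian blur of the reconstruction itself; sigma is in
    pixels and would be tuned to the image size.
    """
    shading = gaussian_filter(recon.astype(float), sigma)
    corrected = recon / np.maximum(shading, 1e-6)
    return corrected / corrected.max()  # rescale to [0, 1]
```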


Fig. 4. Experimental FOV characterization of the imaging path of our prototype by using a self-illuminated retina. The design in (a) provides $\sim 50^\circ$ FOV, as demonstrated on (b) a dot-array object and (c) a retinal object. The raw diffuser image acquired from each object is shown in the green-outlined inset. The direct reconstructed images in (b)(i) and (c)(i) and the ones with flat field post-correction in (b)(ii) and (c)(ii) are compared. Cutlines are compared between the flat-field corrected reconstruction and the displayed object. The PCC, SSIM, and PSNR of each reconstructed image are computed against the original displayed object.


The reconstruction quality is further quantified by comparing the recovered image with the original displayed object using the Pearson correlation coefficient (PCC), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM). For the dot-array object in Fig. 4(b), the flat-field correction improves the reconstruction quality; in particular, the SSIM improves from 0.59 to 0.85. This can be understood as an improvement in the luminance agreement between points at large field angles. For the retinal image object in Fig. 4(c), the loss of contrast reduces the SSIM. Looking at the cutlines, although the vascular features are blurred in the reconstruction, the major features are captured, and both bright and dark edges are followed.
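
For reference, these three metrics can be computed with standard library routines; a sketch in Python, assuming both images are normalized to [0, 1]:

```python
import numpy as np
from scipy.stats import pearsonr
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def quality_metrics(recon: np.ndarray, truth: np.ndarray):
    """PCC, PSNR, and SSIM of a reconstruction against the displayed
    (or printed) ground-truth pattern, both scaled to [0, 1]."""
    pcc, _ = pearsonr(recon.ravel(), truth.ravel())
    psnr = peak_signal_noise_ratio(truth, recon, data_range=1.0)
    ssim = structural_similarity(truth, recon, data_range=1.0)
    return pcc, psnr, ssim
```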

3.2 Simple model eye with external illumination

Next, we tested diffuser imaging with an external illumination system. In this experiment, the simple model eye was again used, but we substituted the OLED screen with a printed paper object, as shown in Fig. 5(a). The same PSF measured from the on-axis point source on the OLED screen (from Section 3.1) was used for the reconstruction, as illustrated in Fig. 5(a). We first analyzed the FOV by imaging a printed ruler with both positive and negative contrast (Fig. 5(b)). The imaging system reconstructed an image of the ruler 15 mm in length, equivalent to a 33.4$^\circ$ angular FOV. We then printed and imaged the same retinal image displayed in Fig. 4(c). For this printed fundus, the FOV was moved to three locations by translating the model eye laterally. Due to the planar object and uneven illumination, a non-uniform intensity distribution was again observed in the direct reconstruction, and we applied the same flat-field correction to each reconstruction shown in Fig. 5(b) and (c). Cutlines are taken across characteristic features to compare the reconstruction with the corresponding printed pattern. Similar to our previous observation, aside from slightly reduced contrast, the major features are faithfully recovered. In particular, in each FOV of Fig. 5(c), vessels and other small features can clearly be resolved. The reconstruction quality is further inspected via PCC, PSNR, and SSIM. As highlighted by the comparison of the positive- and negative-contrast ruler patterns in Fig. 5(b), while the PCC remains similar, the PSNR and SSIM are much reduced for the positive-contrast case (Fig. 5(b)(ii)). This degradation indicates a potential challenge in reconstructing objects with sparse dark features on an otherwise uniform bright background, since such objects produce a less structured, low-contrast measurement, as evidenced by comparing the cutlines of the raw measurements between the two cases in Fig. 5(b). For the printed retina object, the cutlines again show that important clinical features such as the vessels, hemorrhages, and optic disk are captured by the reconstruction at all three FOVs explored in Fig. 5(c).
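
The quoted angular FOV follows directly from the ruler extent and the model-eye geometry; a quick check in Python, assuming the $f = 25$ mm model-eye lens:

```python
import math

# An object of extent L at the focal plane of the model-eye lens
# subtends a field angle of atan((L/2)/f) on either side of the axis.
f_eye = 25.0  # mm, bi-convex model-eye lens
L = 15.0      # mm, reconstructed ruler length
fov = 2 * math.degrees(math.atan((L / 2) / f_eye))
print(f"angular FOV = {fov:.1f} deg")  # -> 33.4 deg, as reported
```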


Fig. 5. Experimental FOV characterization of external illumination and imaging. (a) Test objects are printed on paper and placed at the focal plane of the simple model eye. The PSF used is the same as was acquired previously from the OLED screen. (b,c) Our design provides $\sim$33$^\circ$ FOV, as demonstrated with (b) ruler patterns with both positive and negative contrast, and (c) a retinal image. All reconstructions are flat-field corrected. Cutlines of the raw images are shown to compare the measurement contrast from the positive and negative contrasted ruler patterns. Cutlines are compared between the reconstruction and the printed pattern. The PCC, SSIM, and PSNR of each reconstructed image are computed against the original printed pattern.


3.3 Commercial model eye with external illumination

To investigate the performance of our combined imaging and illumination system in a physiologically realistic scenario, we next imaged a commercial model eye (HEINE Ophthalmoscope Trainer), which has realistic retinal structures and allows varying amounts of refractive error. Figure 6(a) shows the overall procedure of this experiment. First, raw data is acquired (green inset) without refractive error (0D) and reconstructed using the same PSF calibrated in the previous experiments (from Fig. 4(a)). Next, to assess the robustness of the system to refractive error, we tested two scenarios: (1) reconstruction of fundus images with varying refractive errors using a single emmetropic PSF acquired at 0D refractive error, and (2) reconstruction of the fundus of an emmetropic model eye using ametropic PSFs acquired at a range of different retinal positions. Figure 6(b) shows that for refractive errors within -4D to +4D, no significant degradation of image quality occurs. In Fig. 6(c), the distance at which the PSF was acquired was varied by $\pm$4 mm. This simulates PSFs acquired in eyes that are too short or too long, thus testing the sensitivity of the reconstruction to changes in the PSF that could result if calibration were done in an eye with refractive error. Using a thin-lens approximation, this spans refractive errors from -5.5D to +7.6D. The reconstruction results indicate that even when the refractive error is as high as -5.5D or +7.6D, only a slight degradation of image quality in contrast and resolution is observed. All reconstructed images are flat-field corrected to compensate for uneven illumination. A cutline across the optic disk region of each reconstructed image is shown, indicating that consistent contrast is achieved across the defocus range investigated.
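
The quoted diopter values can be reproduced with a thin-lens calculation; a sketch assuming the $f = 25$ mm lens of the simple model eye used for PSF acquisition:

```python
import math

# Thin-lens conversion of retinal axial displacement to refractive
# error: a retina displaced dz beyond the focal plane has a far point
# at vergence K = 1/(f + dz) - 1/f (f, dz in meters). An eye that is
# too long (dz > 0) is myopic (K < 0); too short is hyperopic.
f = 0.025  # m, model-eye lens focal length
for dz_mm in (+4.0, -4.0):
    K = 1 / (f + dz_mm * 1e-3) - 1 / f
    print(f"retina displaced {dz_mm:+.0f} mm -> {K:+.1f} D")
# -> +4 mm: -5.5 D (myopic);  -4 mm: +7.6 D (hyperopic)
```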


Fig. 6. Diffuser imaging of a commercial model eye. (a) Left: The data acquisition and PSF calibration are individually performed on different setups. Right: The reconstruction from both the measurement and the PSF taken with no refractive error (0D). (b) The reconstruction results using fundus measurements under different refractive errors with a 0D PSF. (c) The reconstruction results of a 0D fundus measurement using aberrated PSFs. Cutlines are shown in each reconstructed image and demonstrate consistent contrast under different refractive errors or defocused PSFs.


4. Discussion

4.1 Self-illuminated object

The initial set of experiments, with a self-illuminated OLED screen used as the retina in a simple model eye, provided insight into the fundamental operation and potential performance of the diffuser-based funduscope in the absence of illumination constraints. For initial calibration and subsequent deconvolution, the system PSF was measured by displaying an on-axis point source on the OLED screen. As seen in Fig. 4(a), the PSF is a highly structured caustic pattern, which is the fundamental signal used in both DiffuserCam [26] and diffuser-based ocular aberrometry [35].

Next, a point-source array was displayed on the screen (Fig. 4(b)), and the acquired image (green inset) was deconvolved with the system PSF to reconstruct the object. From this result, we determine that the imaging path is able to provide a 51$^{\circ }$ angular FOV. Interestingly, when looking at the acquired signal before reconstruction (Fig. 4(b), green inset), a structured pattern is observed, consistent with the expected appearance of the PSF convolved with a point array.

When the image of a retina with diabetic retinopathy is displayed on the OLED screen (Fig. 4(c)), a similar 48.3$^{\circ }$ FOV is reconstructed. Many important features of the retina are visible, including the optic disk and healthy vasculature, as well as retinal scarring and hemorrhage. The detection FOV is similar to the 45$^{\circ }$ FOV typically achieved by non-mydriatic fundus photography [44]. The reconstructed structures at large field angles are more blurred and have lower contrast, due to the distortion introduced by imaging a flat screen through a single bi-convex lens. Rays from these field positions do not exit the simple model eye lens parallel, which changes their PSF, so the PSF is no longer spatially invariant. When our deconvolution algorithm is applied with a fixed PSF, the reconstruction quality at high field angles therefore decreases.

4.2 External illumination with a simple model eye

In the next set of experiments, we used external illumination via an LED ring and demonstrated simple model eye reconstructions of test objects printed on paper (Fig. 5). We first used printed rulings of known size for calibration (Fig. 5(b)). From these objects, we observed a high-contrast reconstruction over a 33.4$^{\circ }$ FOV. Comparing these results with the 51$^{\circ }$ FOV demonstrated with the OLED retina (Fig. 4(b)), it is apparent that the current system FOV is limited by the extent of the LED illumination. This can be mitigated by improving the illumination numerical aperture. Using the same two printed ruler patterns in Fig. 5(b) with opposite contrast demonstrates the impact of signal sparsity on measurement contrast and reconstruction quality. The sparse-printed rulings (white rulings on a black background, left) were captured with higher contrast and reconstructed with better quality than their counterpart, the dense-printed rulings (black rulings on a white background, right).

Lastly, we printed the same retina pattern shown on the OLED screen (Fig. 4(c)) for imaging and reconstruction using the LED ring illumination (Fig. 5(c)). Again, we observe a FOV similar to that of the ruler and smaller than that obtained with the displayed retina on the OLED screen, further indicating that our FOV is limited by the illumination extent. Despite this more limited FOV, we still observe the same clinical retinal features reconstructed as in the self-illuminated case.

4.3 Commercial model eye reconstruction

In the last set of experiments, we replaced the simple model eye with a commercial model eye to test a more physiologically realistic object in the presence of refractive error and aberrated PSFs (Fig. 6). Reconstructions of the commercial model eye fundus were performed with LED illumination and 0D refractive error, using the same PSF acquired from the on-axis point source in the calibration step (Fig. 4(a)). The initial reconstruction is shown in Fig. 6(a), where the optic disk and blood vessels are clearly visible. A similar angular FOV was observed with the commercial model eye as was previously demonstrated with the simple model eye. The FOV, though limited by the extent of illumination provided by the LED ring, is still greater than that of conventional direct ophthalmoscopy [44].

After reconstructing an initial image of the emmetropic commercial model eye fundus, refractive errors between -4D and +4D were introduced (Fig. 6(b)). Despite the refractive error, the diffuser funduscope still produced fundus images of similar quality to the emmetropic case. Importantly, these images were reconstructed using the same PSF as applied in the 0D case and in the reconstructions of Fig. 5; no re-calibration was required.

Finally, we evaluated the image reconstruction quality produced if the PSF were acquired from locations other than the focal point of the model eye lens (Fig. 6(c)). PSFs were acquired as the OLED screen was translated from -4 mm to +4 mm, simulating PSFs acquired from eyes that are too long (myopia) and too short (hyperopia), respectively. Next, the raw data acquired in the 0D refractive error case was reconstructed with these varied PSFs. Again, the fundus was reconstructed successfully over a similar 33$^{\circ }$ FOV, though some degradation of contrast and resolution begins to appear at the extreme refractive errors.

Together, these results demonstrate that the diffuser funduscope is robust to refractive error over a range of myopia and hyperopia large enough to cover a substantial range of clinical cases [45]. This is because the 0.5$^{\circ }$ holographic diffuser used in this study can be modeled as a random array of irregular lenslets, each with a very large f/# and therefore a large depth-of-focus. Further, the reconstruction quality is similar to that achieved by other computational ophthalmoscope techniques [14]. Overall, we believe our imaging system is robust to refractive error within a reasonable range and shows promise for improving access to medical diagnosis of retinal disease.

4.4 Special considerations in diffuser-based computational imaging

Diffuser imaging follows a “non-focal” imaging geometry: the PSF spreads over an extended area, so each pixel on the image sensor measures mixed signals from multiple object points. This generally reduces the image contrast of the raw measurements compared to traditional “focused” imaging systems. Consequently, image sensors with low read noise, large well capacity, and high dynamic range are desired to better capture the encoded information. In our prototype, we chose an sCMOS camera that provides <1 e$^-$ median read noise, 23000 e$^-$ full-well capacity, and 87 dB dynamic range. Image quality can be further improved with a better image sensor, as shown in [19]. The image contrast can also be significantly improved by using a properly designed microlens array [24] or a customized aperiodic microlens array [27]. Our future work will investigate these strategies with the additional considerations of keeping the platform low-cost and compatible with ocular aberrometry.
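
As a quick consistency check on the quoted sensor specifications (assuming the usual engineering definition of dynamic range as the ratio of full-well capacity to read noise):

```python
import math

full_well = 23000   # e-, full-well capacity
read_noise = 1.0    # e-, median read noise (the text quotes <1 e-)
dr_db = 20 * math.log10(full_well / read_noise)
print(f"dynamic range ~ {dr_db:.0f} dB")  # -> 87 dB, as specified
```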

Our reconstruction was implemented with the Tikhonov regularization algorithm, which has the advantage of providing a closed-form solution that is computationally efficient and fast. In our implementation, reconstructing a 1080$\times$1920-pixel image using MATLAB on a Mac OS machine takes about 0.26 seconds. However, this L2 regularization strategy suffers from a fundamental trade-off between reconstructed resolution, image contrast, and ringing artifacts [46]. This limits the reconstruction quality, especially for complex objects (e.g., Fig. 4(c)). The reconstruction quality is further affected by the presence of a strong background (e.g., Fig. 5(b)). It has been shown that these limitations can be alleviated by incorporating advanced image priors, such as sparsity [19] and priors learned by deep neural networks [47], using an iterative algorithm. However, these algorithms typically require significantly larger computational cost and longer execution times. Recently, end-to-end, task-specific deep learning models have emerged as an appealing solution for achieving both high-quality image reconstruction and highly efficient inference [48–51]. Given the recent achievements in automatic analysis of retinal images [52,53], combining our diffuser funduscopy with data-driven deep learning models may be a promising future direction to pursue.

5. Conclusion

Visual impairment is a pressing global health concern, and many of its causes are avoidable. This problem can be addressed by the development and distribution of robust, low-cost diagnostic devices that require minimal training to operate. In this paper, we demonstrated one such approach to retinal imaging by developing and characterizing a compact diffuser funduscope. We demonstrated high-quality funduscopy of a model eye that is robust to a large range of refractive error. Further, the point-spread-function used for deconvolution is a caustic pattern produced by the same holographic diffuser with which ocular aberrometry has previously been demonstrated. In future work, we envision these two techniques working synergistically, allowing simultaneous measurement of refractive error and funduscopy in one compact, inexpensive device.

Funding

Johns Hopkins Medical Scientist Training Program; National Institute of Biomedical Imaging and Bioengineering (R21 EB024700); National Science Foundation (1711156, 1813848).

Acknowledgments

We thank Shivang R. Dave and Ahhyun Stephanie Nam from PlenOptika, Inc., for fruitful discussions on funduscopy.

Disclosures

GNM and NJD are listed as co-inventors on a provisional patent application assigned to Johns Hopkins University that is related to the technologies described in this article. They may be entitled to future royalties from this intellectual property.

References

1. N. J. Durr, S. R. Dave, E. Lage, S. Marcos, F. Thorn, and D. Lim, “From Unseen to Seen: Tackling the Global Burden of Uncorrected Refractive Errors,” Annu. Rev. Biomed. Eng. 16(1), 131–153 (2014).

2. S. Resnikoff, W. Felch, T.-M. Gauthier, and B. Spivey, “The number of ophthalmologists in practice and training worldwide: a growing gap despite more than 200,000 practitioners,” Br. J. Ophthalmol. 96(6), 783–787 (2012).

3. A. E. Fletcher, “Low Uptake of Eye Services in Rural India,” Arch. Ophthalmol. 117(10), 1393 (1999).

4. G. N. Rao, “An infrastructure model for the implementation of VISION 2020: The Right to Sight,” Can. J. Ophthalmol. 39(6), 589–590 (2004).

5. V. K. Rangan and R. D. Thulasiraj, “Making Sight Affordable - Innovations Case Narrative: The Aravind Eye Care System,” Innov. Technol. Governance, Glob. 2(4), 35–49 (2007).

6. G. N. Rao, R. C. Khanna, S. M. Athota, V. Rajshekar, and P. K. Rani, “Integrated model of primary and secondary eye care for underserved rural areas: The L V Prasad Eye Institute experience,” Indian J. Ophthalmol. 60(5), 396–400 (2012).

7. R. K. Lord, V. A. Shah, A. N. San Filippo, and R. Krishna, “Novel Uses of Smartphones in Ophthalmology,” Ophthalmology 117(6), 1274–1274.e3 (2010).

8. M. E. Giardini, I. A. Livingstone, S. Jordan, N. M. Bolster, T. Peto, M. Burton, and A. Bastawrous, “A smartphone based ophthalmoscope,” in 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) (IEEE, 2014), pp. 2177–2180.

9. S. Mamtora, M. T. Sandinha, A. Ajith, A. Song, and D. H. Steel, “Smart phone ophthalmoscopy: a potential replacement for the direct ophthalmoscope,” Eye 32(11), 1766–1771 (2018).

10. T. N. Kim, F. Myers, C. Reber, P. J. Loury, P. Loumou, D. Webster, C. Echanique, P. Li, J. R. Davila, R. N. Maamari, N. A. Switz, J. Keenan, M. A. Woodward, Y. M. Paulus, T. Margolis, and D. A. Fletcher, “A smartphone-based tool for rapid, portable, and automated wide-field retinal imaging,” Transl. Vis. Sci. Technol. 7(5), 21 (2018).

11. M. Arima, T. Majima, S. Tsukamoto, T. Hara, I. Wada, S. Nakao, and K. H. Sonoda, “The utility of a new fundus camera using a portable slit lamp combined with a smartphone,” Acta Ophthalmol. 97(5), e814–e816 (2019).

12. K. Tran, T. A. Mendel, K. L. Holbrook, and P. A. Yates, “Construction of an inexpensive, hand-held fundus camera through modification of a consumer "point-and-shoot" camera,” Invest. Ophthalmol. Visual Sci. 53(12), 7600–7607 (2012).

13. B. Y. Shen and S. Mukai, “A Portable, Inexpensive, Nonmydriatic Fundus Camera Based on the Raspberry Pi® Computer,” J. Ophthalmol. 2017, 1–5 (2017).

14. B. Lochocki, A. Gambín, S. Manzanera, E. Irles, E. Tajahuerce, J. Lancis, and P. Artal, “Single pixel camera ophthalmoscope,” Optica 3(10), 1056 (2016).

15. N. J. Durr, S. R. Dave, F. A. Vera-Diaz, D. Lim, C. Dorronsoro, S. Marcos, F. Thorn, and E. Lage, “Design and Clinical Evaluation of a Handheld Wavefront Autorefractor,” Optom. Vis. Sci. 92(12), 1140–1147 (2015).

16. N. J. Durr, S. R. Dave, D. Lim, S. Joseph, T. D. Ravilla, and E. Lage, “Quality of eyeglass prescriptions from a low-cost wavefront autorefractor evaluated in rural India: Results of a 708-participant field study,” BMJ Open Ophthalmol. 4(1), e000225 (2019).

17. K. J. Ciuffreda and M. Rosenfield, “Evaluation of the SVOne: A Handheld, Smartphone-Based Autorefractor,” Optom. Vis. Sci. 92(12), 1133–1139 (2015).

18. R. Bourne, S. Resnikoff, and P. Ackland, “GBVI - Global Cause Estimates,” Tech. rep., International Agency for the Prevention of Blindness (2020).

19. N. Antipa, S. Necula, R. Ng, and L. Waller, “Single-shot diffuser-encoded light field imaging,” in 2016 IEEE International Conference on Computational Photography (ICCP) (IEEE, 2016), pp. 1–11.

20. B. C. Platt and R. Shack, “History and principles of Shack-Hartmann wavefront sensing,” J. Refract. Surg. 17(5), S573–S577 (2001).

21. R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Computer Science Technical Report CSTR 2, 1–11 (2005).

22. H. Li, C. Guo, D. Kim-Holzapfel, W. Li, Y. Altshuller, B. Schroeder, W. Liu, Y. Meng, J. B. French, K.-I. Takamaru, M. A. Frohman, and S. Jia, “Fast, volumetric live-cell imaging using high-resolution light-field microscopy,” Biomed. Opt. Express 10(1), 29–49 (2019).

23. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light field microscopy,” in ACM SIGGRAPH 2006 Papers (Association for Computing Machinery, 2006), pp. 924–934.

24. Y. Xue, I. G. Davison, D. A. Boas, and L. Tian, “Single-shot 3D widefield fluorescence imaging with a computational miniature mesoscope,” arXiv:2003.11994 (2020).

25. Y. Chen, B. Xiong, Y. Xue, X. Jin, J. Greene, and L. Tian, “Design of a high-resolution light field miniscope for volumetric imaging in scattering tissue,” Biomed. Opt. Express 11(3), 1662–1678 (2020).

26. N. Antipa, G. Kuo, R. Heckel, B. Mildenhall, E. Bostan, R. Ng, and L. Waller, “DiffuserCam: lensless single-exposure 3D imaging,” Optica 5(1), 1–9 (2018).

27. G. Kuo, F. L. Liu, I. Grossrubatscher, R. Ng, and L. Waller, “On-chip fluorescence microscopy with a random microlens diffuser,” Opt. Express 28(6), 8384–8399 (2020).

28. N. Antipa, P. Oare, E. Bostan, R. Ng, and L. Waller, “Video from stills: Lensless imaging with rolling shutter,” in 2019 IEEE International Conference on Computational Photography (ICCP) (IEEE, 2019), pp. 1–8.

29. O. Cossairt, C. Zhou, and S. Nayar, “Diffusion coded photography for extended depth of field,” ACM Trans. Graph. 29(4), 1–10 (2010).

30. L. Lu, J. Sun, J. Zhang, Y. Fan, Q. Chen, and C. Zuo, “Quantitative phase imaging camera with a weak diffuser,” Front. Phys. 7, 1–11 (2019).

31. C. Wang, Q. Fu, X. Dun, and W. Heidrich, “Modeling classical wavefront sensors,” Opt. Express 28(4), 5273 (2020).

32. M. R. Teague, “Deterministic phase retrieval: a Green’s function solution,” J. Opt. Soc. Am. 73(11), 1434–1441 (1983).

33. J. C. Petruccelli, L. Tian, and G. Barbastathis, “The transport of intensity equation for optical path length recovery using partially coherent illumination,” Opt. Express 21(12), 14430–14441 (2013).

34. P. Berto, H. Rigneault, and M. Guillon, “Wavefront sensing with a thin diffuser,” Opt. Lett. 42(24), 5117 (2017).

35. G. N. McKay, F. Mahmood, and N. J. Durr, “Large dynamic range autorefraction with a low-cost diffuser wavefront sensor,” Biomed. Opt. Express 10(4), 1718 (2019).

36. J. N. Mait, G. W. Euliss, and R. A. Athale, “Computational imaging,” Adv. Opt. Photonics 10(2), 409–483 (2018).

37. J. K. Adams, V. Boominathan, B. W. Avants, D. G. Vercosa, F. Ye, R. G. Baraniuk, J. T. Robinson, and A. Veeraraghavan, “Single-frame 3D fluorescence microscopy with ultraminiature lensless FlatScope,” Sci. Adv. 3(12), e1701548 (2017).

38. J. Shin, D. N. Tran, J. R. Stroud, S. Chin, T. D. Tran, and M. A. Foster, “A minimally invasive lens-free computational microendoscope,” Sci. Adv. 5(12), eaaw5595 (2019).

39. E. McLeod and A. Ozcan, “Unconventional methods of imaging: computational microscopy and compact implementations,” Rep. Prog. Phys. 79(7), 076001 (2016).

40. J. Tanida, T. Kumagai, K. Yamada, S. Miyatake, K. Ishida, T. Morimoto, N. Kondou, D. Miyazaki, and Y. Ichioka, “Thin observation module by bound optics (TOMBO): concept and experimental verification,” Appl. Opt. 40(11), 1806–1813 (2001).

41. A. Stern and B. Javidi, “Three-dimensional image sensing, visualization, and processing using integral imaging,” Proc. IEEE 94(3), 591–607 (2006).

42. W. Mandler, “Design of basic double gauss lenses,” in 1980 International Lens Design Conference, vol. 237 (International Society for Optics and Photonics, 1980), pp. 222–233.

43. A. Budai, R. Bock, A. Maier, J. Hornegger, and G. Michelson, “Robust vessel segmentation in fundus images,” Int. J. Biomed. Imaging 2013, 1–11 (2013).

44. D. D. Mackay, P. S. Garza, B. B. Bruce, N. J. Newman, and V. Biousse, “The demise of direct ophthalmoscopy,” Neurol. Clin. Pract. 5(2), 150–157 (2015).

45. J. Schwiegerling, Field Guide to Visual and Ophthalmic Optics (SPIE, 2004).

46. M. Bertero and P. Boccacci, Introduction to Inverse Problems in Imaging (CRC Press, 1998).

47. K. Monakhova, J. Yurtsever, G. Kuo, N. Antipa, K. Yanny, and L. Waller, “Learned reconstructions for practical mask-based lensless imaging,” Opt. Express 27(20), 28075–28090 (2019).

48. G. Barbastathis, A. Ozcan, and G. Situ, “On the use of deep learning for computational imaging,” Optica 6(8), 921–943 (2019).

49. A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” Optica 4(9), 1117–1125 (2017).

50. Y. Li, Y. Xue, and L. Tian, “Deep speckle correlation: a deep learning approach toward scalable imaging through scattering media,” Optica 5(10), 1181–1190 (2018).

51. Y. Xue, S. Cheng, Y. Li, and L. Tian, “Reliable deep-learning-based phase imaging with uncertainty quantification,” Optica 6(5), 618–629 (2019).

52. V. Gulshan, L. Peng, M. Coram, M. C. Stumpe, D. Wu, A. Narayanaswamy, S. Venugopalan, K. Widner, T. Madams, J. Cuadros, R. Kim, R. Raman, P. C. Nelson, J. L. Mega, and D. R. Webster, “Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs,” JAMA 316(22), 2402–2410 (2016).

53. R. Gargeya and T. Leng, “Automated identification of diabetic retinopathy using deep learning,” Ophthalmology 124(7), 962–969 (2017).
