Optica Publishing Group

Utilizing minicomputer technology for low-cost photorefraction: a feasibility study

Open Access

Abstract

Eccentric photorefraction is an objective technique to determine the refractive errors of the eye. To address the rise in prevalence of visual impairment, especially in rural areas, a minicomputer-based low-cost infrared photorefractor was developed using off-the-shelf hardware components. Clinical validation revealed that the developed infrared photorefractor demonstrated a linear working range between +4.0 D and −6.0 D at 50 cm. Further, measurement of astigmatism from human eye showed absolute error for cylinder of 0.3 D and high correlation for axis assessment. To conclude, feasibility was shown for a low-cost, portable and low-power driven stand-alone device to objectively determine refractive errors, showing potential for screening applications. The developed photorefractor creates a new avenue for telemedicine for ophthalmic measurements.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Uncorrected refractive error is still the most prevalent cause [1] of blindness and of moderate and severe visual impairment (MSVI), especially in rural areas, where access to ocular health care is limited [2]. This limited access to ophthalmological or optometric services is caused by a lack of trained personnel, but also by a shortage of low-cost devices that could be used without special training. Screening for amblyopia risk factors in early childhood, which are mainly refractive error [3] and strabismus [4], followed by appropriate treatment, is expected to significantly reduce the prevalence and severity of amblyopia [5]. Both factors can be easily screened using eccentric photorefraction. However, the use of this technique is not limited to childhood screening. It can further be used to assess the accommodative lag [6], e.g. for myopia progression management, or as a tool for telemedicine approaches to mobile refraction screening in healthy participants [7] and in uncooperative patients [8].

The first photographic refraction of the eye was developed at the MIT Research Laboratory of Electronics in 1962 by the Howland brothers [9]. Two photorefraction methods, namely isotropic photorefraction and orthogonal photorefraction, were described to measure the refraction of the eye [10]. In 1979, Kaakinen and Tommila invented the photoretinoscopy-based method of photorefraction, where a white light source was positioned eccentrically to a camera sensor [11]. The optics of eccentric photorefraction were described in 1985 by Howland et al. and Bobier et al. independently [12,13]. The first use of IR-LEDs for photorefraction was demonstrated by Schaeffel et al. in chickens [14]. Subsequently, it was shown that eccentric photorefraction could be used continuously in humans with a video camera, without causing pupil constriction, due to the use of IR-LEDs [15]. In eccentric photorefraction, the flash of the light source produces a retinal reflection of the IR light, which is evaluated in the subject’s pupil plane. Early technical setups [15], in which multiple light sources at different eccentricities were used, showed a sensitivity of $\pm 0.3$ D over a range of refractive errors of $\pm 5.0$ D, revealing the feasibility of eccentric photorefraction for vision screening. This led to the development of commercially available photoscreeners, which achieve good sensitivity for the detection of amblyopia risk factors in children, such as 80 % for the SureSight Vision Screener [16] or 98 % for the PlusOptix device [17]. Recent developments made use of novel smartphone technologies and implemented photorefraction on mobile devices [18]. However, modern smartphone models are equipped with infrared-blocking filters and exhibit increasing eccentricities between the flash light and the camera center [19]. Furthermore, the flash of smartphones emits in the visible light spectrum, and its broad spectral properties result in a limited measurement range for photorefraction.

Therefore, the purpose of the current study was to develop a compact low-cost infrared photorefractor, which is suitable as an attachment for mobile application and provides a good range of determinable refractive errors with high sensitivity.

2. Material and methods

An infrared photorefractor consisting of low-cost hardware components and customized software was built. The combination of off-the-shelf hardware parts and publicly available software libraries made it possible to measure refractive errors, including astigmatism. The following sections describe the development and provide results to show technical feasibility.

2.1 Hardware

An infrared photorefractor was developed using off-the-shelf components such as a single-board minicomputer (Raspberry Pi 3B+, Raspberry Pi Foundation, Cambridge, United Kingdom), a 5-inch TFT-display (Raspberry Pi 5TD, Simac Electronics GmbH, Neukirchen-Vluyn, Germany), a 5000 mAh power bank (S-5000, Intenso International GmbH, Vechta, Germany), IR-LEDs (SFH-4356, OSRAM Opto Semiconductors GmbH, Regensburg, Germany) and an infrared-sensitive camera (Pi-camera NOIR v2.0, Raspberry Pi Foundation, Cambridge, United Kingdom). An external memory card (Micro-SD 64 GB, Sandisk, California, United States) was used for installing the Raspberry Pi operating system and for data storage. A technical drawing of the dimensions of the camera and the developed photorefractor can be seen in Fig. 1.


Fig. 1. A. Schematic diagram of the arrangement of IR-LEDs in the developed eccentric infrared photorefractor. There are three IR-LED segments at an angle of $60^{\circ }$ each. Each segment has nine IR-LEDs. The camera is placed at the center denoted by a cross in the setup. B. Developed eccentric infrared photorefractor. The camera is placed at the center behind an IR passing filter.


To overcome the limitations of using smartphone cameras, the Pi-camera NOIR (No Infrared) v2.0 was used, as it can capture images and videos in the infrared range. In contrast to smartphone cameras, no IR-blocking filter is mounted in front of the lens. The use of IR-LEDs for illumination in photorefraction is beneficial, as they do not affect the pupil size and remove the influence of longitudinal chromatic aberration [20]. The camera uses an 8-megapixel sensor (IMX219, Sony Corporation, Tokyo, Japan) and can capture videos with a resolution of up to 1080p at 30 frames per second (fps). It is connected to the minicomputer via a camera serial interface (CSI) cable. Additionally, an infrared-passing filter ($\lambda _{cutoff}\,=\,820$ nm) blocking visible wavelengths was placed in front of the camera, which avoided changes in scene illumination in the image due to changing ambient light. The ambient illuminance was < 50 lux during measurements.

The use of the Pi-camera NOIR v2.0 makes it possible to capture images in the infrared spectrum. To provide controlled illumination, IR-LEDs were used and synchronized with the image capture using the Python interface of the minicomputer. The IR-LEDs had a peak wavelength of 860 nm with a half angle of $20^{\circ }$ [21]. A forward current of 100 mA was used to operate the IR-LEDs. They were arranged in arrays as required for an eccentric photorefractor, as proposed by Gekeler et al. (1997) [22]. The IR-LEDs were arranged in 3 arrays with 9 LEDs each, as shown in Fig. 1. The rows of LEDs in each array were placed at eccentricities of 7 mm, 11 mm and 15 mm from the optical center of the camera. The center-to-center distance between the LEDs in an array was 4 mm. The three LED arrays were placed at $0^{\circ }$, $60^{\circ }$ and $120^{\circ }$ with respect to the vertical axis of the camera to measure the sphere, cylinder and axis of the refractive error of the eye. GPIO (general purpose input/output) pins available on the minicomputer were used to digitally control the IR-LEDs.

The photobiological safety calculations for the IR-LEDs were performed as per International Electrotechnical Commission (IEC) standard 62471 [23]. For the corneal hazard exposure limits of the eye, the maximum allowed irradiance $E_{IR}$ is 100 W m$^{-2}$ for exposure times > 1000 seconds in the wavelength range 780 nm to 3000 nm [23]. The total irradiance $E_{IR}$ of the LEDs used was 0.36 W m$^{-2}$, resulting in a safety factor of 277 [23]. The retinal thermal hazard exposure limit $L_{IR}$ was calculated as 1.0 × 10$^{7}$ W m$^{-2}$ sr$^{-1}$ as per the equation from the IEC 62471 standard [23]. The radiance of the LED, $L_{IR}$, was measured as 4.8 × 10$^{3}$ W m$^{-2}$ sr$^{-1}$. These calculations assume a perfectly sustained position of the LEDs with a very long exposure time of more than 1000 seconds, while the total measurement with the infrared photorefractor would last less than 100 seconds. The radiant energy of the LED is a factor of 21 below the safety limit for retinal thermal hazard [23].

2.2 Software

The standard operating system (Raspbian, installed via NOOBS - New Out Of the Box Software, Raspberry Pi Foundation, Cambridge, United Kingdom) was used. The programming language Python 3 (Python Software Foundation, Delaware, United States) was employed together with the Open Source Computer Vision (OpenCV) library for the software development of the infrared photorefractor [24].

The software workflow operating the infrared photorefractor was implemented in a step-wise manner. The components of the software implementation are shown in Fig. 2.


Fig. 2. Components of software implementation of infrared photorefractor.


2.2.1 IR-LED control

The first step of the software implementation is to control the sequence and the timing with which each IR-LED array segment is powered. The output of the corresponding GPIO pin was set high or low to switch the particular IR-LED array on or off. To achieve the best illumination setting for the recording, the brightness level of each IR-LED array was controlled by pulse width modulation (PWM). The pins were initialized at a PWM frequency of 100 Hz. To control the brightness, duty cycles of 30 %, 60 % and 90 % were applied to the GPIO outputs. The spectral irradiance at distances of 5 cm and 50 cm from the photorefractor was measured using a spectrometer (USB4000 UV-VIS, Ocean Optics Inc., Florida, USA).
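The PWM-based segment control described above can be sketched as follows. This is a minimal sketch, not the device's actual code: the BCM pin numbers (17, 27, 22) for the three LED segments and the NPN-transistor wiring are assumptions for illustration.

```python
DUTY_CYCLES = (30, 60, 90)   # brightness levels in percent, as used in the study
PWM_FREQUENCY_HZ = 100       # PWM base frequency from the text

def validate_duty_cycle(duty_cycle):
    """Restrict a requested duty cycle to the three recording levels."""
    if duty_cycle not in DUTY_CYCLES:
        raise ValueError(f"duty cycle must be one of {DUTY_CYCLES}, got {duty_cycle}")
    return duty_cycle

def run_led_sequence():
    """Hardware part: light each segment at each brightness level in turn.

    Runs only on a Raspberry Pi with RPi.GPIO installed; pin numbers
    are illustrative assumptions.
    """
    import time
    import RPi.GPIO as GPIO

    segment_pins = (17, 27, 22)          # assumed BCM pins for the 3 segments
    GPIO.setmode(GPIO.BCM)
    pwms = []
    for pin in segment_pins:
        GPIO.setup(pin, GPIO.OUT)
        pwms.append(GPIO.PWM(pin, PWM_FREQUENCY_HZ))

    try:
        for pwm in pwms:
            for duty in DUTY_CYCLES:
                pwm.start(validate_duty_cycle(duty))
                time.sleep(0.5)          # hold while frames are captured
                pwm.stop()
    finally:
        GPIO.cleanup()

# On the Raspberry Pi, the sequence would be started with:
# run_led_sequence()
```

Because the duty cycle directly scales the mean LED power, stepping through these three levels per segment yields the nine illumination conditions from which the best-exposed recording can be selected.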

2.2.2 Video acquisition

The video was acquired using a Python script that controlled the settings of the camera. Synchronized with the recording, the different LED segments were turned on sequentially, each at the three brightness levels separately. To control the camera settings, the PiCamera library was used, which provides an object-oriented interface to the underlying libmmal library. The video was acquired with the camera parameters shown in Table 1, which were found to be advantageous for the purpose of photorefraction.


Table 1. Parameter settings for Raspberry Pi-camera NOIR v2.0

The shutter speed, ISO and camera gains were set to obtain the desired exposure, ensuring that the brightness intensity of the pupils lies in the range of 100 to 150 gray-scale values, corresponding to the linear range of the camera. The automatic exposure control of the camera software was turned off. Similarly, disabling the automatic white balance was necessary for the infrared illumination of the camera. These manual parameter settings were crucial to obtain optimally exposed images for a particular LED brightness level and to enable the use of photorefraction.

Since the recording needs to be synchronized between the illumination source and the camera, the latency between the two needs to be minimized. Using the raspivid tool from the Linux terminal of the Raspberry Pi together with LED control from a Python script does not allow low-latency synchronization; a latency of about 2 seconds between the switching of the LEDs and the video recording was present. This problem was resolved by using the PiCamera library in Python to access the camera system directly [25]. Capturing videos with the PiCamera library resulted in synchronized illumination and video capture with a latency of less than 0.5 seconds, but was not free of limitations. The analog and digital gains of the camera sensor are not accessible through the PiCamera library [25], although they can be accessed directly with the raspivid command on the Linux terminal. These parameters were crucial for controlling the exposure settings of the infrared photorefractor. The gains could be accessed only through the MMAL (Multi-Media Abstraction Layer) API (Application Programming Interface) [25,26]. Specific functions were implemented using the MMAL API to access the gain parameters of the camera. This enabled controlling the exposure settings completely from within the Python script. Thus, it became possible to use a single programming language with all the necessary functionalities from the Linux terminal with minimum latency for the developed infrared photorefractor.
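The MMAL-level gain access mentioned above can be sketched as follows. This follows a community recipe circulated for the PiCamera library rather than an official API: the parameter IDs and the use of the private `camera._camera.control._port` attribute are assumptions that may need adjustment for other firmware and library versions.

```python
def set_camera_gain(camera, analog_gain=None, digital_gain=None):
    """Fix the sensor gains of a picamera.PiCamera via the MMAL layer.

    Community recipe, not an official picamera API: it pokes private
    attributes and MMAL parameter IDs (assumed here to be the camera
    parameter group offsets 0x59/0x5A), so treat it as a sketch.
    """
    from picamera import mmal
    from picamera.mmalobj import to_rational

    # Assumed parameter IDs for analog/digital gain from the MMAL headers.
    ANALOG_GAIN = mmal.MMAL_PARAMETER_GROUP_CAMERA + 0x59
    DIGITAL_GAIN = mmal.MMAL_PARAMETER_GROUP_CAMERA + 0x5A

    port = camera._camera.control._port
    if analog_gain is not None:
        mmal.mmal_port_parameter_set_rational(port, ANALOG_GAIN,
                                              to_rational(analog_gain))
    if digital_gain is not None:
        mmal.mmal_port_parameter_set_rational(port, DIGITAL_GAIN,
                                              to_rational(digital_gain))
```

Together with `exposure_mode = 'off'` and a fixed `shutter_speed`, fixing both gains keeps the pupil brightness in the camera's linear range across frames.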

2.2.3 Face and eye detection

The synchronization between infrared illumination and camera recording enables video capture at a 50 cm distance to a participant's face. In order to reliably detect the pupils in an image, a well-established machine learning approach was used. A facial landmark detector implemented in Dlib produces 68 (x, y)-coordinates of facial landmarks in an image. These coordinates map to specific facial features such as the face margins, eyes, nose and eyebrows. A shape predictor pre-trained on the iBUG 300-W dataset was used to detect a face in the gray-value image [27]. It is based on the Multi-PIE database, in which frontal faces were annotated with 68 manually established facial feature points for different facial expressions over multiple recording sessions for 337 subjects. In the current study, this framework was employed to robustly detect faces and eyes in video frames at different levels of IR illumination, with or without spectacles or trial frames.

Out of the 68 point mappings from the facial landmarks, the 12 points #37 to #48 were used to locate the eyes. For example, the first six points, #37 to #42, were used to determine the region of interest (ROI) around the right eye, as shown in Fig. 3(A).
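The landmark-to-ROI step can be sketched as follows. `eye_roi` is a hypothetical helper, the margin value is an illustrative assumption, and the predictor file name refers to the commonly distributed 68-point dlib model; note that dlib indexes the landmarks from 0, so points #37 to #42 correspond to indices 36 to 41.

```python
def eye_roi(landmarks, margin=5):
    """Bounding box (x_min, y_min, x_max, y_max) around one eye.

    `landmarks` is a list of six (x, y) tuples, e.g. the zero-based dlib
    indices 36..41 corresponding to points #37..#42 for the right eye.
    The margin (in pixels) is an assumed padding around the landmarks.
    """
    xs = [p[0] for p in landmarks]
    ys = [p[1] for p in landmarks]
    return (min(xs) - margin, min(ys) - margin,
            max(xs) + margin, max(ys) + margin)

def detect_eye_rois(gray_image,
                    predictor_path="shape_predictor_68_face_landmarks.dat"):
    """Detect faces with dlib and return right-eye ROIs (hardware/data
    dependent part; the model file path is an assumption)."""
    import dlib
    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor(predictor_path)
    rois = []
    for face in detector(gray_image):
        shape = predictor(gray_image, face)
        points = [(shape.part(i).x, shape.part(i).y) for i in range(36, 42)]
        rois.append(eye_roi(points))
    return rois
```

The returned rectangle is then passed to the pupil detection stage, which only has to search a small patch instead of the full frame.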


Fig. 3. Workflow of the detection of an eye using mapping points #37 to #42 from the shape predictor and the calculation of the intensity profile in the pupil along the power meridian. A. Detected eye (red) B. Pupil location (green) using the circular Hough transform C. Pupil center (x,y) (yellow) D. Intensity profile measurement along the $0^{\circ }$ power meridian (blue).


2.2.4 Pupil detection

The pupil is detected within the ROI of the eye using a circular Hough transform-based algorithm [28]. This is a robust method to detect circles in an image, even in the presence of noise, occlusion or varying illumination [29]. For the fixed measurement distance, the range of diameters for circle detection could be fixed between 40 and 50 pixels, and the sensitivity factor was set to 0.93. These values can be modified based on the use case scenario. The detected circle parameters were obtained as the x- and y-coordinates of the pupil center along with the pupil diameter for each image frame. The pixels within the detected pupil were used for the subsequent analysis and the calculation of the respective brightness profiles and refraction values.
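A minimal sketch of the Hough-based pupil search, here using OpenCV's `HoughCircles`. Note that OpenCV exposes an accumulator threshold (`param2`) rather than the MATLAB-style sensitivity factor of 0.93 mentioned above, so the threshold values below are illustrative assumptions rather than the study's settings.

```python
def detect_pupil(eye_gray, min_diameter=40, max_diameter=50):
    """Locate the pupil inside an eye ROI with a circular Hough transform.

    `eye_gray` is an 8-bit single-channel image. Returns (x, y, diameter)
    in pixels, or None if no circle is found. The edge and accumulator
    thresholds (param1/param2) are assumed values, not the study's.
    """
    import cv2
    import numpy as np

    blurred = cv2.medianBlur(eye_gray, 5)          # suppress sensor noise
    circles = cv2.HoughCircles(
        blurred, cv2.HOUGH_GRADIENT, dp=1,
        minDist=max_diameter,                      # at most one pupil expected
        param1=100, param2=15,                     # assumed thresholds
        minRadius=min_diameter // 2,
        maxRadius=max_diameter // 2)
    if circles is None:
        return None
    x, y, r = np.round(circles[0, 0]).astype(int)
    return x, y, 2 * r                             # center and pupil diameter
```

Restricting the radius range to the expected pupil size at the fixed 50 cm working distance is what makes the search fast and robust against the iris and spectacle-rim edges.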

2.2.5 Refraction calculation along three meridians

Brightness slopes are evaluated along the $0^{\circ }$, $60^{\circ }$ and $120^{\circ }$ meridians around the x- and y-coordinates of the pupil center over 80 % of the pupil diameter. For each meridian, the gray values of the pixels were read and a linear regression was fitted [22], while peak values, e.g. from the first Purkinje image, were removed before fitting. The slopes of the intensity profile were averaged over five consecutive frames and this was repeated five times. The averages of these measurements along each meridian were converted into the meridional powers R(0), R(60) and R(120) as per Eq. (1), according to the formula from Gekeler and Schaeffel (1997) [22]. The standard deviation over the 25 frames can be used as an estimate of data quality.

$$R(\textrm{angle})=\textrm{Conversion factor}\,\times\,\textrm{Slope}\,+\,\textrm{Offset}$$
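The per-meridian slope extraction and the conversion of Eq. (1) can be sketched as follows, assuming the frame is available as a NumPy array of gray values; the removal of Purkinje-image peaks is omitted for brevity.

```python
import numpy as np

def meridional_slope(image, center, pupil_diameter, angle_deg, fraction=0.8):
    """Fit the brightness slope along one meridian through the pupil center.

    Samples gray values on a line at `angle_deg` over `fraction` of the
    pupil diameter and returns the slope of a first-order fit
    (gray value per pixel), as used in Eq. (1).
    """
    radius = 0.5 * fraction * pupil_diameter
    t = np.linspace(-radius, radius, int(2 * radius) + 1)  # positions in px
    theta = np.radians(angle_deg)
    xs = np.round(center[0] + t * np.cos(theta)).astype(int)
    ys = np.round(center[1] + t * np.sin(theta)).astype(int)
    values = image[ys, xs].astype(float)                   # gray-value profile
    slope, _intercept = np.polyfit(t, values, 1)
    return slope

def refraction_from_slope(slope, conversion_factor, offset):
    """Eq. (1): meridional power from the fitted brightness slope."""
    return conversion_factor * slope + offset
```

In the device, this pair of functions is applied per meridian and per frame; averaging the slopes over the 25 frames before applying Eq. (1) then yields R(0), R(60) and R(120).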
These meridional power values were then used to calculate the power vectors $M$, $J_{0}$ and $J_{45}$ and further the sphere, cylinder and axis, as shown in the following Eqs. (2)–(7), as per Leube et al. (2018) [30].
$$M=\frac{R(0)+R(60)+R(120)}{3}$$
$$J_0=\frac{2*R(0)-R(60)-R(120)}{3}$$
$$J_{45}= \frac{R(60)-R(120)}{\sqrt{3}}$$
$$Sph=M+\sqrt{(J_0^2+J_{45}^2)}$$
$$Cyl=-2*\sqrt{(J_0^2+J_{45}^2)}$$
$$Axis=0.5* \tan^{-1} (\frac{J_{45}}{J_0})$$
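The conversion above can be implemented directly. One small robustness addition over the formulas: `atan2` is used instead of a plain arctangent so that the axis remains well defined when $J_0$ is zero.

```python
import math

def sphere_cylinder_axis(r0, r60, r120):
    """Sphere, cylinder and axis (degrees) from the meridional powers
    R(0), R(60) and R(120), via the power vectors of Eqs. (2)-(7)."""
    M = (r0 + r60 + r120) / 3.0                        # Eq. (2)
    J0 = (2.0 * r0 - r60 - r120) / 3.0                 # Eq. (3)
    J45 = (r60 - r120) / math.sqrt(3.0)                # Eq. (4)
    astig = math.sqrt(J0 ** 2 + J45 ** 2)
    sphere = M + astig                                 # Eq. (5)
    cylinder = -2.0 * astig                            # Eq. (6)
    axis = 0.5 * math.degrees(math.atan2(J45, J0))     # Eq. (7)
    return sphere, cylinder, axis
```

As a sanity check, three equal meridional powers must yield a purely spherical result with zero cylinder.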

2.3 Assessment of the working range of the infrared photorefractor

To determine the working range of the developed infrared photorefractor, twenty eye-healthy participants from the University of Tuebingen with a mean age of $29 \pm 6.1$ years were recruited. The age range of the participants was 24 - 38 years. Participants with ocular pathologies, corneal laser surgery or other ocular health issues were excluded. The mean spherical equivalent refractive errors of the right and left eyes were $-0.78 \pm 1.57$ D and $-0.51 \pm 1.40$ D, respectively. The experiments followed the tenets of the Declaration of Helsinki of 1964, and approval from the ethics board committee of the University of Tuebingen was obtained for this investigation. Informed consent was collected from all subjects after the indications and potential consequences of the measurements had been explained in detail.

The working range of the infrared photorefractor is given by the linear range for spherical powers [22]. The spherical linearity of the infrared photorefractor was assessed by calibration measurements. Calibration was performed with the aim of determining the change of the brightness slope with respect to an induced refractive error, using ophthalmic trial lenses (+6.0 D to −6.0 D) in steps of 1.0 D, and further to calculate an individual conversion factor as defined by Schaeffel et al. (1993) [31]. This was done to verify whether the infrared photorefractor output shows a linear relationship with the refractive status and to confirm the technical feasibility of the setup. The calibration measurements were additionally performed for an artificial eye, with an ophthalmic trial lens placed in front of the artificial eye in the same manner.

For the calibration measurements, a trial frame with correction lenses of the habitual values was placed in front of the eyes [31]. The infrared photorefractor setup was placed at a distance of 50 cm, which simulates a normal hand-held setting. It was ensured that the eye was aligned with the camera. Additionally, an infrared-passing filter ($\lambda _{cutoff}\,=\,820$ nm) blocking visible wavelengths was placed in front of the measured eye in order to avoid induced accommodation during the measurements with negative-power lenses. The fellow eye of the participant was left open with only the habitual far correction in place, and the subject was instructed to fixate at a far distance of about 5.0 m on a high-contrast Maltese cross target. The calibration measurements were repeated 5 times for each trial lens. The calibration data were used to analyze the relationship between the induced refractive error and the slope of the pupil brightness to determine the linear range (working range) of the infrared photorefractor.
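The calibration step amounts to a first-order regression of the induced refractive errors on the measured brightness slopes, restricted to the linear working range. `fit_calibration` below is an illustrative helper, not the study's actual code.

```python
import numpy as np

def fit_calibration(induced_errors, slopes):
    """Fit induced refractive error vs. measured brightness slope.

    Returns (conversion_factor, offset) such that
    refraction = conversion_factor * slope + offset,
    i.e. the coefficients of Eq. (1). Only data points from the linear
    working range should be passed in.
    """
    conversion_factor, offset = np.polyfit(slopes, induced_errors, 1)
    return conversion_factor, offset
```

Fitting once per eye yields the individual conversion factor and offset that are later applied to every meridional slope measurement.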

2.4 Assessment of feasibility of astigmatism measurements

To determine the sphero-cylindrical refractive errors, the IR-LED arrays positioned at $0^{\circ }$, $60^{\circ }$ and $120^{\circ }$ were operated sequentially and synchronized with the camera. Brightness slopes were evaluated along the respective meridians over the full pupil diameter, and meridional powers were calculated using the individual calibration factors and offset values for each eye, as described before. The measured meridional powers are transformed into the power vectors $M$, $J_0$ and $J_{45}$ [32] for the refraction calculation, using Eqs. (2)–(7).

2.4.1 Artificial eye

For this experiment, an artificial eye made of silicone was placed at a working distance of 50 cm and a −2.0 D cylindrical lens was placed in front of it together with an IR-passing filter ($\lambda _{cutoff}\,=\,820$ nm). The measurement started with the cylinder lens at the $0^{\circ }$ axis. Measurements were obtained by rotating the cylinder lens in axis increments of $10^{\circ }$ up to $180^{\circ }$. The slopes of the intensity profiles were calculated at each angle for the three IR-LED array segments, which were operated synchronously with the camera.

2.4.2 Human eye

The refractive measurements were also performed to assess astigmatism in a human eye to evaluate the performance of the infrared photorefractor. For this experiment, a participant was seated at a 50 cm working distance and a trial frame with the habitual correction was placed. As in the artificial eye measurement, an additional −2.0 D cylindrical lens was placed in front of the right eye with an IR-passing filter ($\lambda _{cutoff}\,=\,820$ nm). The left eye was kept open and the subject was asked to fixate on a high-contrast Maltese cross target placed at a distance of about 5 m. The measurements started with the $0^{\circ }$ axis of the cylinder lens and proceeded in increments of $10^{\circ }$ up to $180^{\circ }$. The slopes of the intensity profiles were calculated at each angle for the different IR-LED array segments.

3. Results

3.1 Irradiance control using pulse width modulation

To characterize the output illumination of the modulated LEDs, spectrometer measurements were carried out at a very close distance (5 cm) for the off condition of the LEDs and for the three duty cycle values, as shown in Fig. 4. The maximum spectral irradiance was obtained as 43.7 µW cm$^{-2}$ nm$^{-1}$, 29.1 µW cm$^{-2}$ nm$^{-1}$ and 16.1 µW cm$^{-2}$ nm$^{-1}$ for the 90 %, 60 % and 30 % duty cycles, respectively. These maxima were always observed at a peak wavelength of 857 nm. Thus, the LEDs emit higher illumination power at a higher duty cycle percentage and vice versa. Additionally, measurements were conducted at the working distance of 50 cm from the infrared photorefractor, where the maximum values were lower than 1.0 nW cm$^{-2}$ nm$^{-1}$ for all applied duty cycles.


Fig. 4. Spectrometer measurements to measure spectral irradiance of IR-LEDs at a distance of 50 cm at different duty cycle percentages.


3.2 Linear range for assessing spherical refraction

The calibration plots of the slope of the pupil brightness against the trial lens power showed a linear trend. The individual conversion factors were calculated for the linear measurement range of the infrared photorefractor. The mean calibration plot between the power of the trial lens and the slope of the pupil brightness is shown in Fig. 5.


Fig. 5. Mean calibration curve for the power of the trial lenses (induced refractive errors) vs the slope of pupil brightness. The linear and non-linear regions of the camera are shown. The measurements were linear in the range of −4.0 D to +6.0 D trial lenses.


The slopes of the pupil brightness were linear for trial lens powers of −4.0 D to +6.0 D in the mean calibration curve. This plot was therefore used to obtain the conversion factor for the linear measurement range from the slope. The equation for calculating the refraction from the slope, derived from the calibration curve, is shown in Eq. (8).

$$Refraction = 0.98 * slope + 1.35$$

Therefore, 0.98 was obtained as the conversion factor and 1.35 as the offset. These values were used to derive the refraction from the obtained pupil brightness slopes in the subsequent measurements.

3.3 Assessment of astigmatic refractive errors

3.3.1 Performance in an artificial eye

The values of sphere, cylinder and axis were calculated for the artificial eye using Eqs. (2)–(7). The average errors for $M$, $J_0$ and $J_{45}$ were −2.55 D, −1.37 D and 0.46 D, respectively. The results obtained are shown in Fig. 6.


Fig. 6. Performance of the developed infrared photorefractor for astigmatism measurements in an artificial eye. (A) Sphere and cylinder measurements (B) Axis measurements.


3.3.2 Performance in human eye

The values of sphere, cylinder and axis were calculated for the human subject using Eqs. (2)–(7). The average errors for $M$, $J_0$ and $J_{45}$ were −1.12 D, −0.51 D and −0.34 D, respectively. The results obtained are shown in Fig. 7.


Fig. 7. Performance of the developed infrared photorefractor for astigmatism measurements in a human eye. (A) Sphere and cylinder measurements (B) Axis measurements.


4. Discussion

A Raspberry Pi board with a Pi-camera NOIR v2.0 has been used to develop an infrared photorefractor that can be connected to mobile devices for vision screening. The combined hardware costs are less than $80\,{\euro }$, providing low-cost components from which a screening device based on off-the-shelf hardware can be built. The device was developed at a very affordable price to be feasible for low- and middle-income countries, enabling screening for uncorrected refractive errors in a mobile and easy-to-perform procedure. It can further be equipped with software interfaces focused on user experience, either for self-administered refraction screening or for untrained personnel, enabling telemedicine applications.

4.1 Digital camera and illumination control

The advantage of the described hardware and software setup for photorefraction is that the entire recording can be controlled digitally and can therefore be adapted to many different recording environments using the same framework. The Pi-camera NOIR v2.0 was employed to allow imaging in the infrared range and therefore, unlike common smartphone cameras, does not have an IR filter. Its small size and easy connectivity to the Raspberry Pi are further advantages for its use in the infrared photorefractor. In previous studies [22,31], cameras with a manually adjustable aperture were used to control the exposure settings of the infrared photorefractor. In this study, in contrast, purely digital methods were used to control the camera gains and capture parameters such as the exposure settings. This enables a synchronized adjustment of the brightness of the LED arrays to achieve the best conditions, such as maintaining the pupil brightness within the linear range of the camera, for the measurement of refractive errors. The complete digital access to the camera and LED settings would also facilitate remote control of the infrared photorefractor via web-server-based access, which makes the presented device usable for telemedicine applications.

In a previous study by Gekeler et al. (1997), an opto-coupler was used to switch between the LED arrays [22]. In this study, the control of the LED array segments was performed differently: it was realized using the GPIO libraries of the Raspberry Pi, where a binary value of 1 or 0 turned the LED array on or off, respectively, through the switching of an NPN transistor. Thus, a simple circuit with fewer electronic components was used in this study. This made it possible to use a single minicomputer to control the LEDs and the camera as well as to process the videos to determine refraction. Thus, complete digital control of the infrared photorefractor system from a low-cost minicomputer was achieved in this study.

The brightness of the LED arrays was controlled using PWM with three different duty cycle values via the GPIO pins of the Raspberry Pi. Thus, multiple brightness levels could be achieved for the measurements, and an optimum brightness level could be chosen for determining the meridional powers. The average pixel brightness was used, as recommended by Schaeffel et al. (1993), to determine the fundus reflex brightness, which entered the determination of the conversion factor [31]. There are other methods to control the brightness of LEDs, such as using a digital or analog potentiometer. Additionally, a microcontroller such as an Arduino could be used to control the brightness of the LEDs using hardware PWM [33]. However, the use of such additional electronic components would make the device bulkier and require more power for operation. Other software methods to achieve dimming of LEDs include gated pulse width modulation (GPWM) and binary-weighted PWM (BPWM) [34]. However, these are beneficial when a large number of grayscale intensity levels is needed, such as for LED video displays [34]. The use of GPIO pins to operate the IR-LEDs with PWM allowed the entire system to run using a single programming language, synchronized with the video acquisition and data analysis.

4.2 Experimental validation of assessment of refractive errors

In this experiment, a working distance of 50 cm gave the desired results for the measurement of the refractive errors. In previous studies, various subject-to-camera distances in the range of 80 cm to 1.5 m have been used, resulting in different working ranges of the photorefractors [15,22,31,35]. The mean linear measurement range achieved with the current setup is between +4.0 D and −6.0 D of refractive error. Choi et al. (2000) observed a linear measurement range between −4.0 D and +6.0 D at a distance of 1 m [35]. Schaeffel et al. (1993) [31] used a working distance of 1.3 m and achieved a measurement range between −5.0 D and +5.0 D. However, the linear range further depends on the eccentricity between the light source and the camera [36]. Using multiple eccentricities has the advantage that the LEDs at smaller eccentricities increase the sensitivity to small changes in defocus, while those at higher eccentricities increase the measurement range [36]. Therefore, the use of an extended light source increases the overall working range while giving the photorefractor a smaller dead zone [36]. Although the range in this experiment is reduced on the hyperopic side, the shorter distance enables the use of a low-cost camera for the measurement, which would otherwise be limited by the number of available pupil pixels at a larger distance. The lower working distance was chosen to investigate the potential use of the infrared photorefractor as a hand-held device for self-screening of refractive errors. The advantages and disadvantages of lower and higher working distances are summarized in Table 2.


Table 2. Advantages and disadvantages of different working distances of the photorefractor

In previous studies, the calibration factor of a photorefractor has been shown to depend on a variety of factors such as the camera system, light source, working distance, pupil size and ethnicity [37–39]. However, due to the small number of subjects, these factors were not evaluated at this stage of evaluating the feasibility of the developed photorefractor.

When a cylinder lens was rotated in front of the human eye, the measured cylinders displayed sinusoidal variations with a period of approximately $60^{\circ }$. This observation is similar to that of Gekeler et al. (1997) [22]. In their experiment, the sphere also varied sinusoidally but in anti-phase, shifted by $60^{\circ }$ with respect to the measured cylinder. The average offsets of the sphere and cylinder values were 0.37 D and 0.30 D, compared to 0.289 D and 0.248 D for Gekeler et al. (1997) [22]. For the determination of the axis, the infrared photorefractor showed an average absolute error of $9^{\circ }$ for the artificial eye and $10^{\circ }$ for the human eye in this experiment, compared to $9.05^{\circ }$ and $8.27^{\circ }$ for the photorefractor of Gekeler et al. (1997) [22]. These deviations could partly be explained by a tilt of the lenses placed in front of the eye, resulting in the apparent modulation [22].

Commercially available photorefractor devices have been used for the last two decades for screening infants as well as children in schools and clinics [35,40–46]. Their major disadvantage is their high cost, which makes them inaccessible in many low- and middle-income countries with limited financial resources. The infrared photorefractor developed in this study costs less than 80 € for all the hardware components used. This would make screening accessible to infants and children at an affordable cost throughout the world and could thus reduce the incidence of strabismus and amblyopia in infants.

4.3 Remote connectivity abilities of an infrared photorefractor

The connectivity options of the Raspberry Pi board, such as Bluetooth and Wi-Fi, can be used to connect the device to a data server. Recorded videos and results could be sent to eye care professionals [7], which could further facilitate remote photorefraction. Secure access via a remote server (VNC Server, RealVNC Limited, Cambridge, United Kingdom) with password protection was tested during this study. Although refraction measurements were not performed remotely during this study, they are possible and would enable the use of this device to obtain recordings in remote areas, allowing measurement of refraction without specially trained personnel on site. Face detection in the recordings could further be used to estimate the head pose of the user [47–49]. Head pose estimation has already been applied to drowsiness detection, gesture control and behaviour analysis in intelligent environments [50–54]. Feedback could be provided to guide users to align their head position with respect to the camera, thus improving the accuracy of the refractive error measurements.
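One building block of such a telemedicine workflow is packaging a measurement for upload to a data server. The sketch below is a hypothetical illustration, not the implementation used in this study; all field names and the device identifier are assumptions:

```python
import json
from datetime import datetime, timezone

def package_measurement(sphere: float, cylinder: float, axis: float,
                        device_id: str = "photorefractor-01") -> str:
    """Serialize one refraction measurement as a JSON record suitable
    for transmission to a remote data server. Field names are
    illustrative and not taken from the original study."""
    record = {
        "device_id": device_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "sphere_D": round(sphere, 2),
        "cylinder_D": round(cylinder, 2),
        "axis_deg": round(axis, 1),
    }
    return json.dumps(record)
```

Such a record could then be sent over Wi-Fi to a server and reviewed remotely by an eye care professional, in line with the telemedicine scenario sketched in the text.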

5. Conclusion

Using minicomputer technology together with simple infrared LED arrays demonstrated the principal technical feasibility of low-cost photorefraction with a handheld device. The linear range for the assessment of refractive errors revealed significant potential for myopia screening and for the determination of astigmatic errors, in accordance with previous studies. Further investigations with a larger number of subjects, especially children, are necessary to validate the device. The developed infrared photorefractor is portable, stand-alone and low-cost, and could potentially be used remotely for the determination of refractive errors.

Funding

Bundesministerium für Bildung und Forschung (ZUK 63); Deutsche Forschungsgemeinschaft (Open Access Publishing Fund).

Disclosures

AL & SW: Carl Zeiss Vision International GmbH (E). RA: No conflicts of interests related to this article.

References

1. D. Pascolini and S. P. Mariotti, “Global estimates of visual impairment: 2010,” Br. J. Ophthalmol. 96(5), 614–618 (2012). [CrossRef]  

2. S. Resnikoff, D. Pascolini, S. P. Mariotti, and G. P. Pokharel, “Global magnitude of visual impairment caused by uncorrected refractive errors in 2004,” Bull. W. H. O. 86(1), 63–70 (2008). [CrossRef]  

3. J. Sjöstrand and M. Abrahamsson, “Risk factors in amblyopia,” Eye 4(6), 787–793 (1990). [CrossRef]  

4. M. Pascual, J. Huang, M. G. Maguire, M. T. Kulp, G. E. Quinn, E. Ciner, L. A. Cyert, D. Orel-Bixler, B. Moore, G.-s. Ying, and Vision in Preschoolers (VIP) Study Group, “Risk factors for amblyopia in the vision in preschoolers study,” Ophthalmology 121(3), 622–629.e1 (2014). [CrossRef]

5. M. Eibschitz-Tsimhoni, T. Friedman, J. Naor, N. Eibschitz, and Z. Friedman, “Early screening for amblyogenic risk factors lowers the prevalence and severity of amblyopia,” J. Am. Assoc. for Pediatr. Ophthalmol. Strabismus 4(4), 194–199 (2000). [CrossRef]  

6. A. Seidemann and F. Schaeffel, “An evaluation of the lag of accommodation using photorefraction,” Vision Res. 43(4), 419–430 (2003). [CrossRef]  

7. Y.-L. Chen, J. Lewis, N. Kerr, and R. A. Kennedy, “Computer-based real-time analysis in mobile ocular screening,” Telemed. e-Health 12(1), 66–72 (2006). [CrossRef]  

8. G. Cibis, “Video vision development assessment in diagnosis and documentation of microtropia,” Binocular vision & strabismus quarterly 20(3), 151–158 (2005).

9. B. Howland, “Photographic method for study of vision from a distance,” Research Laboratory of Electronics (RLE) Quarterly Progress Report (QPR) 67, 197–204 (1962).

10. H. C. Howland, O. Braddick, J. Atkinson, and B. Howland, “Optics of photorefraction: orthogonal and isotropic methods,” J. Opt. Soc. Am. 73(12), 1701–1708 (1983). [CrossRef]  

11. K. Kaakinen and V. Tommila, “A clinical study on the detection of strabismus, anisometropia or ametropia of children by simultaneous photography of the corneal and the fundus reflexes,” Acta Ophthalmol. 57(4), 600–611 (2009). [CrossRef]  

12. H. C. Howland, “Optics of photoretinoscopy: results from ray tracing,” Optom. Vis. Sci. 62(9), 621–625 (1985). [CrossRef]  

13. W. Bobier and O. Braddick, “Eccentric photorefraction: optical analysis and empirical measures,” Optom. Vis. Sci. 62(9), 614–620 (1985). [CrossRef]  

14. F. Schaeffel, H. C. Howland, and L. Farkas, “Natural accommodation in the growing chicken,” Vision Res. 26(12), 1977–1993 (1986). [CrossRef]  

15. F. Schaeffel, L. Farkas, and H. C. Howland, “Infrared photoretinoscope,” Appl. Opt. 26(8), 1505–1509 (1987). [CrossRef]  

16. Vision in Preschoolers Study Group, “Sensitivity of screening tests for detecting vision in preschoolers-targeted vision disorders when specificity is 94%,” Optom. Vis. Sci. 82(5), 432–438 (2005). [CrossRef]

17. N. S. Matta, E. L. Singman, and D. I. Silbert, “Performance of the plusoptix vision screener for the detection of amblyopia risk factors in children,” J. Am. Assoc. for Pediatr. Ophthalmol. Strabismus 12(5), 490–492 (2008). [CrossRef]  

18. R. W. Arnold and M. D. Armitage, “Performance of four new photoscreeners on pediatric patients with high risk amblyopia,” J. Pediatr. Ophthalmol. Strabismus 51(1), 46–52 (2014). [CrossRef]  

19. P. Ghassemi, B. Wang, J. Wang, Q. Wang, Y. Chen, and T. Joshua Pfefer, “Evaluation of Mobile Phone Performance for Near-Infrared Fluorescence Imaging,” IEEE Trans. Biomed. Eng. 64(7), 1650–1653 (2017). [CrossRef]  

20. Y. Chen and F. Schaeffel, “Factors affecting the calibration of white light eccentric photorefraction,” Invest. Ophthalmol. Vis. Sci. 56(1), 526–537 (2015). [CrossRef]  

21. Osram Opto Semiconductors GmbH, “Datasheet SFH 4356 Version 1.3,” Tech. rep., Osram Opto Semiconductors GmbH (2018).

22. F. Gekeler, F. Schaeffel, H. C. Howland, and J. Wattam-Bell, “Measurement of astigmatism by automated infrared photoretinoscopy,” Optom. Vis. Sci. 74(7), 472–482 (1997). [CrossRef]  

23. “IEC 62471:2006 Photobiological safety of lamps and lamp systems,” International standard, International Electrotechnical Commission (2006).

24. G. Bradski, “The OpenCV Library,” Dr. Dobb’s Journal of Software Tools (2000).

25. D. Jones, “Picamera 1.13 Documentation,” Tech. rep. (2020).

26. Raspberry Pi Foundation, “Official Raspberry Pi Camera Guide,” Tech. rep., Raspberry Pi Foundation, UK (2013).

27. C. Sagonas, G. Tzimiropoulos, S. Zafeiriou, and M. Pantic, “300 faces in-the-wild challenge: The first facial landmark localization challenge,” in Proceedings of the IEEE International Conference on Computer Vision Workshops, (2013), pp. 397–403.

28. R. O. Duda and P. E. Hart, “Use of the hough transformation to detect lines and curves in pictures,” Commun. ACM 15(1), 11–15 (1972). [CrossRef]  

29. “Image Processing Toolbox™ Reference R 2016,” Tech. rep., The Mathworks Inc (2016).

30. A. Leube, C. Kraft, A. Ohlendorf, and S. Wahl, “Self-assessment of refractive errors using a simple optical approach,” Clin. Exp. Optom. 101(3), 386–391 (2018). [CrossRef]  

31. F. Schaeffel, H. Wilhelm, and E. Zrenner, “Inter-individual variability in the dynamics of natural accommodation in humans: relation to age and refractive errors,” J. Physiol. 461(1), 301–320 (1993). [CrossRef]  

32. L. N. Thibos, W. Wheeler, and D. Horner, “Power vectors: an application of fourier analysis to the description and statistical analysis of refractive error,” Optom. Vis. Sci. 74(6), 367–375 (1997). [CrossRef]  

33. G. Recktenwald, “Basic Pulse Width Modulation,” Tech. rep., Portland State University (2011).

34. L. Svilainis, “Led brightness control for video display application,” Displays 29(5), 506–511 (2008). [CrossRef]  

35. M. Choi, S. Weiss, F. Schaeffel, A. Seidemann, H. C. Howland, B. Wilhelm, and H. Wilhelm, “Laboratory, clinical, and kindergarten test of a new eccentric infrared photorefractor (powerrefractor),” Optom. Vis. Sci. 77(10), 537–548 (2000). [CrossRef]  

36. A. Roorda, M. C. Campbell, and W. R. Bobier, “Slope-based eccentric photorefraction: theoretical analysis of different light source configurations and effects of ocular aberrations,” J. Opt. Soc. Am. A 14(10), 2547–2556 (1997). [CrossRef]  

37. N. G. Sravani, V. K. Nilagiri, and S. R. Bharadwaj, “Photorefraction estimates of refractive power varies with the ethnic origin of human eyes,” Sci. Rep. 5(1), 7976 (2015). [CrossRef]  

38. S. R. Bharadwaj, N. G. Sravani, J.-A. Little, A. Narasaiah, V. Wong, R. Woodburn, and T. R. Candy, “Empirical variability in the calibration of slope-based eccentric photorefraction,” J. Opt. Soc. Am. A 30(5), 923–931 (2013). [CrossRef]  

39. P. J. Blade and T. R. Candy, “Validation of the powerrefractor for measuring human infant refraction,” Optom. Vis. Sci. 83(6), 346–353 (2006). [CrossRef]  

40. P. Y. Tong, R. E. Bassin, E. Enke-Miyazaki, J. P. Macke, J. M. Tielsch, D. R. Stager Sr, G. R. Beauchamp, M. M. Parks, T. N. C. E. Care, and F. V. S. S. Group, “Screening for amblyopia in preverbal children with photoscreening photographs: Ii. sensitivity and specificity of the mti photoscreener,” Ophthalmology 107(9), 1623–1629 (2000). [CrossRef]  

41. P. Y. Tong, E. Enke-Miyazaki, R. E. Bassin, J. M. Tielsch, D. R. Stager Sr, G. R. Beauchamp, M. M. Parks, and T. N. C. E. Care, “Screening for amblyopia in preverbal children with photoscreening photographs,” Ophthalmology 105(5), 856–863 (1998). [CrossRef]  

42. A. M. Thompson, T. Li, L. B. Peck, H. C. Howland, R. Counts, and W. R. Bobier, “Accuracy and precision of the tomey viva infrared photorefractor,” Optom. Vis. Sci. 73(10), 644–652 (1996). [CrossRef]  

43. S. P. Donahue, T. M. Johnson, and T. C. Leonard-Martin, “Screening for amblyogenic factors using a volunteer lay network and the mti photoscreener: Initial results from 15,000 preschool children in a statewide effort,” Ophthalmology 107(9), 1637–1644 (2000). [CrossRef]  

44. B. E. Berry, B. D. Simons, R. M. Siatkowski, and J. C. Schiffman, “Preschool vision screening using the mti-photoscreener,” Pediatric nursing 27(1), 27–34 (2001).

45. K. M. Mohan, J. M. Miller, V. Dobson, E. M. Harvey, and D. L. Sherrill, “Inter-rater and intra-rater reliability in the interpretation of mti photoscreener photographs of native american preschool children,” Optom. Vis. Sci. 77(9), 473–482 (2000). [CrossRef]  

46. C. Williams, R. Lumb, I. Harvey, and J. M. Sparrow, “Screening for refractive errors with the topcon pr2000 pediatric refractometer,” Investigative ophthalmology & visual science 41(5), 1031–1037 (2000).

47. T. Vatahska, M. Bennewitz, and S. Behnke, “Feature-based head pose estimation from images,” in 2007 7th IEEE-RAS International Conference on Humanoid Robots, (IEEE, 2007), pp. 330–335.

48. M. Martin, F. Van De Camp, and R. Stiefelhagen, “Real time head model creation and head pose estimation on consumer depth cameras,” in 2014 2nd International Conference on 3D Vision, vol. 1 (IEEE, 2014), pp. 641–648.

49. M. Voit, K. Nickel, and R. Stiefelhagen, “Neural network-based head pose estimation and multi-view fusion,” in Multimodal Technologies for Perception of Humans, R. Stiefelhagen and J. Garofolo, eds. (Springer Berlin Heidelberg, 2007), pp. 291–298.

50. E. Murphy-Chutorian and M. M. Trivedi, “Head pose estimation in computer vision: A survey,” IEEE Trans. Pattern Anal. Mach. Intell. 31(4), 607–626 (2009). [CrossRef]  

51. S. Baker, I. Matthews, J. Xiao, R. Gross, T. Kanade, and T. Ishikawa, “Real-time non-rigid driver head tracking for driver mental state estimation,” in 11th World Congress on Intelligent Transportation Systems, (2004).

52. Z. Guo, H. Liu, Q. Wang, and J. Yang, “A fast algorithm face detection and head pose estimation for driver assistant system,” in 2006 8th international Conference on Signal Processing, vol. 3 (IEEE, 2006).

53. L.-P. Morency, C. M. Christoudias, and T. Darrell, “Recognizing gaze aversion gestures in embodied conversational discourse,” in Proceedings of the 8th international conference on Multimodal interfaces, (2006), pp. 287–294.

54. S. Basu, T. Choudhury, B. Clarkson, and A. Pentland, “Towards measuring human interactions in conversational settings,” in Proc. IEEE CVPR Workshop on Cues in Communication, (2001).



Figures (7)

Fig. 1.
Fig. 1. A. Schematic diagram of the arrangement of IR-LEDs in the developed eccentric infrared photorefractor. There are three IR-LED segments at an angle of $60^{\circ }$ each. Each segment has nine IR-LEDs. The camera is placed at the center denoted by a cross in the setup. B. Developed eccentric infrared photorefractor. The camera is placed at the center behind an IR passing filter.
Fig. 2.
Fig. 2. Components of software implementation of infrared photorefractor.
Fig. 3.
Fig. 3. Workflow of detection of eye using mapping points #37 to #42 from the shape predictor and calculation of intensity profile in the pupil along the power meridian. A. Detected eye (red) B. Pupil location (green) using Circular Hough transform C. Pupil center (x,y) (yellow) D. Intensity profile measurement along the $0^{\circ }$ power meridian (blue).
Fig. 4.
Fig. 4. Spectrometer measurements to measure spectral irradiance of IR-LEDs at a distance of 50 cm at different duty cycle percentages.
Fig. 5.
Fig. 5. Mean calibration curve for power of trial lenses (induced refractive errors) vs slope of pupil brightness. The linear and non-linear regions of the camera are shown. The measurements were linear in the range −4.0 D to +6.0 D trial lenses.
Fig. 6.
Fig. 6. Performance of the developed infrared photorefractor for astigmatism measurements in an artificial eye. (A) Sphere and cylinder measurements (B) Axis measurements.
Fig. 7.
Fig. 7. Performance of the developed infrared photorefractor for astigmatism measurements in a human eye. (A) Sphere and cylinder measurements (B) Axis measurements.

Tables (2)


Table 1. Parameter settings for Raspberry Pi-camera NOIR v2.0


Table 2. Advantages and disadvantages of different working distances of a photorefractor

Equations (8)


$R(\mathrm{angle}) = \mathrm{Conversion\ factor} \times \mathrm{Slope} + \mathrm{Offset}$
$M = \dfrac{R(0^{\circ}) + R(60^{\circ}) + R(120^{\circ})}{3}$
$J_0 = \dfrac{2R(0^{\circ}) - R(60^{\circ}) - R(120^{\circ})}{3}$
$J_{45} = \dfrac{R(60^{\circ}) - R(120^{\circ})}{\sqrt{3}}$
$Sph = M + \sqrt{J_0^2 + J_{45}^2}$
$Cyl = -2\sqrt{J_0^2 + J_{45}^2}$
$Axis = 0.5\,\tan^{-1}\!\left(\dfrac{J_{45}}{J_0}\right)$
$Refraction = 0.98 \times slope + 1.35$