
LCD-based digital eyeglass for modulating spatial-angular information

Open Access

Abstract

Using a programmable aperture to modulate the spatial-angular information of a light field is a well-known approach in computational photography and microscopy. Inspired by this concept, we report a digital eyeglass design that adaptively modulates the light field entering the human eye. The main hardware includes a transparent liquid crystal display (LCD) and a mini-camera. The device analyzes the spatial-angular information of the camera image in real time and subsequently sends a command to form a certain pattern on the LCD. We show that the eyeglass prototype can adaptively reduce light transmission from bright sources by ~80% while remaining transparent to other, dimmer objects. One application of the reported device is to reduce discomforting glare caused by vehicle headlamps. To this end, we report preliminary results of a road test using the device. The reported device may also find applications in military operations (sniper scopes), laser countermeasures, STEM education, and enhancing visual contrast for visually impaired patients and elderly people with low vision.

© 2015 Optical Society of America

1. Introduction

Imaging using a programmable aperture is a well-known technique in computational photography and microscopy. Over the past few years, programmable-aperture imaging has been demonstrated for various applications, including depth-of-field extension [1], lensless imaging [2], natural matting [3], 4D light field acquisition [4], and multimodal microscopy imaging [5]. A typical implementation of a programmable aperture is to use a transparent liquid crystal display (LCD) to adaptively control the aperture pattern at the pupil plane. For photographic applications, the LCD can be placed in the detection path to modulate the angular information of the incoming light [4]. It has also been shown that an external aperture can be placed in front of a camera lens to modulate both the spatial and angular information of the incoming light [6]. For microscopy applications, our group has recently demonstrated the use of an LCD for modulating the illumination angle and performing super-resolution Fourier ptychographic imaging [5].

Similar to photographic and microscopy imaging systems, the human eye is an optical system that captures light entering the cornea. Inspired by previous research on programmable-aperture imaging, we can adopt the same concept to modulate the light that enters the human eye. In this paper, we report the design of a digital eyeglass that uses a mini-camera for image acquisition and LCDs for light modulation [6]. Different from previous programmable-aperture imaging techniques, the goal of the reported device is to adaptively reduce light transmission from bright sources. The device operates by first analyzing the spatial-angular information of the camera image and then sending a command to form certain patterns on the LCDs. We show that the LCDs positioned in front of the eyes can adaptively reduce light transmission from bright sources by ~80% while remaining transparent to other, dimmer objects.

We envision a broad range of applications for the reported device. First, glare from vehicle headlamps and other bright sources has long been recognized as a cause of traffic accidents. In particular, the high-intensity discharge lamps used in many high-end car models have raised many public concerns with the National Highway Traffic Safety Administration [7]. The glare problem also affects whether people drive at all. Elderly or visually impaired drivers, in particular, require a longer time to recover their visual sensitivity after being exposed to glare [8]. The reported device may provide a simple solution for the driving public to reduce light transmission from the headlamps of oncoming cars. Second, in military operations, our design can enhance the functionality of a sniper scope when targets are buried in surrounding bright light sources. The reported device can be placed in front of the scope to reduce light transmission from bright sources, helping the sniper better aim at the target. Third, non-lethal laser weapons have been widely used for temporarily blinding an adversary's vision. In some incidents, protesters have also aimed high-power laser pointers at law enforcement officials. The reported device may find application in laser countermeasures by reducing light transmission from laser pointers/weapons. Fourth, the development of the reported device involves basic electronic circuitry, microcontroller programming, product design with a 3D printer, and computational image processing. Such a combination represents an excellent educational tool for students who want to pursue careers in STEM fields.

This paper is structured as follows: in section 2, we describe the design principle of the reported device and the construction of the prototype. In section 3, we present the experimental performance of the device. In particular, we show that the reported device is able to adaptively reduce light transmission from bright sources by ~80%. We also report the results of a road test with the device mounted in a car. Finally, we summarize the results in section 4 and discuss future directions.

2. LCD-based digital eyeglass: concept and operation

The schematic of the reported device is shown in Fig. 1(a). In this setup, we used a low-cost mini-camera to identify the incident angles of bright sources and placed two transparent LCDs in front of the two eyes. The working principle of the reported device can be explained in two steps: 1) we use the camera to capture an image of the scene. This image is thresholded and processed to identify the incident angles of bright light sources. The threshold value can be based on visual discomfort and easily adjusted for different users. 2) Based on the incident angles of the light sources, we set certain patterns on the displays to modulate the light entering the eye (selectively reducing transmission from the identified bright sources; the red arrows in Fig. 1(a)). The displays, on the other hand, remain transparent to other dim objects (blue arrows in Fig. 1(a)).
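As a concrete illustration of these two steps, the following minimal sketch (in Python with NumPy) thresholds a grayscale camera frame and maps the bright pixels to display pixels. The camera resolution, threshold value, and the assumption that the camera and display fields of view coincide are illustrative simplifications rather than the exact implementation used in our prototype.

import numpy as np

LCD_W, LCD_H = 84, 48          # Nokia 5110 display resolution (pixels)
CAM_W, CAM_H = 640, 480        # assumed camera resolution
GLARE_THRESHOLD = 220          # 8-bit intensity threshold; user adjustable

def compute_lcd_pattern(frame):
    """frame: (CAM_H, CAM_W) grayscale image as a NumPy array.
    Returns a boolean (LCD_H, LCD_W) mask; True = darken that display pixel."""
    bright = frame > GLARE_THRESHOLD                  # step 1: threshold
    ys, xs = np.nonzero(bright)
    pattern = np.zeros((LCD_H, LCD_W), dtype=bool)
    if xs.size == 0:
        return pattern                                # nothing to block
    # Step 2: map bright camera pixels to display pixels. We assume the
    # camera and display fields of view coincide, so rescaling suffices.
    pattern[ys * LCD_H // CAM_H, xs * LCD_W // CAM_W] = True
    return pattern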


Fig. 1 Schematic and prototype setup of the LCD-based digital eyeglass. (a) We use a mini-camera to identify the incident angles of bright light sources and place two liquid crystal displays in front of the two eyes. Based on the incident angles of the light sources, we set certain patterns on the displays to selectively reduce light transmission from those sources. The displays, on the other hand, remain transparent to dim objects. (b) The LCD-based digital eyeglass prototype.


Figure 1(b) shows the prototype of the reported device. The core component is a low-cost transparent liquid crystal display (48 by 84 pixels, Nokia 5110 display, Amazon), as shown in the inset of Fig. 1(b). We used a microcontroller to set different patterns on the display. The refresh rate of the current prototype is about 10 frames per second, limited by the clock of the microcontroller and the employed serial peripheral interface (SPI) bus. For each eye, we placed two displays in series to improve the extinction ratio. We used a 3D printer to print an eyeglass case to house the different components. For this prototype, all wires were connected to a laptop for image acquisition and pattern display. A microprocessor could replace the laptop in the future (refer to Section 4 for discussion).
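For readers interested in the data format, the sketch below packs a boolean pattern into the byte layout commonly used by the PCD8544 controller inside the Nokia 5110 display: 84 columns by 6 banks, with each byte covering 8 vertical pixels. The least-significant-bit-on-top ordering is an assumption about the controller's default addressing mode, not a statement of our exact firmware.

import numpy as np

def pack_pcd8544(pattern):
    """pattern: boolean array of shape (48, 84); returns the 504-byte frame."""
    assert pattern.shape == (48, 84)
    frame = bytearray()
    for bank in range(6):                      # 6 banks of 8 pixel rows each
        rows = pattern[bank * 8:(bank + 1) * 8, :]
        for col in range(84):
            byte = 0
            for bit in range(8):
                if rows[bit, col]:
                    byte |= (1 << bit)         # assumed: bit 0 = top pixel of the bank
            frame.append(byte)
    return bytes(frame)                        # sent to the display over SPI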

There are three operation modes of the LCD-based digital eyeglass.

Mode 1: We locate the bright sources and instantaneously reduce light transmission from them. Other parts of the display remain transparent. Depending on the moving speed of the bright sources, this mode may require a high rate of image acquisition and display refreshing, and the associated computation also needs to be time efficient. Nevertheless, state-of-the-art electronics can easily beat the human brain's reaction time.
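A minimal sketch of the Mode 1 loop is given below, assuming an OpenCV camera capture; compute_pattern and send_to_lcd stand in for the bright-source detection and the display transfer sketched earlier, and are hypothetical interfaces rather than our exact implementation.

import cv2

def run_mode1(compute_pattern, send_to_lcd):
    """compute_pattern: e.g. compute_lcd_pattern from the earlier sketch;
    send_to_lcd: wraps the packing and SPI transfer to the display."""
    cap = cv2.VideoCapture(0)                     # mini-camera on the eyeglass
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            send_to_lcd(compute_pattern(gray))    # refresh the display every frame
    finally:
        cap.release()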

Mode 2: The design can be further simplified if we focus on the application of nighttime driving, where we only need to block the headlamp light from the oncoming lane. We can preset two patterns on the display: one pattern blocks light from the entire oncoming lane, and the other is fully transparent. Based on the detected light signal from the oncoming lane, we switch between the two preset patterns. In this mode, the display refresh rate can be very low since we only need to toggle between two patterns. The camera may not be necessary either; we can, for example, angle a photodiode toward the oncoming lane to detect the signal.
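The control logic for this mode reduces to a threshold comparison, as in the sketch below; read_photodiode and send_to_lcd are hypothetical helpers wrapping the photodiode readout and the display transfer, and the lane geometry and threshold are illustrative values.

import time
import numpy as np

LANE_PATTERN = np.zeros((48, 84), dtype=bool)
LANE_PATTERN[:, :42] = True              # preset pattern covering the oncoming lane
CLEAR_PATTERN = np.zeros((48, 84), dtype=bool)
PHOTODIODE_THRESHOLD = 0.5               # assumed normalized photodiode reading

def run_mode2(read_photodiode, send_to_lcd):
    while True:
        if read_photodiode() > PHOTODIODE_THRESHOLD:
            send_to_lcd(LANE_PATTERN)    # headlamp detected: block the oncoming lane
        else:
            send_to_lcd(CLEAR_PATTERN)   # no glare: fully transparent
        time.sleep(0.1)                  # a low refresh rate is sufficient here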

Mode 3: An intermediate mode between Modes 1 and 2. In this case, we use the camera to detect the incident angles. We then 'add' the complementary dark spots to the display in real time without clearing the existing pattern. If no light source can be identified, the display is reset to transparent. The logic behind this mode can be explained as follows: during nighttime driving, the bright sources from the oncoming lane typically follow a simple and steady trajectory. Therefore, if we keep adding the corresponding dark spots to the existing pattern, we gradually form a pattern that blocks light transmission from the oncoming lane. If no oncoming car is detected, we then clear the display. Compared to Mode 1, the requirement on the display refresh rate is relaxed, as we aim to reduce light transmission from the entire oncoming lane. However, we note that, for Modes 2 and 3, if the entire lane is blocked by reducing transmittance, the visibility of potential hazards in that lane may be reduced, potentially increasing risk.
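The accumulate-and-clear behavior of Mode 3 can be summarized in a few lines, as sketched below; get_gray_frame, compute_pattern, and send_to_lcd are hypothetical helpers for frame capture, bright-spot detection, and display transfer.

import numpy as np

def run_mode3(get_gray_frame, compute_pattern, send_to_lcd):
    accumulated = np.zeros((48, 84), dtype=bool)   # current display pattern
    while True:
        spots = compute_pattern(get_gray_frame())
        if spots.any():
            accumulated |= spots       # add new dark spots without clearing old ones
        else:
            accumulated[:] = False     # no oncoming car detected: reset to transparent
        send_to_lcd(accumulated)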

Given the limited refresh rate of the employed display (10 fps), we focus on Mode 3 in this paper. Our prototype can easily be upgraded to Mode 1 when operated at a higher refresh rate (as a reference point, the refresh rate of a two-megapixel monitor is typically 60 fps).

3. Experiments and road test

Figure 2(a) shows the schematic of the experiment for testing the eyeglass concept. In order to measure the light reduction achieved by the display, we used a camera as the eye for image acquisition. Figure 2(b1) shows the image captured by the camera. By setting different patterns on the liquid crystal display, we can selectively reduce light transmission from different regions of the scene. In Fig. 2(b2), we set the display transparent and captured the reference image. In Fig. 2(b3), we set a dark pattern on the display (inset of Fig. 2(b1)) and captured the corresponding image. In this case, light transmission is reduced for the left part of the object (also refer to Media 1). This experiment verifies the working principle of the reported device.


Fig. 2 Experimental verification of the working principle of the reported eyeglass. (a) Schematic of the experiment. We used a camera to serve as an eye and capture images of the scene. (b) The captured images with and without a dark pattern set on the display. An 8-pixel-by-8-pixel dark pattern on the liquid crystal display results in ~80% reduction of light transmission. Also refer to Media 1.


In Fig. 3, we investigate the light-reduction percentage by setting different patterns on the display. Figure 3(b1) shows the intensity line traces of the captured images. We can see that an 8-pixel-by-8-pixel pattern on the display results in ~80% reduction of light transmission. In Fig. 3(b2), we plot the light-reduction percentage as a function of pattern size. The reported device can achieve a maximum of 84% light reduction. Further attenuation can be achieved by using a display with a higher extinction ratio or by using multiple displays in series. In Fig. 3(b3), we also characterized the angular dependence of the light-reduction percentage. We can see that the light-reduction percentage remains at ~80% within a 30-degree range of incident angles.
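For completeness, the light-reduction percentage can be computed from a reference image (transparent display) and a blocked image (dark pattern shown), as in the short sketch below; the region-of-interest convention is an assumption about the analysis rather than a description of our exact processing script.

import numpy as np

def light_reduction(ref_img, blocked_img, roi):
    """roi = (y0, y1, x0, x1), bounding the bright source in both images.
    Returns the percentage reduction of transmitted light."""
    y0, y1, x0, x1 = roi
    i_ref = ref_img[y0:y1, x0:x1].astype(float).sum()
    i_blk = blocked_img[y0:y1, x0:x1].astype(float).sum()
    return 100.0 * (1.0 - i_blk / i_ref)   # ~80% for an 8-by-8-pixel dark pattern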


Fig. 3 Characterization of the reported device. (a1)-(a4) Images captured with different patterns set on the display, with sizes ranging from 0 by 0 pixels to 12 by 12 pixels. (b1) Intensity line traces across the captured images. (b2) Light-reduction percentage as a function of the pattern size. The reported device is able to reduce light transmission from bright sources by ~80%. (b3) Light-reduction percentage as a function of the incident angle.


In Fig. 4, we placed a bright light source to simulate the headlamp of an oncoming car. Figures 4(a) and 4(b) show the comparison with and without the dark pattern on the display (also refer to Media 2). Figure 4(b) can also be related to the potential sniper-scope application. In Fig. 4(c), we demonstrate Mode 3 operation of the reported device: dark spots are continually added to the existing pattern until no light source is identified (also refer to Media 3). In Fig. 5, we demonstrate Mode 3 operation using the eyeglass prototype (also refer to Media 4 and Media 5).


Fig. 4 Demonstration of the reported device for reducing light transmission from bright light sources. (a) and (b) Captured images with and without the dark pattern shown on the display (Media 2). (b) The reported device may find applications in sniper scopes. (c) Mode 3 operation of the reported device (Media 3).



Fig. 5 Mode 3 operation using the glass prototype. We turned off the left display and used it as a reference. Also refer to Media 4 and Media 5.


We also performed a preliminary road test to validate the operation of the reported device. As shown in Fig. 6(a), we placed the setup at the driver side of a car. In this experiment, we intentionally turned off the left half of the display; therefore, dark patterns could only be shown on the right half of the display. Figure 6(b) shows the light transmission from the headlamp with (Fig. 6(b1)) and without the dark pattern (Fig. 6(b2)). We can see that light transmission from the headlamp increases when the oncoming car exits the dark-pattern region (also refer to Media 6). We note that the image acquisition process using a camera is different from the human visual response: we do not feel any discomfort when looking at overexposed images, yet we feel strong discomfort when exposed to bright sources at night. Therefore, the perception of Fig. 6 may differ from the actual experience of wearing the eyeglass and looking at the headlamps. To validate the effectiveness of the reported device, human factors need to be considered in our future studies.


Fig. 6 Preliminary road test. (a) We mounted the setup in a car and turned off the left half of the display to demonstrate the operation of the reported device. (b1) Light from the headlamps is reduced by showing a pattern on the display (the dark pattern is enclosed by the red dashed line). (b2) Light transmission from the headlamp increases when the oncoming car exits the dark-pattern region. Also refer to Media 6 for more details.


4. Discussion and conclusion

In summary, we developed a digital eyeglass by using an LCD for adaptive light modulation. There are several important topics worth discussing: 1) We use one camera to capture the image and recover the incident angles of the bright sources. In this process, we assume the incident angle is the same for both eyes. A simple calculation shows that, if a light source is placed 4 meters away from the driver, the difference in incident angles between the two eyes is less than 1 degree. In the reported device, the suppression view angle of an 8-pixel-by-8-pixel dark pattern is about 7 degrees, substantially larger than 1 degree. Therefore, our assumption regarding the incident angle is valid for objects that are at least 4 meters away from the observer. To further improve the result, we could use two cameras to capture two perspective images and recover the 3D positions of the light sources. 2) The employed display has a low extinction ratio; the measured maximum light reduction is only about 80%. A display with a higher fill factor can be used to improve the result. 3) Due to the presence of LCD polarizers, the reported eyeglass has a transmittance of ~30% in the transparent state. With better polarizers, the transmittance can be improved to 50%. 4) The current prototype uses a laptop to acquire and process images from the camera and control the displays. A mobile computing chip can be used to replace the laptop for better system integration.
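For clarity, the geometry behind the first point can be written out explicitly, assuming an interpupillary distance of roughly 65 mm (a typical adult value, used here only for illustration):

\[
\Delta\theta \;\approx\; \arctan\!\left(\frac{d_{\mathrm{IPD}}}{D}\right)
\;=\; \arctan\!\left(\frac{0.065\ \mathrm{m}}{4\ \mathrm{m}}\right)
\;\approx\; 0.93^{\circ} \;<\; 1^{\circ},
\]

which is well within the ~7-degree suppression view angle of an 8-pixel-by-8-pixel dark pattern.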

We envision several directions for future development. 1) The motion of headlamp sources typically follows a linear and steady trajectory along the highway. We can, therefore, perform trajectory prediction to shorten the response time of the device; this calculation can be carried out in parallel in the presence of multiple light sources. 2) Since we use a camera to capture images of the road, other safety features can be incorporated into the reported device, such as forward-collision warning and pedestrian-collision warning. 3) In recent years, smart glasses have drawn a great amount of attention in the consumer market; prominent examples include Google Glass, Epson Moverio, and Microsoft HoloLens. Since these smart glasses are all equipped with cameras, the reported device can be designed as an add-on component to these existing products. 4) The reported device can adjust the contrast of visual perception by selectively dimming different parts of the scene. By using an LCD with a smaller pixel size, it could improve visual contrast for visually impaired patients and elderly people with low vision.
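As an illustration of the first direction, a simple linear extrapolation over the recent display coordinates of a tracked headlamp could be used to darken its predicted next position ahead of time. The sketch below is only a schematic of this idea; the tracking itself (associating bright spots across frames) is not shown.

import numpy as np

def predict_next(history, lookahead=1):
    """history: list of (x, y) display coordinates from the last few frames.
    Returns the extrapolated (x, y) position 'lookahead' frames ahead."""
    if len(history) < 2:
        return history[-1] if history else None
    t = np.arange(len(history), dtype=float)
    xs = np.array([p[0] for p in history], dtype=float)
    ys = np.array([p[1] for p in history], dtype=float)
    # Fit a line to each coordinate versus time, then extrapolate forward.
    x_next = np.polyval(np.polyfit(t, xs, 1), t[-1] + lookahead)
    y_next = np.polyval(np.polyfit(t, ys, 1), t[-1] + lookahead)
    return int(round(x_next)), int(round(y_next))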

Acknowledgments

We thank Mr. Siyuan Dong for helping with some of the experiments. We are also grateful for discussions with Prof. Bahram Javidi.

References and links

1. A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin, “Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing,” ACM Trans. Graph. 26(3), 69 (2007). [CrossRef]  

2. A. Zomet and S. K. Nayar, “Lensless imaging with a controllable aperture,” in Computer Vision and Pattern Recognition, 2006 IEEE Computer Society Conference on, (IEEE, 2006), 339–346. [CrossRef]  

3. Y. Bando, B.-Y. Chen, and T. Nishita, “Extracting depth and matte using a color-filtered aperture,” in ACM Transactions on Graphics (TOG), (ACM, 2008), 134.

4. C.-K. Liang, T.-H. Lin, B.-Y. Wong, C. Liu, and H. H. Chen, “Programmable aperture photography: multiplexed light field acquisition,” ACM Trans. Graph. 27(3), 55 (2008). [CrossRef]  

5. K. Guo, Z. Bian, S. Dong, P. Nanda, Y. M. Wang, and G. Zheng, “Microscopy illumination engineering using a low-cost liquid crystal display,” Biomed. Opt. Express 6(2), 574–579 (2015). [PubMed]  

6. D. Reddy, J. Bai, and R. Ramamoorthi, “External mask based depth and light field camera,” in Computer Vision Workshops (ICCVW), 2013 IEEE International Conference on, (IEEE, 2013), 37–44. [CrossRef]

7. J. Bullough, N. Skinner, R. Pysar, L. Radetsky, A. Smith, and M. Rea, “Nighttime glare and driving performance: Research findings,” Technical Report, Department of Transportation (2008).

8. R. H. Hemion, “A preliminary cost-benefit study of headlight glare reduction,” Technical Report, Department of Transportation (1969).

Supplementary Material (6)

Media 1: MOV (3557 KB)     
Media 2: MOV (1104 KB)     
Media 3: MP4 (824 KB)     
Media 4: MP4 (3786 KB)     
Media 5: MP4 (6700 KB)     
Media 6: MOV (3497 KB)     
