
Line-defect calibration for line-scanning projection display

Open Access

Abstract

A method of line-defect calibration for line-scanning projection displays is developed to achieve acceptable display uniformity. A line-scanning display uses a line-modulating imaging device and a scanning device to construct a two-dimensional image. The inherent line defects of the imaging device and optical lenses are the most serious performance-degrading factors and must be overcome to reach a basic level of display uniformity. Since the human eye recognizes line defects very easily, a method that removes them completely is required. Most line imaging devices are diffractive optical devices that require a coherent light source. This requirement makes the calibration method of sequential single-pixel measurement and correction insufficient to remove the line defects distributed on the screen, due to optical crosstalk. In this report, we present a calibration method using a recursively converging algorithm that successfully transforms unacceptable line-defected images into a uniform display image.

©2009 Optical Society of America

1. Introduction

With the advent of the diffractive linear optical modulator [1,2,3], projection display markets using two-dimensional imaging devices such as DLP [4], LCD [5] and LCOS [6] have been challenged. These newly developed linear imaging devices provide a 2-D image with the help of a mechanical sweep of a mirror. Because the imaging device needs only a single line of pixels, the active imaging device is 10 times smaller than a standard 2-D imaging device, so that 10 times more chips can be produced on a single fabrication wafer. In addition, the number of active mirror pixels is 100 times smaller than for 2-D devices, which means that a far better yield per wafer can be obtained assuming the same fabrication level as achieved for 2-D imaging devices. However, despite the improved productivity of linear optical modulators, line-scanning display has raised further non-uniformity issues that have not been solved by fabrication technology [7]. The disadvantage is that small intensity defects that would generally be acceptable in a 2-D imaging device are noticeable and rather annoying to the human eye. A direct approach to removing defects is to fabricate chips with defect-less uniformity. The indirect approach is to create an intermediate LUT (Look-Up Table) for image data transformation to generate a uniform display. The direct approach requires a large investment in high-precision fabrication facilities to improve device uniformity, which would greatly increase production costs. The indirect approach needs calibration of the defective pixels but provides a reasonable way to keep production costs down. However, the latter approach using the calibration method of sequential single-pixel measurement and correction [7] poses a new difficulty in uniformity correction because of the inherent optical crosstalk in diffractive line-scanning display systems.
In this paper, we explain a line-defect calibration method based on a new recursively converging algorithm to overcome optical crosstalk and demonstrate the creation of a successful defect-free uniform image in a laser projection display.

2. Line-scanning display system

Samsung has developed the SOM (Spatial Optical Modulator), a new diffractive linear optical modulator and scanning optical module that can be embedded in a cellular phone for future portable displays. The implemented projection display system consists of line-scanning optics, an imaging device (SOM) and driving electronics.

2.1 Line-scanning optics and imaging device

The optical modulation scheme of the SOM is shown in Fig. 1(a). An individual pixel of the SOM can be optically modeled as a dynamic reflective phase grating. One pixel is composed of multiple grating periods; in Figs. 1(c) and 1(d), two periods make one pixel. Each period is composed of one top mirror (or ribbon) and one bottom mirror. When a monochromatic beam is vertically incident on the SOM, it is reflected from the top and bottom mirrors (the light travels a double path when it is reflected from the bottom mirror), and the reflected beams are diffracted into multiple spatial-harmonic orders. By controlling the position of the top mirror with respect to the bottom reflector, the diffracted energy can be redistributed between the 0th and 1st diffraction orders.

Fig. 1. Detailed schematics of the optics and imaging device. (a) Optical modulation scheme. (b) Principal scheme of line-scanning optics. (c) SOM device structure. (d) SOM displacement shape.

Figure 1(b) shows the schematic diagram of the line-scanning optics using the 0th-order diffracted beams. A laser illuminates the central mirror part of the SOM device through the illumination lens system. When the OPD (optical path difference) between the bottom and top mirrors is an odd multiple of λ/4, the beam is diffracted into the ±1st orders and blocked by a Schlieren stop located in the Fourier plane of the projection lens. When the OPD is a multiple of λ/2, the bottom and top mirrors act like a flat mirror, and the 0th order passes through the projection lens and is magnified onto the screen. The magnified line image on the screen is progressively scanned by a vibrating mechanical mirror to complete the 2-D image.

The moving structure of the SOM is shown in Fig. 1(c). The bridge structure, consisting of SixNy and Al layers, is actuated by a PZT (lead zirconate titanate) layer. When a voltage is applied to the PZT, horizontal shrinkage of the PZT material induces vertical motion of the bridged ribbon, as shown in Fig. 1(d). The gap-height is defined as the gap between the top mirror and the bottom reflector. Piezoelectric actuation of the individual grating micro-mirrors controls the gap-height to achieve a variable phase shift, providing diffractive modulation of the irradiating laser light. The resulting light intensity is expressed as follows:

$$I_n=\cos^2\!\left(\frac{2\pi s_n}{\lambda}\right)\tag{1}$$

where $I_n$ is the normalized light intensity of the n-th pixel, $s_n$ is the n-th pixel’s gap-height plus the displacement driven by the voltage, and λ is the wavelength of the laser. The light travels a double path when it is reflected from the bottom mirror. When $s_n$ is an even multiple of λ/4, the 0th-diffraction-order intensity reaches its maximum value; when $s_n$ is an odd multiple of λ/4, it reaches its minimum value.
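Equation (1) can be checked numerically; the 532 nm wavelength below is an assumed value for illustration, not one given in the text:

```python
import numpy as np

WAVELENGTH = 532e-9  # assumed green laser wavelength (m); illustrative only

def pixel_intensity(s_n, wavelength=WAVELENGTH):
    """Normalized 0th-order intensity of Eq. (1): I_n = cos^2(2*pi*s_n/lambda)."""
    return np.cos(2 * np.pi * s_n / wavelength) ** 2

# even multiples of lambda/4 give the maximum (white),
# odd multiples give the minimum (black)
assert np.isclose(pixel_intensity(2 * WAVELENGTH / 4), 1.0)
assert np.isclose(pixel_intensity(3 * WAVELENGTH / 4), 0.0)
```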

2.2 Driving electronics

The video data, in the form of RGB (Red, Green, Blue) gray levels, is converted to an SOM driving voltage that controls the gap-height [8]. An intermediate LUT (Look-Up Table) converts the RGB data of the original image to the SOM driving voltage. Initially, gray level 0 is set to 0 V, gray level 255 to 10 V, and intermediate gray levels are linearly interpolated to obtain the corresponding driving voltages.
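The initial linear LUT described above can be sketched in a few lines; the endpoints (0 gray → 0 V, 255 gray → 10 V) are from the text, and the array layout is an illustrative choice:

```python
import numpy as np

# Initial per-gray-level LUT: linear interpolation between the
# black voltage (0 V at gray 0) and the white voltage (10 V at gray 255).
gray = np.arange(256)
initial_lut = np.interp(gray, [0, 255], [0.0, 10.0])  # volts per gray level

assert initial_lut[0] == 0.0 and initial_lut[255] == 10.0
```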

3. Influence of line defects on display image quality

Fig. 2. Image simulation to see the influence of defects on display depending on the imaging device. (a) Original image, no defect. (b) Simulated point defects of two dimensional imaging device, with defect STD=2% and deviation limit <±4.5%. (c) Simulated line defects of a linear imaging device, with defect STD=2% and deviation limit <±4.5%. (d) Simulated line defects of a linear imaging device, with defect STD=1% and deviation limit <±2.5%.

Even though the line-scanning display system uses a simpler and smaller imaging device, there has been a bottleneck to reaching the production stage. A breakthrough is necessary to remove line defects that are too conspicuous and annoying to be acceptable to the human eye. There have always been point defects in 2-D displays that adopt a DLP, LCOS or LCD 2-D imaging device [9]. However, point defects in two-dimensional imaging devices whose light intensity differs only slightly from the average intensity of the surrounding pixels are not noticeable to the normal human eye. In our visual experiment, point defects with less than ±4.5% intensity difference from the average are unrecognizable to the normal human eye. In contrast, line defects in line-scanning displays with the same intensity deviations are readily noticeable. That is to say, the specification for line defects must be stricter than that for point defects. According to our visual experiments, intensity differences caused by line defects should deviate by less than ±2.5% from the average for a commercially acceptable product. Figure 2(b) shows an image simulated to have point defects with a standard deviation (STD) of 2% and a deviation limit of ±4.5% randomly distributed in two dimensions, generated from the original image shown in Fig. 2(a). To compare the influence of line defects, the same STD and deviation limit are applied to the line profile, as shown in Fig. 2(c).

As can easily be seen in Fig. 2(c), the same defects appear much bigger and more noticeable and are visually uncomfortable. To find the acceptable level of line defects, an STD of 1% and a deviation limit of ±2.5% are applied to the same image; in this case, almost no line defects are noticeable to the human eye. We therefore decided that the line-defect level in SOM line-scanning projection displays should be decreased to meet a specification of STD ≤ 1.0% and deviation from the average value within ±2.5%.
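The defect simulation of Fig. 2 can be sketched as multiplicative intensity errors with a given STD, clipped to a deviation limit. The uniform test image and the clipped-Gaussian error model are our own illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def defect_gains(shape, std, limit):
    """Random gain field with the given STD, clipped to +/- the deviation limit."""
    return 1.0 + np.clip(rng.normal(0.0, std, size=shape), -limit, limit)

h, w = 480, 640
image = np.full((h, w), 0.5)  # stand-in for the original image of Fig. 2(a)

# (b) point defects: an independent error at every pixel (2-D imaging device)
point_defected = image * defect_gains((h, w), 0.02, 0.045)

# (c) line defects: one error per scan line, repeated across the whole row
line_defected = image * defect_gains((h, 1), 0.02, 0.045)

# (d) tightened line-defect spec: STD = 1%, deviation limit = +/-2.5%
line_defected_spec = image * defect_gains((h, 1), 0.01, 0.025)
```

Broadcasting the per-row gains across the width is what makes the same statistical error so much more visible: the deviation is perfectly correlated along each line, exactly as in a line-scanning device.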

4. Sources of defects in the image

There are two main sources of defects in the line-scanning optical system: the imaging device and the optical lenses. Figure 3(a) shows the first defect source, which originates from ribbon gap-height differences that result in differences of diffracted light intensity. The color differences in the SOM active mirrors represent the differences in gap-height. Fabricating standing ribbons with a 1% STD in intensity requires a gap-height STD of ~0.7 nm, which is too costly a specification for micro-fabrication to meet. Considering the ribbon layer thickness and stress distribution within a chip, a gap-height deviation of ±5% is acceptable for our fabrication capability. Therefore, gap-height adjustment with different driving voltages for each pixel is a low-cost choice for mass production. Figure 3(b) shows the second source of defects, which comes from the scratches and digs inherent in lens fabrication [10,11,12]. Aspheric lenses in particular show more scratches, since their shape is formed by rougher fabrication than that of spherical lenses. When a beam passes through a lens, each scratch creates a spherical wave-front with a very small amplitude. With coherent light, the transmitted beam and these small-amplitude spherical wave-fronts interfere with each other and create visible fringes in the image plane. These interference fringes from the lenses remain even after the imaging device’s gap-height defects are calibrated. While research is ongoing to improve defects in optical lenses [13], defect calibration is a cheaper and better way of creating a defect-less uniform screen with minimal production costs.

Fig. 3. Image defect sources from the SOM device and optical lens. (a) Defects in SOM device. (b) Defects in optical lens.

5. A recursively converging calibration algorithm

A way to correct defects is sequential detection of individual pixel intensities and creation of a LUT for image conversion [7]. However, this simple calibration method suffers from a disturbing noise source: the light intensity detected for a single pixel differs from that when all pixels are moving together, as in a movie, which makes it hard to find the correct voltage to drive the individual ribbons and results in an insufficiently uniform display. The following subsections explain the previous calibration method [7] and the origins of the crosstalk noise. To overcome the crosstalk noise, we present the recursively converging logic of line-defect-free calibration.

Fig. 4. Optical scheme of initial black and white calibration: (a) Pixel actuation during calibration. (b) Projective optics schematic; only beams passing through the aperture are projected on the screen and detected by a photodiode. (c) Shift of the calibration curve due to the crosstalk effect; the blue solid line is the true curve and the red and green dotted lines are the shifted curves. Vmin and Vmax are the voltages of the black and white intensity, respectively. Nribbon=2.

5.1 Single pixel sweeping method of calibration

To find the voltages for the gray levels, the target pixel ribbon is swept from low voltage to high voltage as shown in Fig. 4(a). All diffracted SOM light beams that pass through the Schlieren stop are projected on the screen by the objective lens as shown in Fig. 4(b), and part of the light is gathered by a photodiode. The black and white voltages can easily be found from the intensity-vs-voltage graph by curve-fitting out the electric noise factors [7]. However, this method does not provide the correct black and white voltages because of the optical crosstalk inherent in the imaging system. The white-image voltage measured by single-pixel sweeping does not correspond to the white-image voltage when neighboring pixels are displaced together, due to the optical crosstalk between neighboring pixels. The voltage for a specific intensity is actually shifted up or down from its true position, as shown in Fig. 4(c). Without a compensating algorithm in the calibration, it would be impossible to obtain good image quality in an SOM-based laser projector system.

5.2 Optical crosstalk

The deterioration of the optical calibration by crosstalk arises from the coherent characteristics of lasers in the diffractive projection system. During image projection and calibration, only part of the energy of the diffracted beam (the working diffractive order) is used, as shown in Fig. 4(b). The light-beam selection for image projection and calibration is made by the aperture of the Schlieren stop in the Fourier plane of the projection lens. The numerical aperture of the beams passing through the Schlieren stop can be calculated as NA = d/(2F), where d is the aperture width and F is the focal length of the projection lens. All light passing through the aperture is collected by a photodiode. To find the voltages corresponding to the ribbon positions with the minimum and maximum optical signal, we actuate one pixel. The field reflected from the n-th pixel of the SOM can be written in the object plane as follows:

$$E=E_0\exp(2jkh)\quad\text{from the top mirror,}\tag{2}$$
$$E=E_0\exp[2jk(h-s_n)]\quad\text{from the bottom reflector}\tag{3}$$

where E is the reflected field, $E_0$ is the incident field amplitude, j is the imaginary unit, k is the wavenumber, h is the fixed reference plane, and $s_n$ is the gap-height of the n-th pixel. After diffraction on the SOM, the light beam is projected onto a photo-detector by the objective lens. In the focal (Fourier) plane, a Schlieren stop rejects all diffraction orders except the working diffractive order. All light that passes through the aperture is detected by a photodiode, and the maximum and minimum photodiode signals are used to find the actuation voltages for black and white images. To calculate the photodiode signal, one should calculate the Fourier spatial spectrum of the amplitude of the SOM-diffracted field, truncate the spatial spectrum to account for the beam truncation by the Schlieren stop, and finally calculate the energy passing through the aperture. In this case (using a spatial Fourier filter), it is more convenient to calculate the photodiode signal by integrating the electromagnetic energy of the passed beam in the spatial Fourier plane using Parseval’s theorem. We assume that the 0th diffractive order is used for calibration and image projection and that the Schlieren stop has an aperture in the area of the 0th-order spot of light. Only diffraction orders having light spots close to this area have a considerable effect on the calibration; therefore, only the 0th and ±1st diffractive orders are taken into account below, and the higher diffraction orders are ignored.
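The Parseval step used above can be checked with a toy 1-D field: the beam energy integrated in the object plane equals the energy integrated in the spatial-frequency plane, which is why the photodiode signal can be computed entirely in the Fourier plane. The Gaussian test field is an arbitrary illustration:

```python
import numpy as np

N = 4096
x = np.linspace(-1.0, 1.0, N, endpoint=False)
dx = x[1] - x[0]
f = np.exp(-50 * x**2) * np.exp(1j * 40 * x)   # arbitrary complex test field

F = np.fft.fft(f) * dx          # continuous-Fourier-transform approximation
df = 1.0 / (N * dx)             # frequency-bin width

energy_space = np.sum(np.abs(f) ** 2) * dx   # object-plane energy
energy_freq = np.sum(np.abs(F) ** 2) * df    # Fourier-plane energy

assert np.isclose(energy_space, energy_freq)  # Parseval's theorem
```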

The SOM in actuating mode is not a periodic structure, because the ribbons of different pixels have different displacements; hence, the field in the Fourier plane is a continuous function. It is well known that the field in the Fourier (focal) plane can be calculated by Fourier transforming the field distribution in the object plane [14]. In our case, the field in the Fourier plane can be calculated by Fourier transforming the function given by Eqs. (2) and (3) over the whole SOM area (below, we ignore the scale factor of the beam size in the Fourier plane, since it is not important here). We obtained simple formulae for the field in the Fourier plane by separating the SOM area into the individual pixel areas in order to calculate the Fourier transform:

$$F(\alpha)=\sum_{n=1}^{N_{pixel}}F_n(s_n)$$
$$=a\sum_{n=1}^{N_{pixel}}2\exp(jks_n)\,\frac{\sin(\alpha T/4)\,\sin(\alpha N_{ribbon}T/2)}{\alpha\,\sin(\alpha T/2)}\,\exp\!\left[j\alpha\!\left(nN_{ribbon}T-\frac{N_{pixel}N_{ribbon}T}{2}\right)\right]\left[\cos(ks_n)\cos(\alpha T/4)-\sin(ks_n)\sin(\alpha T/4)\right]\tag{4}$$

where F(α) is the field diffracted by the SOM in the spatial frequency domain, α is the spatial frequency, $F_n(s_n)$ is the n-th pixel’s field in the spatial frequency domain, a is the field amplitude, $N_{pixel}$ is the number of pixels in the SOM, $N_{ribbon}$ is the number of grating pairs in one pixel (each grating pair has a 1:1 land/groove width ratio), and T is the period of one grating pair (see Fig. 4(a) for the definition of T). The last factor in Eq. (4) has two parts, $\cos(ks_n)\cos(\alpha T/4)$ and $-\sin(ks_n)\sin(\alpha T/4)$. The first corresponds to the 0th diffractive order, with a maximum at zero spatial frequency, and the second to the ±1st diffractive orders, with amplitude maxima at spatial frequencies ±kλ/T. The field amplitude in the spatial Fourier plane is the sum of the field amplitudes of the separate pixels, each having its own phase and amplitude. Due to interference between the fields of different pixels, one pixel’s signal shape depends on the positions of all other pixels. It is not difficult to see that, because of the factor $\exp[j\alpha(nN_{ribbon}T-N_{pixel}N_{ribbon}T/2)]$, interference between the fields of far-apart pixels produces a large number of fringes in the spatial Fourier domain (the domain of integration). After integrating the energy over the spatial-frequency interval corresponding to the aperture size, the interference effect between far-apart pixels becomes negligibly small because it averages over a large number of fringes. Only close pixels influence the signal shape of a moving pixel, because the interference of their fields with the actuated pixel’s field creates only a few fringes in the interval of integration; integration over the aperture area in the Fourier plane is therefore sensitive to those fringe amplitudes and positions, which change with the actuation positions of the moving pixels.
To find the degree of influence of optical crosstalk, we consider the case when all pixels are shifted a distance $s_0$ from the white level, and we find the shift Δs of the actuated pixel’s signal curve for each pixel-shift position. For this specific case, Eq. (4) simplifies to the following formula:

$$F(\alpha)=\exp(jks_0)\,\frac{\sin(\alpha T/4)\,\sin(\alpha N_{pixel}N_{ribbon}T/2)}{\alpha\,\sin(\alpha T/2)}\left[\cos(ks_0)\cos(\alpha T/4)-\sin(ks_0)\sin(\alpha T/4)\right]$$
$$+\exp(jks)\,\frac{\sin(\alpha T/4)\,\sin(\alpha N_{ribbon}T/2)}{\alpha\,\sin(\alpha T/2)}\left[\cos(ks)\cos(\alpha T/4)-\sin(ks)\sin(\alpha T/4)\right]$$
$$-\exp(jks_0)\,\frac{\sin(\alpha T/4)\,\sin(\alpha N_{ribbon}T/2)}{\alpha\,\sin(\alpha T/2)}\left[\cos(ks_0)\cos(\alpha T/4)-\sin(ks_0)\sin(\alpha T/4)\right]\tag{5}$$

which is obtained by using a simple transformation:

$$F(\alpha)=\sum_{n=1}^{N_{pixel}/2-1}F_n(s_0)+F_{N_{pixel}/2}(s)+\sum_{n=N_{pixel}/2+1}^{N_{pixel}}F_n(s_0)=\sum_{n=1}^{N_{pixel}}F_n(s_0)+F_{N_{pixel}/2}(s)-F_{N_{pixel}/2}(s_0)\tag{6}$$

We limit our investigation to the case when the aperture of the Schlieren stop is situated symmetrically (the center of the aperture exactly coincides with the center of the 0th diffractive order). In this case, the terms of Eq. (5) corresponding to the 0th diffractive order have even symmetry and those corresponding to the ±1st orders have odd symmetry. Therefore, after integration in the Fourier plane for the detector-signal calculation, the energies of the ±1st and 0th orders are separated, meaning that the photodiode signal is the sum of the pure 0th-order signal and the residual ±1st-order signal. Hence, without undermining the accuracy of the crosstalk calculation, we can write the photodiode signal as follows:

$$I(s,s_0)=a\int_{-k\,\mathrm{NA}}^{k\,\mathrm{NA}}F(\alpha)F^*(\alpha)\,d\alpha\tag{7}$$
$$=a\Big(I_0\cos^2(ks_0)+I_1\sin^2(ks_0)+I_2\cos^2(ks)+2(I_3-I_2)\cos(ks_0)\cos(k(s-s_0))\cos(ks)$$
$$+I_4\big(\sin^2(ks)-2\sin(ks_0)\cos(k(s-s_0))\sin(ks)\big)\Big)\tag{8}$$

where the terms with $I_0$ and $I_1$ represent the constant background signal level due to contributions from the 0th and residual ±1st diffractive orders from all pixels, and

$$I_2=\int_{-k\,\mathrm{NA}}^{k\,\mathrm{NA}}\left(\frac{\sin(\alpha T/4)\,\sin(\alpha N_{ribbon}T/2)}{\alpha\,\sin(\alpha T/2)}\cos(\alpha T/4)\right)^{2}d\alpha$$
$$I_3=\int_{-k\,\mathrm{NA}}^{k\,\mathrm{NA}}\left(\frac{\sin(\alpha T/4)\,\sin(\alpha N_{pixel}N_{ribbon}T/2)}{\alpha\,\sin(\alpha T/2)}\right)^{2}\cos^2(\alpha T/4)\,d\alpha$$
$$I_4=\int_{-k\,\mathrm{NA}}^{k\,\mathrm{NA}}\left(\frac{\sin(\alpha T/4)\,\sin(\alpha N_{ribbon}T/2)}{\alpha\,\sin(\alpha T/2)}\sin(\alpha T/4)\right)^{2}d\alpha$$

Here NA is the numerical aperture of the beam passing through the Schlieren stop (NA = d/(2F)). The background-signal terms $I_0$ and $I_1$ in Eq. (8) are constant during one pixel’s actuation and therefore contribute neither to the change in photodiode signal nor to the gray-scale-level shift due to optical crosstalk. In Eq. (8), the $I_2$ and $I_4$ terms give the contributions to the signal from the actuated pixel in the 0th and residual ±1st diffractive orders, respectively (the signal of one pixel without background from other pixels). The term $2(I_3-I_2)\cos(ks_0)\cos(k(s-s_0))\cos(ks)$ gives the photodiode signal due to the fringes caused by interference between the actuated pixel’s 0th-order field and the 0th-order background field. The term $-2I_4\sin(ks_0)\cos(k(s-s_0))\sin(ks)$ gives the photodiode-signal change due to the fringes caused by interference between the actuated pixel’s residual ±1st-order fields and the ±1st-order background fields. After a simple transformation of Eq. (8), the optical signal can be written as follows:

$$I(s,s_0)=C_0+C_1\cos^2(ks+\varphi)\tag{9}$$

where $C_0$, $C_1$ and φ are constants depending only on $s_0$ (with no dependence on s). From Eq. (9) it follows that the photo-detector signal always has the same shape and period (as a function of the actuated pixel’s displacement); hence, crosstalk simply shifts the signal curve by some constant value of pixel displacement. Our numerical simulation has shown that, in spite of the small amplitude of the residual ±1st-order signals, the fringe signal in the ±1st orders strongly influences the gray-scale-level shift arising from optical crosstalk in the case of relatively large NA (NA/(λ/T) > 0.4; in this case, however, the shift itself is small). The fringe signal in the ±1st orders, contrary to the other terms, has a sufficiently different dependence on pixel displacement (the minimum and maximum curve positions are shifted) than the pure 0th-order calibration signals.

To find the shift in the gray scale, we need only find the shift in the white level, which corresponds to the maximum signal. Its location follows from the local-extremum condition, the zero of the derivative. From Eq. (7), the condition on one pixel’s displacement that defines the signal-curve maximum is

$$\frac{d}{ds}I(s,s_0)=2ak\Big[-I_2\cos(ks)\sin(ks)-(I_3-I_2)\cos(ks_0)\big(\sin(k(s-s_0))\cos(ks)+\cos(k(s-s_0))\sin(ks)\big)$$
$$+I_4\big(\sin(ks)\cos(ks)-\sin(ks_0)\cos(ks)\cos(k(s-s_0))+\sin(ks_0)\sin(ks)\sin(k(s-s_0))\big)\Big]=0\tag{10}$$

From Eq. (10), it follows that the gray-scale shift (Δs) is a periodic odd function of $s_0$ with period λ/2.

Fig. 5. Dependence of gray scale shift of calibration pixel on the shift in positions of all other pixels due to optical crosstalk for two different NA of the Schlieren stop and for two different pixel structures: (a) SOM having 2 grating pairs in one pixel; (b) SOM having 1 grating pair in one pixel.

Figure 5 shows the dependence of the shift of the calibration curve from its true position on the initial displacement of all other pixels, for an SOM having two grating pairs in one pixel and an SOM having one grating pair in one pixel. The simulation was made for two different hole widths in the Fourier plane, corresponding to transmitted-beam NAs of NA/(λ/T)=0.25 and NA/(λ/T)=0.5. From the curves in Fig. 5, it is clear that the level of crosstalk increases as the number of grating pairs in one pixel decreases and as the aperture width (NA) decreases. This is an expected result, since a smaller hole decreases the integration area in the Fourier plane and therefore the degree of averaging over fringes, while a smaller number of ribbons increases the fringe size in the Fourier plane; since the area of integration in the Fourier plane is fixed, this also decreases the average number of fringes. As one can see from Fig. 5(b), for the SOM with one grating pair in one pixel, optical crosstalk can cause a large shift in the calibration curves, with Δs/(λ/4) up to 0.175, introducing a large error into the calibration system. For an SOM with two grating pairs in one pixel, the optical crosstalk is not as large; however, it is still sufficient to induce visible image deterioration.

It should be noted that we limited our simulation of optical crosstalk to the case where all pixels are shifted together by some distance from the black level. Because the crosstalk shift Δs is an odd function of $s_0$ and only neighboring pixels are important, we expect this case to produce close to the maximum crosstalk effect, so the simulation data above give close to the maximum value of calibration deterioration. From these data it follows that crosstalk can cause significant image deterioration, and a special calibration scheme is needed to avoid problems with image-quality enhancement.
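The crosstalk simulation behind Fig. 5 can be sketched numerically: sum the per-pixel fields of Eq. (4) with all pixels at a common displacement s0 and one actuated pixel at s, integrate the aperture-truncated energy in the Fourier plane (the Parseval step), and track where the actuated pixel's signal maximum lands. All parameter values below (wavelength, period, reduced pixel count) are our own illustrative assumptions, not the actual SOM figures:

```python
import numpy as np

lam = 532e-9                # wavelength (assumed)
T = 2 * lam                 # grating-pair period (assumed)
k = 2 * np.pi / lam
N_pixel = 64                # reduced pixel count to keep the sums cheap
N_ribbon = 2                # grating pairs per pixel, as in Fig. 5(a)
NA = 0.25 * (lam / T)       # aperture half-width: NA/(lambda/T) = 0.25

# even point count keeps alpha = 0 (a removable 0/0 of the envelope) off the grid
alpha = np.linspace(-k * NA, k * NA, 2000)

def pixel_field(n, s):
    """F_n(s): one pixel's field in the spatial-frequency domain (Eq. (4))."""
    env = (np.sin(alpha * T / 4) * np.sin(alpha * N_ribbon * T / 2)
           / (alpha * np.sin(alpha * T / 2)))
    phase = np.exp(1j * alpha * (n * N_ribbon * T - N_pixel * N_ribbon * T / 2))
    mod = (np.cos(k * s) * np.cos(alpha * T / 4)
           - np.sin(k * s) * np.sin(alpha * T / 4))
    return 2 * np.exp(1j * k * s) * env * phase * mod

def white_shift(s0):
    """Shift of the actuated pixel's signal maximum from its true position s = 0
    when every other pixel sits at displacement s0 (cf. Fig. 5)."""
    m = N_pixel // 2
    background = sum(pixel_field(n, s0) for n in range(N_pixel) if n != m)
    s_grid = np.linspace(-0.2 * lam, 0.2 * lam, 401)
    d_alpha = alpha[1] - alpha[0]
    # photodiode signal = energy passed by the aperture (Parseval integration)
    sig = [np.sum(np.abs(background + pixel_field(m, s)) ** 2) * d_alpha
           for s in s_grid]
    return s_grid[int(np.argmax(sig))]

print(white_shift(0.1 * lam) / (lam / 4))   # gray-scale shift in lambda/4 units
```

By the odd symmetry that follows from Eq. (10), the shifts returned for s0 and −s0 should cancel, which gives a quick sanity check on the implementation.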

5.3 A recursively converging calibration algorithm in line-scanning display

To overcome the optical crosstalk problems explained above, we developed a recursively converging algorithm to find the correct voltage for each pixel, as described in Fig. 6. The algorithm moves the target pixel’s ribbons to find the gray voltages for the black, white and intermediate intensities after the neighboring ribbons are set to similar light intensities with the initial LUT. After finding the full gray voltages of one pixel, we search for the full gray voltages of the next neighboring pixel while the other pixels are held at similar light intensities driven by the most recently found LUTs. To reduce the total measurement and calculation time, we move all pixels simultaneously with the initial LUT and calculate the new LUT for all pixels together, as shown in the flow diagram of Fig. 6(a). As this process is repeatedly cycled, the voltage LUT of each pixel converges and every pixel shows the same intensity level. When the convergence measure (|new LUT − last LUT|) is within an acceptable level (<0.5%), the final calibration LUT is used to transform all video images, accomplishing a defect-less uniform display. Figure 6(b) shows an example of the SOM intensity response vs. voltage, and Fig. 6(c) shows an example of the initial and final LUTs when calibration is finished.
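The recursive loop of Fig. 6(a) can be sketched for a single gray level. The simulated plant below (per-pixel gap errors plus a 5% neighbor coupling standing in for optical crosstalk) and the fixed-gain update rule are our own illustrative assumptions, not the paper's hardware or update law; the essential point it demonstrates is that every measurement is taken with all pixels driven together, so the crosstalk is folded into the converging LUT:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 480                                   # pixels in one line (from the text)
lam = 1.0                                 # work in units of the wavelength
offset = rng.normal(0.0, 0.005, N) * lam  # per-pixel gap-height errors (toy)

def measure(v):
    """Simulated intensities when ALL pixels are driven together, so the
    neighbor coupling (standing in for crosstalk) affects every measurement."""
    s = v + offset
    s = s + 0.05 * (np.roll(s, 1) + np.roll(s, -1)) / 2.0  # toy crosstalk
    return np.cos(2 * np.pi * s / lam) ** 2                # Eq. (1)

target = 0.5                     # mid-gray intensity to calibrate
v = np.full(N, lam / 8)          # starting guess from the initial linear LUT
for _ in range(100):
    I = measure(v)
    step = (I - target) * lam / (2 * np.pi)   # fixed-gain Newton-like update
    v = v + step
    if np.max(np.abs(step)) < 0.005 * lam:    # <0.5% convergence criterion
        break
# every pixel now shows the target intensity despite gap errors and crosstalk
```

Because the correction is re-measured with all pixels moving together on each cycle, the crosstalk contribution is absorbed into the converged voltages rather than corrupting them, which is the core idea of the recursively converging calibration.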

Fig. 6. Description of calibration process. (a) Flow diagram of recursive converging calibration algorithm. (b) Example of pixel intensity response vs. input drive voltage. (c) Example of initial and final LUT when calibration is finished.

6. Experimental results and discussion

To calibrate the line defects in the implemented projection display, an experimental setup was prepared. The initial display, with many line defects, is transformed into a uniform display by applying the newly acquired LUT (Look-Up Table) to the input image data.

The volume of the projection optical module is 13 cc, including the SOM chip and optical lens. The number of SOM pixels is 480 and the horizontal mirror scanning provides 640-pixel resolution. The total screen resolution is VGA (640×480) with an aspect ratio of 4:3 and diagonal length of 10 inches.

6.1 Experimental setup

Figure 7 shows the schematics of the measurement setup. In the center of the projected screen, a high-density array of photodiodes is installed to obtain the light intensity of each pixel. Three thousand photodiodes are assembled in a vertical line, giving about 6 times the resolution of the imaging device’s 480 pixels. For high-speed data transfer from the photodiode sensor to the PC, a National Instruments NI-6115 data acquisition board is used. The calibration algorithm is implemented as a single software package in Visual C++.

First, a test gray level image from the PC is transferred to the electronics circuit, which then drives the ribbon via the base LUT.

Second, the projected image driven by the electronics is detected by the high density photodiode array, and each pixel intensity datum is transferred to the PC memory through the NI-6115 DAQ board.

Last, the software in the PC calculates the corresponding new LUT based on the measured intensity data, which becomes the new base LUT for the next test image.

These three steps are repeated until an acceptable level of recursive convergence of the uniformity data is reached. The total calibration time for 480 pixels is less than 2 minutes.

Fig. 7. Schematic diagram of the experimental setup.

6.2 Vertical line intensity profile

Before and after calibration, the vertical line profile was measured by the photodiode array to confirm the effect of calibration, as shown in Fig. 8. Figure 8(a) shows the vertical line profile before calibration; the intensity is normalized to an average value of 1.0. Figure 8(b) is the intensity histogram of Fig. 8(a). Owing to the imaging device and optical lens defects, there are large intensity fluctuations: the standard deviation of the line defects is 5.9% and the deviation is ±17.9%. To remove the line defects, the calibration sequence described above was applied and the vertical line profile re-measured. As shown in Fig. 8(c), the intensity profile is now flattened. Figure 8(d) shows the new intensity histogram: the standard deviation of the line defects improved from 5.9% to 0.45% and the deviation from ±17.9% to ±1.3%. The line-defect specification determined from the simulated images in Fig. 2 is thus met.

Fig. 8. Vertical line intensity profile difference for 255 gray level (white color) input image before and after calibration. (a) Vertical line intensity profile before calibration. (b) Intensity histogram of (a), standard deviation of line defects=5.9%, the deviation of line defects=±17.9%. (c) Vertical line intensity profile after calibration. (d) Intensity histogram of (c), standard deviation of line defects=0.4%, the deviation of line defects=±1.3%.

6.3 Calibrated projection image quality

Upon finishing the calibration sequence and confirming the vertical intensity line profile, the LUT for input-image gray conversion is established. Using this LUT, all input RGB gray levels of a video image are converted to the corresponding driving voltages, and a uniform display is produced, as shown in Fig. 9 (captured by a CCD camera). Figure 9(a) shows the projection display image having the intensity line profile of Fig. 8(a). All fluctuations in the intensity profile are sources of horizontal line defects; no one would accept Fig. 9(a) as a viewable display. By applying the calibration LUT, the defected image is transformed into Fig. 9(b), which has the vertical intensity profile of Fig. 8(c); the line defects are no longer visible. The enhanced display image quality in Fig. 9 clearly demonstrates the effectiveness of the developed calibration algorithm and system.

 figure: Fig. 9.

Fig. 9. Display image quality difference before and after the calibration, as captured by a CCD camera. (a) Display image before calibration. (b) Display image after calibration.

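The LUT conversion described above amounts to a per-pixel table lookup at display time. The sketch below assumes one LUT per modulator pixel (one image row in a line-scanning geometry) indexed by the 0–255 input gray level; the function name and indexing convention are assumptions for illustration, and the real system maps gray levels to driver-IC voltages per color channel.

```python
def apply_calibration_lut(image, lut):
    """Convert input gray levels to calibrated drive values.

    image[y][x]: input gray level (0..255) at row y, column x.
    lut[y][g]:   calibrated drive value for modulator pixel y at gray g,
                 as established by the calibration sequence.
    Each image row maps to one physical modulator pixel, so the row
    index selects which per-pixel LUT is applied.
    """
    return [[lut[y][g] for g in row] for y, row in enumerate(image)]

# Illustrative use with two toy per-pixel LUTs:
lut = [[2 * g for g in range(256)],      # pixel 0 needs twice the drive
       [g for g in range(256)]]          # pixel 1 is already on target
out = apply_calibration_lut([[0, 10], [255, 1]], lut)
```

Because the table is precomputed, the runtime cost per pixel is a single indexed read, which is what makes the indirect (LUT-based) approach cheap compared with fabricating defect-free devices.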

7. Summary

In conclusion, we have developed a calibration algorithm for line-scanning displays and implemented a defect-free projection display system. The algorithm uses a recursively converging method to overcome the noise introduced by optical crosstalk. With this proven capability of producing a defect-free uniform display, we believe that commercial laser line-scanning display systems will soon reach mobile phone customers.
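The recursively converging idea can be illustrated with a toy closed-loop sketch: because crosstalk couples neighboring pixels, a single measure-and-correct pass is insufficient, so the loop repeats until the whole profile settles. This is only a schematic analogue of the paper's algorithm (Fig. 6(a)); the proportional-correction update, tolerance, and function signatures are assumptions, and the real system builds a full per-gray-level LUT rather than a single drive value per pixel.

```python
def recursive_calibration(measure, drive, pixels,
                          target=1.0, tol=0.013, max_iter=50, gain=0.5):
    """Iteratively adjust each pixel's drive value until all measured
    intensities converge to `target` within `tol` (toy sketch)."""
    v = {p: 1.0 for p in pixels}            # normalized drive values
    for _ in range(max_iter):
        errors = {}
        for p in pixels:
            drive(p, v[p])                  # apply current drive value
            errors[p] = measure(p) - target # residual after crosstalk
        if max(abs(e) for e in errors.values()) < tol:
            return v                        # whole profile has converged
        for p in pixels:
            v[p] -= gain * errors[p]        # proportional correction step
    return v
```

The key property is that convergence is judged on the entire line profile at once, so corrections that shift neighboring pixels through crosstalk are themselves corrected on the next pass.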

Acknowledgment

We credit FreeDigitalPhotos.net as the original image source of Fig. 2 and Fig. 9.

References and links

1. J. I. Trisnadi, C. B. Carlisle, and R. Monteverde, “Overview and applications of Grating Light Valve based optical write engines for high-speed digital imaging,” Proc. SPIE 5348, 52–64 (2004). [CrossRef]  

2. S. K. Yun, J. Song, T.-W. Lee, I. Yeo, Y. Choi, Y. Lee, S. An, K. Han, Y. Victor, H.-W. Park, C. Park, H. Kim, J. Yang, J. Cheong, S. Ryu, K. Oh, H. Yang, Y. Hong, S. Hong, S. Yoon, J. Jang, J. Kyoung, O. Lim, C. Kim, A. Lapchuk, S. Ihar, S. Lee, S. Kim, Y. Hwang, K. Woo, S. Shin, J. Kang, and D.-H. Park, “Spatial Optical Modulator (SOM): Samsung’s Light Modulator for the Next Generation Laser Display,” IMID/IDMC ’06 DIGEST (Proceeding of Society for Information Display - SID. August, 2006), 29-1, 551–555.

3. M. W. Kowarz, J. C. Brazas, and J. G. Phalen, “Conformal Grating Electromechanical System (GEMS) for High-Speed Digital Light Modulation,” IEEE 15th Int. MEMS Conf. Digest, 568–573 (2002).

4. L. A. Yoder, “An Introduction to the Digital Light Processing Technology,” (Texas Instruments). http://dlp.com/tech/what.aspx.

5. B. T. Teipen and D. L. MacFarlane, “Liquid-crystal-display projector-based modulation transfer function measurements of charge-coupled-device video camera systems,” Appl. Opt. 39, 515–525 (2000). [CrossRef]  

6. S. Lee, M. Sullivan, C. Mao, and K. M. Johnson, “High-contrast, fast-switching liquid-crystal-on-silicon microdisplay with a frame buffer pixel array,” Opt. Lett. 29, 751–753 (2004). [CrossRef]   [PubMed]  

7. R. W. Corrigan, D. T. Amm, P. A. Alioshin, B. Staker, D. A. LeHoty, K. P. Gross, and B. R. Lang, “Calibration of a Scanned Linear Grating Light Valve Projection System,” in SID 99 Digest (Society for Information Display, San Jose, Calif., 1999), 200–223 (1999).

8. J. Kang, J. Kim, S. Kim, J. Song, O. Kyong, Y. Lee, C. Park, K. Kwon, W. Choi, S. Yun, I. Yeo, K. Han, T. Kim, and S. Park, “10-bit Driver IC Using 3-bit DAC Embedded Operational Amplifier for Spatial Optical Modulator,” IEEE J. Solid-State Circuits 42, 2913–2922 (2007). [CrossRef]

9. J. Dijon and A. Fournier, “6” Colour FED Demonstrator with High Peak Brightness,” in SID 2007, 1313–1316 (2007). [CrossRef]  

10. M. Young, “Scratch-and-dig standard revisited,” Appl. Opt. 25, 1922–1929 (1986). [CrossRef]   [PubMed]  

11. J. A. Hoffnagle and C. M. Jefferson, “Beam shaping with a plano-aspheric lens pair,” Opt. Eng. 42, 3090–3099 (2003). [CrossRef]  

12. J. A. Hoffnagle and C. M. Jefferson, “Design and performance of a refractive optical system that converts a Gaussian to a flattop beam,” Appl. Opt. 39, 5488–5499 (2000). [CrossRef]  

13. H. Lee and M. Yang, “Dwell time algorithm for computer-controlled polishing of small axis-symmetrical aspherical lens mold,” Opt. Eng. 40, 1936–1943 (2001). [CrossRef]  

14. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, New York, 2005), Chap. 5.



