## Abstract

Traditional methods of optical design trade optical system complexity for image quality: high quality imagers often require high system complexity. A new imaging methodology called Wavefront Coding uses aspheric optics and signal processing to reduce system complexity while delivering high quality imagery. An example based on a conformal IR imaging system is given.

©2003 Optical Society of America

## 1. Introduction

A recent methodology for digital imaging systems jointly optimizes the optics, detection and signal processing of an imaging system. In joint optimization, the signal processing is determined by specialized optics, and the exact form of the specialized optics is affected by requirements placed on the signal processing. This is in contrast to traditional imaging systems where the optics are designed independently of the other system components and an increase in optical performance comes at the cost of an increase in optical and mechanical complexity [1,2].

This new methodology has been facilitated by recent increases in the computational power available for system design and a reduction in the cost of implementing signal processing algorithms in hardware. As a result, the system designer has access to a larger design trade space, which enables imaging systems with high image quality, fewer physical components, lighter weight, and lower cost than traditional optics. This methodology gives the designer control over the optics, detection, signal processing, optical and mechanical tolerancing, fabrication and signal processing implementation. Systems can be optimized for application-specific operation, such as feature recognition for surveillance, machine vision analysis, biomedical diagnosis or bar code reading. The desired result in many of these systems is not always a high quality image, but often a number or set of numbers that accurately describe a scene. Therefore, in some cases, a system “figure of merit” is not based on creating visually appealing images, but instead on maximizing the information transfer between the object space and the image processing, recognition, or identification algorithms. The joint design of the optics and signal processing has been called Wavefront Coding and has been shown to greatly reduce the dependence of the system on the effects of many aberrations [3]. Additionally, the total information transfer can be higher in systems using Wavefront Coding [4]. Others have used similar techniques in what they call pupil-phase engineering [5].

## 2. Wavefront Coded imaging systems

Wavefront Coded imaging systems differ from traditional imaging systems in their use of aspheric optics that form images with a special-purpose blur, creating invariance to many optical aberrations, including spherical aberration, field curvature [6], astigmatism, chromatic aberration [7], defocus, temperature-related defocus [8], and alignment- or assembly-related defocus. Signal processing is used to remove the blur. Figure 1 shows the general system. The aspheric optics can be a separate stand-alone element of the imaging system or can be integrated onto one or more optical elements as shown. The signal processing is independent of the object being imaged and in general depends only on the imaging optics and detector.

Joint design of the optics and signal processing is used to ensure that the amount and form of the blur is best suited to the amount and form of signal processing and to minimize noise effects. Noise effects arise from the need to remove the blur from the detected image. The signal processing that removes the image blur amplifies and shifts the phase of the spatial frequency components; this amplification acts not only on the spatial frequency content of the ideal image but also on the noise in the image. In practice it is this noise amplification that typically sets the limit on the benefit achieved from Wavefront Coding in a particular system configuration.
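A minimal one-dimensional sketch of this noise amplification, assuming a generic smooth blur rather than the actual system OTF: the inverse filter that restores the blurred spatial frequencies applies the same gain to the detector noise.

```python
import numpy as np

# Illustrative sketch only: a generic 1D blur, not the system in the paper.
rng = np.random.default_rng(0)
n = 256
f = np.fft.rfftfreq(n)                  # spatial frequencies, 0..0.5
H = np.exp(-(f / 0.3) ** 2)             # smooth blur OTF with no zeros
W = 1.0 / H                             # inverse (restoration) filter

noise = rng.normal(scale=0.01, size=n)  # detector noise alone
restored = np.fft.irfft(np.fft.rfft(noise) * W, n)

# RMS gain the restoration filter applies to the noise
noise_gain = np.std(restored) / np.std(noise)
print(noise_gain > 1.0)
```

The broader the blur (the smaller H at high frequencies), the larger the inverse-filter gain and the worse the noise penalty, which is why the joint design keeps the blur as compact as the aberration invariance allows.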

## 3. Example IR imaging system

An example of using Wavefront Coding to reduce system complexity can be illustrated through a conformal IR imaging system. This example demonstrates a system that yields a 50% reduction in physical components, a 45% reduction in weight and a large reduction in cost due to fewer physical components, a simplified housing, and simplified assembly.

The IR imaging system parameters are: f=11.5 mm, F/#=0.9, and an 8-degree field of view. The illumination wavelength band is 8–12 microns and the detector array size is 64×64 with 25 micron pixels. The front surface of the optical system is fixed. The minimum value of the Modulation Transfer Function (MTF) (including pixel MTF) out to the detector cutoff frequency should be 0.25 or greater. The additional signal processing constraints for the Wavefront Coded system are a filter kernel with physical dimensions of [10×10] coefficients or less and an associated noise penalty of less than 2. The signal processing noise penalty is described later in this section.
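As a sanity check on these parameters, the detector cutoff (assumed here to mean the Nyquist frequency of the 25 micron pixel array) and the diffraction-limited cutoff at an assumed mid-band wavelength of 10 microns can be computed:

```python
# Sampling limit of the 25 micron pixel array ("detector cutoff" is
# assumed here to mean the Nyquist frequency).
pixel_pitch_mm = 0.025
detector_cutoff = 1.0 / (2.0 * pixel_pitch_mm)        # 20 cycles/mm

# Diffraction-limited (incoherent) cutoff at an assumed 10 micron
# mid-band wavelength for the stated F/0.9 system.
wavelength_mm = 0.010
f_number = 0.9
diffraction_cutoff = 1.0 / (wavelength_mm * f_number)  # ~111 cycles/mm

print(detector_cutoff, round(diffraction_cutoff, 1))
```

The diffraction cutoff sits well above the sampling limit, so the MTF requirement out to the detector cutoff is set by aberrations and sampling rather than diffraction.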

In order to meet these specifications a traditional optical design requires two optical elements. If only a single optical element is used, large amounts of spherical aberration, astigmatism, and field curvature limit the performance and badly blur the image. A Wavefront Coded version of the system requires only a single optical element and can produce imagery that meets system specifications with only a slight noise penalty.

Figure 2 shows the overall performance characteristics of the traditional 2-element design (Fig. 2(a)), the traditional 1-element design (Fig. 2(b)) and the Wavefront Coded design (Fig. 2(c)). The traditional 2-element lens was designed using the default merit function in Zemax with a goal of minimizing the RMS wavefront error using a reference that removes piston and x and y tilt from the wavefront. The field dependent weights in this design vary linearly from 1 on-axis to 0.5 off-axis and the design images with high quality across the entire field and over the entire LWIR spectrum. The MTFs over the image field are all high as seen in Fig. 2(a). The traditional 1-element lens was designed with the same merit function except for the obvious change in constraints due to a different number of elements. The single element traditional lens design suffers from unavoidable field curvature, astigmatism and spherical aberration that limit the imaging performance at the edge of the image field. This results in a loss of image resolution and contrast as shown in Fig. 2(b).

The Wavefront Coded imaging system is also a 1-element design (Fig. 2(c)). The first surface is identical to the first surfaces of both traditional designs due to the conformal optical constraint. The only optical difference between the traditional 1-element design and the Wavefront Coded design is the form of the second surface and the lens thickness. The results demonstrate a system that satisfies the design specifications with only a single element (blue MTF curves in Fig. 2(c)). The MTFs of the Wavefront Coded system in Fig. 2(c) labeled in green are those of the optics alone, before signal processing.

The sampled PSFs of the traditional 2-element design are shown in Fig. 3(a). Notice that these PSFs are essentially unchanged as a function of object position. The sampled PSFs of the traditional 1-element design are shown in Fig. 3(b). Notice that these PSFs are far broader than any PSFs from the 2-element design. The aberrations that could not be corrected due to the limited number of system variables cause a decrease in on-axis performance due to spherical aberration and a decrease in performance away from the center of the image field due primarily to astigmatism and field curvature.

The sampled PSFs from the Wavefront Coded system before signal processing are shown in Fig. 3(c). Notice that these PSFs are broader than the traditional 2-element on-axis PSFs from Fig. 3(a), but the PSFs from the Wavefront Coded system do not change with position in the image field. There are also no zeros in the MTFs related to these PSFs, as was shown in Fig. 2(c). A single digital filter is used to transform the PSFs of Fig. 3(c) to those of Fig. 3(d). This same filter was used to form the final MTFs of Fig. 2(c). Notice that after signal processing, the sampled PSFs from the Wavefront Coded system are essentially the same as those from the traditional 2-element design.
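This restoration step can be sketched as one 2D convolution applied to the whole image; a placeholder kernel is used below since the actual filter coefficients are not listed in the text. The key point is that a single, field-independent kernel suffices.

```python
import numpy as np

def restore(detected, kernel):
    """Apply one 2D FIR filter to the whole detected image.

    A single, field-independent kernel suffices because the Wavefront
    Coded PSF is designed to be invariant across the image field.
    Circular convolution via the FFT is used here for brevity; a real
    implementation would handle image borders explicitly.
    """
    H = np.fft.rfft2(kernel, s=detected.shape)
    D = np.fft.rfft2(detected)
    return np.fft.irfft2(D * H, s=detected.shape)

# Placeholder kernel: the actual [10x10] filter values are not given in
# the text. An identity (delta) kernel leaves the image unchanged.
img = np.random.default_rng(1).random((64, 64))
delta = np.zeros((10, 10))
delta[0, 0] = 1.0
print(np.allclose(restore(img, delta), img))
```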

Optimization of the Wavefront Coded system is quite different from the optimization used for the traditional one-element and two-element systems shown in Fig. 2(a) and (b). In the Wavefront Coded system, the focal plane array characteristics and the signal processing characteristics directly influence the lens. In this example, optimization with a small filter kernel forces the system to have a very compact PSF across field and wavelength.

The design of this Wavefront Coded system was based on a particular aspheric optical family called the Cosine Form family. Since the first surface of the design is fixed, maximum flexibility is required in the design of the remaining surface. The Cosine Form family allows this flexibility while also having a near-ideal form for fabrication. Mathematically the Cosine Form is described by:

$$S(r, \theta) = \sum_{i} a_i \, r^{b_i} \cos\left(w_i \theta + \phi_i\right)$$

where the weight on each term is given by a_{i} and the radian frequencies and phases of each term are given by w_{i} and ϕ_{i}, respectively. The Cosine Form includes the traditional aspheric terms [a_{i} r^{b_i}] as a special case when w_{i}=0. In practice this form can be effective with a very limited number of terms. For the design shown in Fig. 2, only 5 parameters on the special surface were optimized. Limiting the number of parameters allows the design to converge to an acceptable solution faster than using a more global phase function family with a larger number of parameters.
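A sketch of evaluating a surface from this family, using the sum-of-cosine-terms form implied by the text; the parameter values below are illustrative, not those of the actual design (which are not published here):

```python
import numpy as np

def cosine_form_sag(r, theta, a, b, w, phi):
    """Surface height of the Cosine Form family:
        S(r, theta) = sum_i a_i * r**b_i * cos(w_i*theta + phi_i)
    a_i: term weights; b_i: radial exponents; w_i, phi_i: radian
    frequencies and phases of the sinusoidal terms.
    """
    r = np.asarray(r, dtype=float)
    sag = np.zeros(np.broadcast(r, theta).shape)
    for a_i, b_i, w_i, phi_i in zip(a, b, w, phi):
        sag = sag + a_i * r**b_i * np.cos(w_i * theta + phi_i)
    return sag

# With w_i = 0 and phi_i = 0, each term reduces to the traditional
# aspheric term a_i * r**b_i:
r = np.linspace(0.0, 1.0, 5)
print(np.allclose(cosine_form_sag(r, 0.0, [2.0], [4], [0.0], [0.0]),
                  2.0 * r**4))
```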

The Cosine Form surface used in the design shown in Fig. 2 is shown in Fig. 4. The surface variations have been greatly exaggerated. The peak-to-valley deviation of this surface from the best-fit asphere is 25 µm.

A particular advantage of the Cosine Form surface is that the surface form at any radius is composed of a fixed number of sinusoidal terms. The number and parameters of the sinusoidal terms can be matched to the fabrication method being used to ensure high quality fabrication. In this way the temporal spectrum of the tool motion as well as minimum surface curvature can be directly controlled as part of the system design. Compensation of the tool command to offset tool delay as a function of temporal frequency can also be efficiently implemented with this surface form.

This system was designed and the optics optimized to use a spatially compact 2D linear filter. A representation of this filter in both the spatial and the frequency domain is shown in Fig. 5. This filter has physical dimensions of [10×10] coefficients and can in practice be implemented with a low number of bits (<4) with suitable performance. When analyzing the filter in the frequency domain, it should be noted that all of the values within the pass band are unity or higher. These values represent the signal amplification at specific spatial frequencies, and together they represent the amount of noise amplification that is associated with the filter. The amplification of the underlying noise is termed the noise gain and is the RMS value of the filter as described by the following equation:

$$\mathrm{NoiseGain} = \sqrt{\frac{1}{MN} \sum_{m=1}^{M} \sum_{n=1}^{N} \left| F(m,n) \right|^{2}}$$

where *F* is the filter in the frequency domain with a size of M×N. The filter shown in Fig. 5 has a noise gain of approximately 2.
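The noise gain can be sketched as the RMS of the frequency-domain filter; by Parseval's theorem it equals the l2 norm of the spatial kernel, so the result does not depend on the transform size. The kernel below is a hypothetical stand-in (the real filter values are not listed in the text), scaled to give a noise gain of 2.

```python
import numpy as np

def noise_gain(kernel, shape=(64, 64)):
    """RMS value of the filter in the frequency domain (the equation
    above). Equivalently, by Parseval's theorem, the l2 norm of the
    spatial kernel."""
    M, N = shape
    F = np.fft.fft2(kernel, s=shape)
    return np.sqrt(np.sum(np.abs(F) ** 2) / (M * N))

# Hypothetical [10x10] kernel scaled so its l2 norm, and hence its
# noise gain, is 2.
k = np.random.default_rng(0).normal(size=(10, 10))
k *= 2.0 / np.linalg.norm(k)
print(round(noise_gain(k), 6))  # prints 2.0
```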

Due to noise amplification, the SNR of the final image formed using Wavefront Coding is approximately a factor of 2 less than the SNR of the final image formed with the traditional two-element design. This leads to a loss in detectability compared to the traditional 2-element system. In compensation for this loss in SNR, the system is cheaper, smaller, and lighter. In addition, the optical and related mechanical cost is conservatively estimated to be reduced by 50%. The reduction in cost is due to the decrease in the number of optical elements, the simplified mechanical housing and the looser tolerances designed into the system, which ease the assembly process and increase the yield.

Speed and complexity of the signal processing are closely related to the processing platform being used. In general the design of the signal processing implementation is a tradeoff between the number of operations performed and the amount of memory required; these two quantities are typically inversely related. When software processing is used, such as with a general purpose computer, the amount of memory available is large and the amount of processing (in terms of the number of operations per second) is limited. When using hardware processing, such as with FPGA or ASIC implementations, the important limitations are silicon area and power consumption, both of which are impacted directly by gate count and operating speed. The amount of processing and memory required is measured by the number of gates needed, which for a specific lithographic process translates to silicon area. The number of operations and amount of memory needed are then balanced in order to achieve the smallest silicon area.

For example, assume that the signal processing is implemented in software on a Pentium IV running at 750 MHz. This type of platform can achieve approximately 500 million multiply/accumulate operations per second (MOPS) when executing MMX^{™} code. The number of multiply/accumulate operations per pixel with a [10×10] 2D filter is then 100. With an image size of [64×64] pixels, the total number of multiply/accumulate operations to process the entire image, assuming every pixel and every coefficient is processed, is approximately 400,000. The approximate time required with the assumed processor is then about 0.8 milliseconds. This processing speed is compatible with a frame rate of up to 1250 frames/second. For the same example image size with a [10×10] kernel, the worst-case latency is 640 pixels, which is a latency of 0.125 msec at the above frame rate.
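The figures in this paragraph follow from simple arithmetic; the exact values before rounding are:

```python
# Throughput arithmetic for the software implementation described
# above; all inputs (500 MOPS, 64x64 image, 10x10 kernel) are from
# the text, which rounds the results.
MACS_PER_SEC = 500e6    # Pentium IV with MMX, ~500 MOPS
IMAGE_PIXELS = 64 * 64  # 4096 pixels per frame
KERNEL_TAPS = 10 * 10   # 100 multiply/accumulates per pixel

ops_per_frame = IMAGE_PIXELS * KERNEL_TAPS        # 409,600 (~400,000)
seconds_per_frame = ops_per_frame / MACS_PER_SEC  # ~0.82 ms (~0.8 ms)
max_fps = 1.0 / seconds_per_frame                 # ~1220 frames/s

# Worst-case latency: 10 full rows (640 pixels) of the frame must
# arrive before the first filtered pixel can be produced.
latency_s = seconds_per_frame * (10 * 64) / IMAGE_PIXELS  # ~0.13 ms

print(ops_per_frame, round(seconds_per_frame * 1e3, 3))
```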

If a hardware processing platform is used, then the processing implementation should minimize the number of gates required. While the number of gates needed to implement the 2D filter is determined mainly by the size and values of the 2D filter, the amount of memory required is scaled by the minimum dimension of the image. With some general rules for conversions between operations, memory, and number of gates, a general estimate of the number of gates required for a [10×10] filter kernel is between 20k and 80k gates. The memory component would roughly require between 10k and 20k gates. The approximate total number of gates is less than 100k. With either the software or the hardware implementation, the resulting latency is insignificant and the processing cost is extremely small when compared to typical silicon circuits.

## 4. Conclusion

We have described a new methodology that enables a reduction in optical complexity in computational imaging systems. The new methodology increases the design trade space by jointly optimizing the optics, detection and signal processing and by coupling tolerancing, fabrication and signal processing implementation into the design path. The result is a methodology that creates invariance to many of the performance-limiting optical aberrations. The benefits of the Wavefront Coding methodology are balanced with a moderate decrease in the system SNR. As an example of the benefits of the methodology, a conformal IR optical system was presented where the number of optical elements was decreased by a factor of two while retaining the overall imaging performance. The reduction of physical imaging elements reduced the system weight by 45% with a conservative estimated system cost reduction of 50%.

## References and links

**1. **W.T. Cathey, B. R. Frieden, W. T. Rhodes, and C. K. Rushforth, “Image gathering and processing for enhanced resolution,” J. Opt. Soc. Am. A **1**, 241–249 (1984). [CrossRef]

**2. **R. M. Matic and J. W. Goodman, “Optimal pupil screen design for the estimation of partially coherent images,” J. Opt. Soc. Am. A **4**, 2213 (1987). [CrossRef]

**3. **W. T. Cathey and E. Dowski, “A new paradigm for imaging systems,” Appl. Opt. **41**, 6080 (2002). [CrossRef] [PubMed]

**4. **J. van der Gracht and G. W. Euliss, “Information-optimized extended depth-of-field imaging systems,” in Visual Information Processing X, S. K. Park, Z. Rahman, and R. A. Schowengerdt, eds., Proc. SPIE **4388**, 103–112 (2001). [CrossRef]

**5. **S. Prasad, T. C. Torgersen, V. P. Pauca, R. J. Plemmons, and J. van der Gracht, “Engineering the Pupil Phase to Improve Image Quality,” Proc. SPIE **5108** (2003). [CrossRef]

**6. **E. Dowski and K. Kubala, “Modeling of Wavefront Coded Imaging Systems,” Proc. SPIE **4736**, 116–126 (2002). [CrossRef]

**7. **H. Wach and E. Dowski, “Control of chromatic aberration through wave-front coding,” Appl. Opt. **37**, 5359 (1998). [CrossRef]

**8. **E. Dowski, R. H. Cormack, and S. D. Sarama, “Wavefront Coding: jointly optimized optical and digital imaging systems,” Aerosense Conference, April 25, 2000.