## Abstract

While lenses of aperture less than 1000λ frequently form images with pixel counts approaching the space-bandwidth limit, only heroic designs approach the theoretical information capacity at larger scales. We propose to use the field processing capabilities of small-scale secondary lens arrays to correct aberrations due to larger scale objective lenses, with an ultimate goal of achieving diffraction-limited imaging for apertures greater than 10,000λ. We present an example optical design using an 8 mm entrance pupil capable of resolving 20 megapixels.

©2009 Optical Society of America

## 1. Introduction

The number of degrees of freedom in an image collected through a finite aperture was analyzed in numerous studies in the 1960s. The number of degrees of freedom is roughly equivalent to the number of independent samples (pixels) necessary to fully characterize an image. Toraldo di Francia was particularly clear in demonstrating that this number is proportional to the Shannon number, i.e. the space-bandwidth product [1], often approximated as the lens area divided by λ^{2}. Given the long history of this result, one is surprised to note that few modern optical systems approach Shannon-limited performance. The goal of this paper is to demonstrate that Shannon-limited performance may be achieved in moderate- to large-scale optical systems by relaxing traditional constraints on lens system design and field uniformity.

Briefly, optical systems have not achieved Shannon limited performance because (1) focal planes with commensurate pixel counts have not been available, (2) practical lens systems with apertures greater than a few millimeters are incapable of processing Shannon-limited images and (3) digital systems capable of analyzing, transmitting and databasing large images have been unavailable. The first and third challenges have been resolved over the past decade through evolutionary improvements in focal plane and computational technology. Large scale manufacturing for mobile platforms has driven the cost per pixel of CMOS focal planes below 1 microdollar, meaning that gigapixel or even terapixel detector arrays may be reasonably affordable. Similarly, numerous gigapixel image management and analysis platforms have recently emerged [2].

This paper proposes multiscale lens design as a solution to the second challenge. Multiscale design separates a lens system into objective and field processing optics. The objective is a single aperture system on a scale matched to the target angular resolution. The processing optics consist of smaller aperture components that can be fabricated with complex surface figures. The multiscale approach is motivated by scaling analysis suggesting that smaller lenses outperform larger lenses. The performance of a lens is quantified by the ratio of space-bandwidth product actually achieved to the diffraction-limited Shannon number. We represent this ratio by *γ*. As discussed in a pioneering study by Lohmann, performance against this metric saturates as a function of spatial scale, meaning that *γ* tends to fall in inverse proportion to aperture area [3]. Of course, one may make heroic efforts to maintain *γ* by increasing lens complexity as aperture area increases. This approach was adopted to great effect in the development of lithographic lens systems [4], but is ultimately unsustainable in imaging applications.

As a corollary to the idea that smaller lenses are better suited to large *γ* image formation, one may also suppose that smaller lenses are better at wavefront correction. This is the case for two reasons. First, wavefront correction and image formation both yield geometric solutions with less wavelength-scale error over smaller apertures. Second, manufacturing of complex lens surfaces is much easier in smaller scale systems. Sophisticated small scale manufacturing techniques have been particularly advanced by recent efforts to develop wafer-level cameras for mobile devices [5, 6].

Multiscale design is illustrated in Fig. 1. Figure 1(a) shows a conventional single-aperture design. Figure 1(b) shows a multiple-aperture design, as applied in scanners, digital superresolution, and wide-field cameras. Figure 1(c) is a multiscale design incorporating a multi-aperture lens array between the objective lens and the focal plane. This use of multi-aperture secondary optics is the defining feature of multiscale design. In contrast with previous uses of multi-aperture secondary optics, however, multiscale design places substantial image formation and processing requirements on the multi-aperture component. The multi-aperture array consists of inhomogeneous compound lenslets to facilitate these functions. In the subsequent discussion the single-aperture front-end elements are collectively termed the *collector* and the multi-aperture secondary array the *processor*.

The aperture size of the collector subsystem is determined by angular resolution and light collection area specifications. The collector typically includes multiple surfaces and must process the field sufficiently to enable image restoration by the processor optics and digital post-processing subsystems. The aperture size of the processor subsystem is jointly determined by the quality of the wavefront delivered from the collector and by lens fabrication and detector array interface constraints. The optimal processor aperture size (typically 1–10 mm at visible wavelengths) is determined by geometric complexity and manufacturing constraints. While the system of Fig. 1 shows just two aperture sizes, multiscale design may in general include a hierarchy of aperture sizes stepping the field down from the collector aperture to small scale processing optics (Fig. 2).

A conspicuous feature of multiscale design is that the image is fractured into subimages which must be computationally combined in post-processing. Computational fusion has been demonstrated in numerous studies of multi-aperture cameras [7, 8, 9, 10, 11], with the primary goal of reducing camera thickness by using digital superresolution to decrease effective pixel size. To emphasize the computational nature of the proposed imager, Fig. 1 shows an example image from a standard camera, from a 3×3 multi-aperture camera, and from a multiscale design using a 5×5 lenslet array. The multiscale imager splits the full field of view into subfields which partially overlap with their adjacent regions.
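The subfield decomposition described above can be sketched numerically. This is an illustrative sketch only: the grid size, total field, and overlap fraction below are assumptions, not parameters taken from the design.

```python
import numpy as np

def subfield_centers(full_fov_deg, n, overlap_frac=0.2):
    """Split a full field of view into an n x n grid of subfields whose
    half-widths are enlarged so adjacent subfields partially overlap."""
    half = full_fov_deg / 2.0
    pitch = full_fov_deg / n                      # center-to-center spacing
    centers = -half + pitch * (np.arange(n) + 0.5)
    subfield_halfwidth = 0.5 * pitch * (1.0 + overlap_frac)
    cx, cy = np.meshgrid(centers, centers)
    return cx, cy, subfield_halfwidth

# Hypothetical 5x5 lenslet array covering a 65-degree full field.
cx, cy, hw = subfield_centers(65.0, 5, overlap_frac=0.4)
# Each lenslet images [center - hw, center + hw] in both axes; because
# hw exceeds pitch/2, each subfield overlaps its neighbors.
```

Varying `overlap_frac` trades registration redundancy against the number of detector pixels devoted to repeated scene content.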

The multiscale approach exploits recent advances in optical manufacturing that allow the accurate production of small integrated lens systems and in generalized sampling and coding strategies that jointly optimize physical filtering, sampling, and digital processing [12]. Co-design of physical and digital processing in multiscale systems design can be used to improve the following system metrics:

• Field of view. A multi-aperture processing element in combination with a large scale collector greatly reduces the need for balancing geometric aberration and field of view.

• Image resolution. For imagers which are not operating at the diffraction limit, the addition of a multi-aperture processor can greatly improve image quality.

• Camera volume. Field correction in processing optics may enable faster collection optics, thus reducing camera volume.

• Manufacturing and system cost. Complex surface figures and compound elements are much easier to make at smaller scales. Some fabrication technologies, such as molding and wafer-level integration, are uniquely applicable to small scale optics.

• Depth of field and 3D imaging. Processor optics may be designed to focus at diverse ranges with overlapping fields to enable tomographic object reconstruction, combining multiscale design with multidimensional image capture as explored in TOMBO [13], plenoptic [14, 15] and integral imaging systems[16].

• Detector mosaicking. Multiscale design utilizes multiple discrete focal plane arrays to synthesize a single image without stitching and field uniformity issues encountered in conventional design.

The design example given in Sec. 3 below attempts to illustrate improvement in the first two metrics.

While the idea of wavefront correction using multiple aperture processing elements has not been previously proposed to our knowledge, multiscale design builds on various precedents for multiple aperture lenses. *Compound microlens arrays* based on the Gabor superlens [17] and TOMBO-style systems [7] are most closely related in the use of similar scale lenslets. Gabor lenses incoherently combine images from multiple subapertures to increase photosensitivity. These systems do not achieve the angular resolution of the full array aperture. *Detector array microlenses* form another precedent. These components are commonly integrated with one lens per pixel to increase quantum efficiency for reduced fill factor focal planes. Previous studies have suggested the use of inhomogeneous microlens arrays to correct field curvature [18].

Considerable recent work focuses on *plenoptic* or *lightfield* cameras [14, 15], which measure the radiance field by placing a large array of lenslets at the conventional image plane. The detector array is displaced away from this image plane by the focal length of the lenslets, such that each pixel on the FPA encodes not just the field angle of a ray entering the camera but also its pupil coordinate. The basic goal of the plenoptic camera is to measure the radiance at a spatial resolution corresponding to the lenslet spacing; it provides only an inefficient means of correcting aberrations. Similarly, *integral imaging systems* use arrays of microlenses to form multiple images of a scene at slightly different perspectives to perform 3D imaging. Recent work by Lee *et al*. [16] has included a macro-optic imaging lens placed before the microlens array in order to increase the field of view of the integral imaging system.

*Integral field spectrometers* [19] use either an array of lenslets or of “image slicers” to integrate the light within subregions of the image, while each integrated signal is then dispersed by a prism or grating in a geometry specially designed to prevent crosstalk. An integral field spectrometer treats each lenslet field as a single spatial pixel, and the lenslet is used to concentrate light for spectral analysis. As with detector array microlenses, no imaging or aberration processing is implemented in the subapertures.

The deliberate use of heterogeneous lens arrays to correct collector optic aberrations is the primary difference between multiscale design and previous multiple aperture systems. In implementing such corrections, one seeks to use the largest possible subaperture optic consistent with high resolution image fields. Larger subapertures produce more uniform image fields, experience less vignetting and are capable of greater wavefront modulation. On the other hand, larger subapertures may introduce their own geometric aberrations. With this tension in mind, one may refer to the processor elements as “meso-optics.” Their scale is less than that of the collector but greater than that of detector microlenses. Section 2 analyzes this tension and the issue of lens scale in more detail. Section 3 describes an example high *γ* multiscale design.

## 2. Scale and merit functions

The Shannon number for a lens may be equivalently evaluated at various planes, such as the entrance pupil, the exit pupil, or the image plane. The Shannon number of the aperture at the entrance pupil, for example, is the product of the area of the pupil and the area of the spatial bandpass of the field. The pupil area is *π*(*A*/2)^{2} for pupil diameter *A*. Letting FOV represent the full angular field of view of the system, the maximum spatial frequency in the coherent field illuminating the aperture is sin(FOV/2)/*λ*. The effective maximum spatial frequency for incoherent imaging is twice this value, so that the Shannon number for a circular aperture imaging an incoherent object is

$$S = \pi\left(\frac{A}{2}\right)^{2}\,\pi\left(\frac{2\sin(\mathrm{FOV}/2)}{\lambda}\right)^{2} = \frac{\pi^{2}A^{2}\sin^{2}(\mathrm{FOV}/2)}{\lambda^{2}}. \qquad (1)$$

In geometric optics, a closely related quantity is the etendue, given as the product of the aperture area and the solid angle of acceptance. Table 1 lists *S* as a function of *A* and FOV for *λ*=587 nm. While megapixel millimeter-aperture imagers are achievable in current technology, beyond the millimeter scale the Shannon number is generally much larger than the achieved pixel count in currently available imagers.
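A short script (a sketch, not the actual source of Table 1) reproduces the kind of sweep tabulated in Table 1, taking *S* as the product of the pupil area π(*A*/2)^{2} and the incoherent bandpass area π(2 sin(FOV/2)/λ)^{2} described above:

```python
import math

def shannon_number(aperture_m, fov_deg, wavelength_m=587e-9):
    """Shannon number: pupil area times incoherent spatial-bandpass area,
    S = pi*(A/2)^2 * pi*(2*sin(FOV/2)/lambda)^2."""
    half_fov = math.radians(fov_deg) / 2.0
    pupil_area = math.pi * (aperture_m / 2.0) ** 2
    bandpass_area = math.pi * (2.0 * math.sin(half_fov) / wavelength_m) ** 2
    return pupil_area * bandpass_area

# Illustrative sweep over aperture diameters at a 72-degree full field.
for a_mm in (1, 8, 100):
    s = shannon_number(a_mm * 1e-3, 72.0)
    print(f"A = {a_mm:5d} mm : S = {s / 1e9:10.2f} Gpix")
```

Because *S* scales as *A*^{2}, every tenfold increase in aperture raises the Shannon number by two orders of magnitude, which is the gap conventional designs fail to close.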

Precise evaluation of the Shannon number is challenging for specific lens designs. For simplicity, we choose to evaluate lenses using “the number of resolvable spots,” which we define as the ratio of the image area to the impulse response spot area. Since the spotsize is generally shift variant, the number of resolvable spots must be evaluated by local integration of this ratio over the field. For a diffraction-limited lens, the spotsize diameter is approximately shift-invariant and equal to σ=2.44*λ f*_{#} for small field angles, where *f*_{#} is the f-number of the lens. The diameter of the image is roughly 2*F* tan(FOV/2) for focal length *F*, and the number of resolvable spots is therefore

$$N_{\max} = \left(\frac{2F\tan(\mathrm{FOV}/2)}{2.44\,\lambda f_{\#}}\right)^{2}. \qquad (2)$$

Comparing Eqns. (1) and (2), we see that *N*_{max} is approximately one order of magnitude less than *S*. The information efficiency of a lens may be equivalently defined as the ratio of the number of resolvable spots actually achieved to the diffraction-limited value, or as the ratio of the Shannon number achieved by the lens to the corresponding diffraction-limited figure. For a lens achieving *N* resolvable spots, substituting from Eqn. (2) yields

$$\gamma = \frac{N}{N_{\max}} = N\left(\frac{2.44\,\lambda f_{\#}}{2F\tan(\mathrm{FOV}/2)}\right)^{2}. \qquad (3)$$

Given diverse definitions of spotsize, the value of *γ* under this definition may differ somewhat from the Shannon-number-based value. Of course, we expect *γ*≤1. In practice, *γ* decreases monotonically in *A* because *N* does not grow in proportion to *A*^{2}.
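The diffraction-limited spot count and the efficiency *γ* can be estimated in a few lines. This is a small-angle sketch that ignores the cos^{4}θ obliquity correction applied in Sec. 3, so it somewhat overstates the 44 Mpix figure quoted there for the 8 mm design.

```python
import math

def n_max(aperture_m, fov_deg, wavelength_m=587e-9):
    """Diffraction-limited resolvable spots: (image diameter / spot diameter)^2.
    With image diameter 2*F*tan(FOV/2) and spot diameter 2.44*lambda*f/#,
    the focal length cancels, leaving (A*tan(FOV/2)/(1.22*lambda))^2."""
    half_fov = math.radians(fov_deg) / 2.0
    return (aperture_m * math.tan(half_fov) / (1.22 * wavelength_m)) ** 2

def gamma(n_achieved, aperture_m, fov_deg, wavelength_m=587e-9):
    """Information efficiency: achieved spots over the diffraction limit."""
    return n_achieved / n_max(aperture_m, fov_deg, wavelength_m)

# 8 mm aperture over a +/-36 degree field (the Sec. 3 example geometry).
print(f"N_max (small-angle) ~ {n_max(8e-3, 72.0) / 1e6:.0f} Mpix")
print(f"gamma for 19.9 Mpix achieved: {gamma(19.9e6, 8e-3, 72.0):.2f}")
```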

To understand why achieved values of *γ* fall as a function of aperture size, consider the ray tracing diagram for a simple lens shown in Fig. 3. Ray tracing is, of course, commonly used to analyze geometric aberrations. In this case, the on-axis ray bundle comes to reasonable focus, but the off-axis rays are severely blurred due to aberrations. For present purposes, the most interesting feature of the diagram is that it is scale invariant, meaning that the drawing is independent of the spatial units used. If we take a given design and multiply all length parameters by *M* (*i.e.* scale the lens by the factor *M*), we can use the scale parameter to explore the balance between the diffraction-limited (small *M*) and aberration-limited (large *M*) regimes of image quality.

For a given lens system, geometric ray tracing can be used to estimate an aberration spotsize ξ. If the aberration spotsize is smaller than the diffraction limited radius, then the lens achieves diffraction-limited performance. Otherwise, geometric aberration dominates. The important point is that the scale invariance of the ray tracing diagram indicates that the size of the geometric aberration scales linearly in *M*. The diffraction-limited spotsize, in contrast, is scale-invariant.

The composite resolvable-element blur radius varies with scale as

$$\sqrt{\delta^{2}+\xi^{2}}=\sqrt{\tfrac{3}{2}\lambda^{2}f_{\#}^{2}+M^{2}\xi_{1}^{2}},$$

where δ=1.22*λ f*_{#} is the diffraction-limited blur radius (so that δ^{2}≈(3/2)*λ*^{2}*f*_{#}^{2}) and ξ_{1} represents the mean aberration spotsize radius at the default scale, *M*=1. Defining *D*_{1} to be the image diameter at the default scale, the number of resolvable elements varies with *M* as

$$N(M)=\frac{M^{2}D_{1}^{2}}{6\lambda^{2}f_{\#}^{2}+4M^{2}\xi_{1}^{2}}. \qquad (4)$$

At the two extremes of scale, this expression takes on the asymptotic forms

$$M\ \text{small}:\quad N \approx \frac{M^{2}D_{1}^{2}}{6\lambda^{2}f_{\#}^{2}}, \qquad M\ \text{large}:\quad N \approx \frac{D_{1}^{2}}{4\xi_{1}^{2}}$$

At small scales, diffraction-limited performance yields a pixel count proportional to the Shannon number. At larger scales, geometric aberration makes the pixel count constant with respect to lens size for a given design. The information efficiency as a function of scale is found by substituting Eqn. (4) into Eqn. (3):

$$\gamma(M)=\frac{6\lambda^{2}f_{\#}^{2}}{6\lambda^{2}f_{\#}^{2}+4M^{2}\xi_{1}^{2}}=\frac{1}{1+\frac{2}{3}\left(\frac{M\xi_{1}}{\lambda f_{\#}}\right)^{2}}.$$

Figure 4 plots *γ* as a function of *M* for three different values of ξ_{1}. For ξ_{1}=0, geometric aberration is eliminated and the system achieves *γ*=1 at all scales. For ξ_{1}>0, diffraction-limited performance is achieved for *M*<*λ f*_{#}/ξ_{1}, but beyond this limit *γ* decreases in inverse proportion to *M*^{2} and the number of pixels in the image becomes scale invariant.
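The saturation behavior plotted in Fig. 4 can be reproduced from the quadrature blur model. The f-number and aberration spotsize below are illustrative values, not taken from any design in this paper.

```python
import math

LAMBDA = 587e-9  # design wavelength [m]

def n_spots(M, D1, f_num, xi1, lam=LAMBDA):
    """Resolvable spots at scale M: image area over composite blur area,
    with blur radius sqrt((3/2)*lam^2*f#^2 + M^2*xi1^2)."""
    blur_sq = 1.5 * (lam * f_num) ** 2 + (M * xi1) ** 2
    return (M * D1) ** 2 / (4.0 * blur_sq)

def gamma_of_scale(M, f_num, xi1, lam=LAMBDA):
    """Efficiency gamma(M): n_spots normalized by its aberration-free value."""
    return n_spots(M, 1.0, f_num, xi1, lam) / n_spots(M, 1.0, f_num, 0.0, lam)

# Illustrative parameters: an f/4 lens with a 1 micron aberration spot at M = 1.
f_num, xi1 = 4.0, 1e-6
for M in (0.1, 1.0, 10.0, 100.0):
    print(f"M = {M:6.1f}  gamma = {gamma_of_scale(M, f_num, xi1):.4f}")
```

The printed sweep shows the two regimes: *γ* near 1 below *M*≈*λ f*_{#}/ξ_{1}, then a 1/*M*^{2} roll-off.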

To achieve some advantage for increasing aperture size, conventional designs must prevent *M*ξ_{1}/(*λ f*_{#}) from increasing linearly in *M*. Combinations of the following strategies are chosen to decrease ξ_{1}/(*λ f*_{#}) as *M* increases:

1. One may reduce the field of view (and thus reduce the image size *MD*_{1}). Geometric aberration generally increases with field angle, so that reducing FOV allows the existing degrees of freedom in the design to better compensate for the aberrations at the remaining field angles.

2. One may increase lens complexity with scale, adding more optical surfaces to help reduce the growing aberrations. This approach is essential in lithographic systems, where the goal is to simultaneously maximize image field size and minimize *f*_{#}. The extraordinary growth in lithographic lens mass and complexity to achieve this objective is well documented [4].

3. One may increase *f*_{#} as a function of *M*. This approach reduces geometric aberration while also increasing the image-space diffractive blur size. As illustrated in Fig. 5, Lohmann’s empirical rule [3] that *f*_{#}∝*M*^{1/3} describes this f-number scaling in many systems, with the notable exceptions of telescopes, which adopt strategy (1), and lithographic lenses, which adopt strategy (2).

Multiscale design is a fourth strategy for reducing the growth in *M*ξ_{1}/(*λ f*_{#}) as *M* grows. Its basic advantage derives from the potential to overcome the information capacity saturation effect illustrated in Fig. 4. In a multi-aperture camera, the number of resolvable spots may be expected to increase linearly with the number of apertures. For fixed subaperture size, the scaling parameter is proportional to the square root of the number of apertures and the system information capacity scales in proportion to *M*^{2}. Of course, this presupposes that the information collected by each subaperture is independent. This assumption may be valid in “close imaging” applications, such as document scanning and microscopy, but is not valid when subaperture fields of view substantially overlap. The delivery of nonredundant views of distant objects to a multi-aperture array is the purpose of the collector element in a multiscale design.

Specification of the processor element aperture size is the first challenge of multiscale design. If the only role of the processor element were to relay the collector image onto a detector array, then one would choose the minimum possible lenslet aperture size. This is, in fact, the strategy selected for current focal-plane microlenses. In multiscale design, however, the lenslets must correct collector aberrations in addition to relaying the image. Just as geometric aberration grows with aperture size, so does wavefront correction capacity, which may be quantified as the maximum phase delay one can achieve between different points on the aperture. In general, the optimal processor aperture is the largest aperture over which one can achieve near diffraction-limited imaging given lens manufacturing and integration constraints. Current designs suggest subaperture sizes of 100–1000*λ* may be optimal.

## 3. Design principles and examples

The function of the objective lens is to collect light and present a coarse image to the processor subsystem. The processor is placed behind the coarse image plane and simultaneously performs the functions of correcting aberration and of relaying the image onto a mosaic of detector arrays. Using a lenslet array for these functions has several advantages: (1) the lenslets can operate at their ideal scale to minimize aberrations generated during the image relay; (2) they are much more compact than full-aperture relay optics would be; and (3) dividing the field of view into smaller subregions allows us to relax some design constraints.

The optical design of a multiscale imager starts with the collector optics used together with an idealized (aberration-free and zero-thickness) lens where the processor lens array will later be placed. This allows all of the appropriate distances and element powers to be defined, giving a first-order system design. Given a conventional collector objective lens consisting of rotationally symmetric surfaces, we can express the aberration function in terms of the fourth-order wavefront aberration coefficients by the polynomial expression

$$W(H,\rho,\varphi) = W_{040}\rho^{4} + W_{131}H\rho^{3}\cos\varphi + W_{222}H^{2}\rho^{2}\cos^{2}\varphi + W_{220}H^{2}\rho^{2} + W_{311}H^{3}\rho\cos\varphi, \qquad (6)$$

where *H* is the normalized field angle, *ρ* the normalized pupil radius, *φ* the azimuthal angle of the pupil coordinate, and *W _{ijk}* the wavefront aberration coefficients, expressed in units of length.
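A minimal evaluator of this fourth-order polynomial, with placeholder coefficient values, can make the bookkeeping concrete:

```python
import math

def wavefront(H, rho, phi, W040, W131, W222, W220, W311):
    """Fourth-order wavefront aberration W(H, rho, phi):
    spherical + coma + astigmatism + field curvature + distortion."""
    c = math.cos(phi)
    return (W040 * rho**4
            + W131 * H * rho**3 * c
            + W222 * H**2 * rho**2 * c**2
            + W220 * H**2 * rho**2
            + W311 * H**3 * rho * c)

# Example: one wave of field curvature (W220 = lambda) at the edge of the field.
lam = 587e-9
w = wavefront(H=1.0, rho=1.0, phi=0.0, W040=0.0, W131=0.0,
              W222=0.0, W220=lam, W311=0.0)
print(f"W at field edge: {w / lam:.1f} waves")
```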

The goal of lens design is to minimize *W*(*H*,*ρ*,*φ*) at targeted field points. Multiscale design deviates from conventional design in that the goal is to correct *local* rather than global wavefront error. Since the lenslets divide the field into subregions, each lenslet can be used to correct the *local* aberration centered around its appropriate field angle. In pursuit of this objective, we expand Eqn. (6) in terms of the central field angle *H _{n}* for the *n*th lenslet:

$$W(H-H_{n},\rho,\varphi) = W_{040}\rho^{4} + W_{131}\left(H-H_{n}\right)\rho^{3}\cos\varphi + W_{222}\left(H-H_{n}\right)^{2}\rho^{2}\cos^{2}\varphi$$

$$+\,W_{220}{\left(H-H_{n}\right)}^{2}\rho^{2}+W_{311}{\left(H-H_{n}\right)}^{3}\rho\cos\varphi$$

$$=\left[\underbrace{W_{040}\rho^{4}}_{\text{spherical aberration}}\right]$$

$$+\left[\underbrace{-W_{131}H_{n}\rho^{3}\cos\varphi}_{\text{constant coma}}+\underbrace{W_{131}H\rho^{3}\cos\varphi}_{\text{linear coma}}\right]$$

$$+\left[\underbrace{W_{222}H_{n}^{2}\rho^{2}\cos^{2}\varphi}_{\text{constant astigmatism}}-\underbrace{2W_{222}HH_{n}\rho^{2}\cos^{2}\varphi}_{\text{linear astigmatism}}+\underbrace{W_{222}H^{2}\rho^{2}\cos^{2}\varphi}_{\text{quadratic astigmatism}}\right]$$

$$+\left[\underbrace{W_{220}H_{n}^{2}\rho^{2}}_{\text{defocus}}-\underbrace{2W_{220}HH_{n}\rho^{2}}_{\text{linear defocus}}+\underbrace{W_{220}H^{2}\rho^{2}}_{\text{quadratic defocus (i.e. field curvature)}}\right]$$

$$+\left[-\underbrace{W_{311}H_{n}^{3}\rho\cos\varphi}_{\text{field displacement}}+\underbrace{3W_{311}HH_{n}^{2}\rho\cos\varphi}_{\text{tilt (i.e. magnification error)}}-\underbrace{3W_{311}H^{2}H_{n}\rho\cos\varphi}_{\text{quadratic distortion}}+\underbrace{W_{311}H^{3}\rho\cos\varphi}_{\text{cubic distortion}}\right] \qquad (7)$$

Expressing the aberration function localized about a given lenslet’s central field angle thus produces a number of aberration terms which are not of Seidel form. The important result of this expansion, however, is that the aberrations with high-order field dependence are greatly reduced, having much of their wavefront error shifted into lower-order terms, which are easier to correct if freeform surfaces are allowed. An example of this is shown in Fig. 6 for the case of field curvature.
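This error reduction is easy to verify numerically for the field-curvature term: over a subfield centered at *H _{n}*, a lenslet that removes the constant defocus and the linear defocus (image tilt) leaves only the residual W_{220}(H−H_{n})^{2}ρ^{2}, which scales with the square of the subfield half-width. The field values below are illustrative.

```python
def field_curvature(H, rho, W220=1.0):
    """Full-field field-curvature wavefront term W220 * H^2 * rho^2."""
    return W220 * H**2 * rho**2

def local_residual(H, H_n, rho, W220=1.0):
    """Residual after the lenslet removes the uniform defocus W220*Hn^2*rho^2
    and the linear defocus 2*W220*Hn*(H - Hn)*rho^2; what remains is
    W220*(H - Hn)^2*rho^2, the greatly reduced local field curvature."""
    full = field_curvature(H, rho, W220)
    correctable = W220 * rho**2 * (2.0 * H * H_n - H_n**2)
    return full - correctable

H_n, half_subfield = 0.8, 0.05       # subfield centered at 80% of full field
H_edge = H_n + half_subfield
peak_full = field_curvature(H_edge, 1.0)
peak_local = local_residual(H_edge, H_n, 1.0)
print(f"uncorrected: {peak_full:.4f}  after lenslet correction: {peak_local:.6f}")
```

With these numbers the subfield-edge wavefront error drops by more than two orders of magnitude, which is the sense in which high-order field dependence is shifted into easily corrected low-order terms.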

From Eqn. (7) we can note that spherical aberration, being a field-*in*dependent aberration, should be fully corrected in the collection optics, since there is no benefit to correcting it in the lenslet processor array. This holds true for any of the higher-order spherical aberration terms (*W*_{060}, *W*_{080}, etc.) as well. From the presence of non-Seidel aberrations in the aberration function local to each lenslet, we can infer that the surfaces of the individual lenslets will need to take on non-cylindrically-symmetric shapes. To get an idea of what these shapes might look like, we can first use the example of field curvature illustrated in Fig. 6. By splitting up the field into subregions, defocus becomes the primary aberration for the off-axis lenslets in this case. This is easy to fix, since we need only change the lenslet’s focal length to introduce a compensating change in focus. The off-axis elements next need to correct for a substantial amount of linear defocus (*i.e.* image tilt). Prisms are often used for this purpose, and thus we can design a wedge shape into the lenslets to untilt the image. This wedge shape is apparent in the lenslet profiles illustrated in Fig. 7(c). The remaining aberration, after subtracting uniform and linear defocus, is the standard form of field curvature, but greatly reduced from its full-field form.

One item to note when designing corrective optics placed near an image is that care must be taken to design the collector, and to select the positions of the lenslets, such that any caustics are kept away from the lenslet surfaces. This is a consequence of the well-known property that an optical surface placed at a caustic has no ability to correct the bundle of rays near the caustic. The primary consequences are that spherical aberration, and to some extent coma, must be minimized, since these aberrations possess extended caustic surfaces.

As an example we consider a simple *f*/8 Wollaston landscape lens as the collector. All of the systems were modeled with Zemax optical design software [22] using the visible *d* wavelength (*λ*=587 nm), and the freeform lenslet surface parameters are taken to be *x*-*y* polynomials of up to sixth order. For the conventional designs, the merit function was chosen to minimize the spotsize at field angles 0°, 20°, and 30°. For the multiscale designs, in order to suppress spherical aberration, and to a lesser extent coma, the merit function weights the field angles with weights 100, 1, and 0.1 respectively. For the design of the lenslets themselves, the merit function is naturally more complicated. In a freeform surface design, one may easily obtain very small spotsizes at the designed field angles while small deviations from the design angles show large spotsizes. (This is sometimes called “overdesigning” the lens.) To prevent this behavior from degrading the lenslet designs, a Cartesian grid of 3×3 field angles is chosen within the angles corresponding to the given lenslet. In practice, the curved multiscale design (Fig. 9(d)) used a subfield of ±1.75° for each lenslet, so that the field angles used to define the merit function were a grid of ±1.25° centered on the lenslet chief ray (see Fig. 8). Additionally, the optimization iterations were truncated early in order to prevent overdesign at those 9 positions.

In order to compare the multiscale approach with existing conventional design methods, we analyze the average RMS spotsize as a function of field angle for both designs, using either a planar or a curved focal surface. The conventional design trades off defocus with field curvature, blurring the axial field in order to improve the image at higher field angles. Introducing a curved focal surface (here assumed to be a sphere) greatly improves the image spotsizes (see Fig. 10) relative to the flat focal plane, but the system is still unable to achieve diffraction-limited performance primarily due to the presence of spherical aberration.

In the two multiscale systems, the design of the collector lens has been modified by shifting the stop and bending the lens in order to minimize spherical aberration and to reduce coma to a tolerable level. This has the effect of increasing field curvature and astigmatism aberration, but these are easy to correct using the multiscale approach. The planar multiscale design consists of an array of 11×11 lenslets, of diameters 2, 2, 3, 4, 4, and 6 mm (going from the axial field towards higher field angles). The field of view extends to ±32°, above which the lenslets have difficulty correcting the large aberrations. The lenslets are placed 10 mm behind the nominal image plane and re-image the scene with a magnification of *m*=-0.9. (This demagnification increases with field angle, reaching *m*=-0.4 at the highest field angles, due to the inward curving focal surface.) Since this design maps the final image onto a plane, it is compatible for use with either a monolithic detector array or with multiple small arrays fixed behind each lenslet. In both cases the various sub-images will still require post-processing in order to properly register and fuse them, while correcting for varying magnification and distortion as well as vignetting effects.

The curved multiscale design is a more complex but also more powerful approach. Fixing a single small detector array behind each lenslet and allowing the lenslet-detector pair to be placed at any desired position and angle within the focal volume provides extraordinary design flexibility, and an appearance much like an inside-out fly’s eye. One way of interpreting this system layout is the following. If we have a set of small detector arrays, we can place each of them along the medial focal surface as a kind of piecewise linear approximation to the ideal curved detector, as in Fig. 7(d). (A closeup is shown in Fig. 9 for the 32.5° field angle lenslet.) However, this piecewise detector setup would inevitably result in gaps at some field angles and would not be able to correct for other aberrations. If we place ideal lenses behind these subimages, re-imaging with demagnification would allow for recovery of the gaps between the detected subimages but would not produce aberration correction. Using freeform lenslets adds the ability to correct aberrations. In the resulting layout, the lenslets and small detector arrays are in general perpendicular to the corresponding chief rays, but at higher field angles the local field tilt becomes significant (as shown in Fig. 9), and the corresponding detector plane must be tilted in the opposite sense to satisfy the Scheimpflug condition.

The RMS geometric spotsize as a function of field angle for each of the designs is shown in Fig. 10. These spotsizes are calculated by tracing 471 rays from each sampled field angle into the pupil with an 8-ring hexapolar sampling pattern. The RMS radial distance of the ray intercepts at the image plane from their centroid gives the spotsize radius. From Fig. 10, we see that although a curved focal surface greatly improves the conventional planar design, it cannot achieve diffraction-limited performance with such a simple objective lens. The curved multiscale design, however, is able to maintain a spotsize near the diffraction limit until it begins to slowly increase at higher field angles (>25°).
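The spotsize computation described above can be sketched as follows. The ray trace itself is replaced by a toy defocus mapping, and the hexapolar pattern used here is the common one of an axial ray plus 6*k* rays on the *k*th ring (a denser sampling, as in the 471-ray trace above, changes only the point count):

```python
import math

def hexapolar_pupil(n_rings):
    """Unit-pupil sample points: one axial point plus n_rings rings, the
    k-th ring holding 6*k equally spaced points at radius k/n_rings."""
    pts = [(0.0, 0.0)]
    for k in range(1, n_rings + 1):
        r = k / n_rings
        for j in range(6 * k):
            th = 2.0 * math.pi * j / (6 * k)
            pts.append((r * math.cos(th), r * math.sin(th)))
    return pts

def rms_spot_radius(intercepts):
    """RMS radial distance of image-plane ray intercepts from their centroid."""
    n = len(intercepts)
    cx = sum(x for x, _ in intercepts) / n
    cy = sum(y for _, y in intercepts) / n
    return math.sqrt(sum((x - cx)**2 + (y - cy)**2 for x, y in intercepts) / n)

# Toy "ray trace": pure defocus maps pupil point (px, py) to blur c*(px, py).
c = 5e-6
spots = [(c * px, c * py) for px, py in hexapolar_pupil(8)]
print(len(spots), rms_spot_radius(spots))
```

In a real evaluation the defocus mapping is replaced by intercepts from the actual lens prescription; the RMS statistic is unchanged.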

The number *N* of resolvable spots for each design is estimated by dividing the field into 0.25° increments; a Zemax macro was used to locate the image height corresponding to each angular position. From each of these we form a corresponding annulus on the image plane, whose area divided by the corresponding average spot area gives a rough estimate of the number of resolvable points. For the planar designs, the area of the diffraction-limited Airy disk is approximately *π*(1.22*λ f*_{#})^{2}/cos^{4}*θ* [23] for image-space field angle *θ*, whereas for the curved designs cos^{3}*θ* appears in the denominator instead, since these designs lack the extra image-plane obliquity factor. Figure 11 shows the result of these calculations at each field angle. Summing the resolvable spots in each annulus, with the combined result obtained by summing the geometric and diffraction spotsizes in quadrature, gives the total number of resolvable elements for each system:

Resolvable elements (megapixels):

| Design type | Geometric | Diff-limit | Combined |
|---|---|---|---|
| conventional, planar | 0.5 | 44.4 | 0.5 |
| conventional, spherical | 8.8 | 40.3 | 8.6 |
| multiscale, locally linear | 8.7 | 39.9 | 8.5 |
| multiscale, planar | 11.4 | 42.3 | 9.6 |
| multiscale, curved | 24.0 | 39.7 | 19.9 |

For an 8 mm aperture over a ±36° FOV, the theoretical limit on the number of resolvable elements (using Eqn. 2) is 44 Mpix, so the curved multiscale design shown here corresponds to *γ*=0.45. Note that the diffraction-limited performances of the designs vary due to differences in image size and geometry. The planar multiscale design shows more erratic spotsize behavior versus field angle, largely because the chosen collector lens is ill-suited to the design; a collector lens that is more nearly telecentric in image space would achieve better performance. While the curved multiscale design outperforms the conventional planar design by a factor of 40, and the conventional design with a spherical focal surface by a factor of 2, it is a much more complicated optical arrangement. A more complex objective lens can of course greatly improve the performance of the conventional design, but the aim of the multiscale approach is to demonstrate an alternative means of avoiding the complexity limit in design.

The Zemax design files giving prescriptions for the lens designs discussed here have been posted at http://www.disp.duke.edu/projects/multiscale. Interested readers are encouraged to download the files and use them freely.

## 4. Conclusion

Multiscale lens design presents a different way of thinking about how optical design can be used for image formation. Current methods use optical elements that operate only on the full optical field, resulting in the familiar trade-offs between field of view and f-number. Multiscale design processes the field-dependent aberrations locally rather than globally, thereby easing design constraints. Moreover, it allows much of the image formation process to be done by lenses operating at their ideal scale, on the order of millimeters in diameter, rather than at suboptimal scales. Not only can this allow for wider-FOV imagers or improved resolution in wide-field systems, but it can also reduce system volume, reduce cost, ease detector mosaicking, and allow for 3D imaging.

The resulting instrument design can be considered a fusion of conventional and multiaperture strategies, the one serving as a front end and the other as a back end. This allows one to take the advantages of each type — the light collection efficiency and resolving power of the conventional lens and the image processing capability of the multiaperture camera — without their deficits. The multiscale combination of the two provides a design methodology that can achieve system information capacities scaling linearly with aperture area.

The use of nonuniform lenslet arrays, especially ones using aspheric or freeform surfaces, brings up the question of manufacturability. The past two decades have seen huge advances in the state of the art for micro- and meso-optical lenses; thus the limitation appears to be less one of manufacturing than of the various optical design issues arising from integrating micro-, meso-, and macro-optical elements into a single system. For example, if the required surface sag of the lenslets is too large, we can choose a higher-index glass, split the lens and effectively double the available sag by using doublet lenslets, or choose a multi-stage processor. Thus, for large-sag, high-curvature shapes, optical design issues such as surface reflection, polarization dependence, and scatter are likely to pose more of a problem than manufacturability.

This technique raises a host of questions that we have only briefly mentioned and are currently exploring. These include robust design techniques for determining lenslet shapes and maximizing lenslet placement tolerance, chromatic effects on the multiscale approach, the constraints on collector design, and the design of lenslet baffles to prevent crosstalk between subimages. We are also developing an experimental prototype to illustrate the technique and to help refine our ideas.

## Acknowledgments

The authors thank Dr. Scott McCain of Applied Quantum Technologies for fruitful discussions on the multiscale design approach. This research was supported in part by a DARPA MTO seedling funded under AFOSR contract FA9550-06-1-0230.

## References and links

**1. **G. T. di Francia, “Degrees of freedom of an image,” J. Opt. Soc. Am. **59**, 799–804 (1969). [CrossRef]

**2. **J. Kopf, M. Uyttendaele, O. Deussen, and M. F. Cohen, “Capturing and viewing gigapixel images,” ACM Trans. Graphics **26**, 93 (2007). [CrossRef]

**3. **A. W. Lohmann, “Scaling laws for lens systems,” Appl. Opt. **28**, 4996–4998 (1989). [CrossRef] [PubMed]

**4. **T. Matsuyama, Y. Ohmura, and D. M. Williamson, “The lithographic lens: its history and evolution,” in *Optical Microlithography XIX*, D. G. Flagello, ed., vol. 6154 of *Proc. SPIE* (2006).

**5. **R. Völkel, M. Eisner, and K. J. Weible, “Miniaturized imaging systems,” Microelectron. Eng. **67–68**, 461–472 (2003).

**6. **Y. Dagan, “Wafer-level optics enables low cost camera phones,” in *Integrated Optics: Devices, Materials, and Technologies XIII*, J.-E. Broquin and C. M. Greiner, eds., vol. 7218 of *Proc. SPIE* (2009).

**7. **J. Tanida, T. Kumagai, K. Yamada, S. Miyatake, K. Ishida, T. Morimoto, N. Kondou, D. Miyazaki, and Y. Ichioka, “Thin observation module by bound optics (TOMBO): concept and experimental verification,” Appl. Opt. **40**, 1806–1813 (2001). [CrossRef]

**8. **M. Shankar, R. Willett, N. Pitsianis, T. Schulz, R. Gibbons, R. T. Kolste, J. Carriere, C. Chen, D. Prather, and D. Brady, “Thin infrared imaging systems through multichannel sampling,” Appl. Opt. **47**, B1–B10 (2008). [CrossRef] [PubMed]

**9. **T. Mirani, D. Rajan, M. P. Christensen, S. C. Douglas, and S. L. Wood, “Computational imaging systems: joint design and end-to-end optimality,” Appl. Opt. **47**, B86–B103 (2008). [CrossRef] [PubMed]

**10. **K. Choi and T. J. Schulz, “Signal-processing approaches for image-resolution restoration for TOMBO imagery,” Appl. Opt. **47**, B104–B116 (2008). [CrossRef] [PubMed]

**11. **A. V. Kanaev, D. A. Scribner, J. R. Ackerman, and E. F. Fleet, “Analysis and application of multiframe superresolution processing for conventional imaging systems and lenslet arrays,” Appl. Opt. **46**, 4320–4328 (2007). [CrossRef] [PubMed]

**12. **A. D. Portnoy, N. P. Pitsianis, X. Sun, and D. J. Brady, “Multichannel sampling schemes for optical imaging systems,” Appl. Opt. **47**, B76–B85 (2008). [CrossRef] [PubMed]

**13. **R. Horisaki, S. Irie, Y. Ogura, and J. Tanida, “Three-dimensional information acquisition using a compound imaging system,” Opt. Rev. **14**, 347–350 (2007). [CrossRef]

**14. **E. H. Adelson and J. Y. Wang, “Single lens stereo with a plenoptic camera,” IEEE Trans. Pattern Anal. Mach. Intell. **14**, 99–106 (1992). [CrossRef]

**15. **M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light field microscopy,” ACM Trans. Graphics **25**, 924–934 (2006). [CrossRef]

**16. **H.-J. Lee, D.-H. Shin, H. Yoo, J.-J. Lee, and E.-S. Kim, “Computational integral imaging reconstruction scheme of far 3D objects by additional use of an imaging lens,” Opt. Commun. **281**, 2026–2032 (2007). [CrossRef]

**17. **J. Duparré, P. Schreiber, A. Matthes, E. Pshenay-Severin, A. Bräuer, A. Tünnermann, R. Völkel, M. Eisner, and T. Scharf, “Microoptical telescope compound eye,” Opt. Express **13**, 889–903 (2005). [CrossRef] [PubMed]

**18. **J. A. Cox and B. S. Fritz, “Variable focal length micro lens array field curvature corrector,” (2003). US Patent 6556349.

**19. **R. Bacon, P. Y. Copin, G. Monnet, B. W. Miller, J. R. Allington-Smith, M. Bureau, C. M. Carollo, R. L. Davies, E. Emsellem, H. Kuntschner, R. F. Peletier, E. K. Verolme, and P. T. de Zeeuw, “The SAURON project—I. The panoramic integral-field spectrograph,” Monthly Notices of the Royal Astronomical Society **326**, 23–35 (2001). [CrossRef]

**20. **URL http://www.gmto.org/codrfolder/GMT-ID-01467-Chapter 6 Optics.pdf/.

**21. **URL http://www2.keck.hawaii.edu/inst/hires/.

**22. **URL http://www.zemax.com.

**23. **M. V. R. K. Murty, “On the theoretical limit of resolution,” J. Opt. Soc. Am. **47**, 667–668 (1957). [CrossRef]