## Abstract

Monocentric lenses have recently changed from primarily a historic curiosity to a potential solution for panoramic high-resolution imagers, where the spherical image surface is directly detected by curved image sensors or optically transferred onto multiple conventional flat focal planes. We compare imaging and waveguide-based transfer of the spherical image surface formed by the monocentric lens onto planar image sensors, showing that both approaches can make the system input aperture and resolution substantially independent of the input angle. We present aberration analysis that demonstrates that wide-field monocentric lenses can be focused by purely axial translation and describe a systematic design process to identify the best designs for two-glass symmetric monocentric lenses. Finally, we use this approach to design an $F/1.7$, 12 mm focal length imager with an up to 160° field of view and show that it compares favorably in size and performance to conventional wide-angle imagers.

© 2012 Optical Society of America

## Errata

Igor Stamenov, Ilya P. Agurok, and Joseph E. Ford, "Optimization of two-glass monocentric lenses for compact panoramic imagers: general aberration analysis and specific designs: erratum," Appl. Opt.**52**, 5348-5349 (2013)

https://www.osapublishing.org/ao/abstract.cfm?uri=ao-52-22-5348

## 1. Introduction: Wide-Angle Monocentric Lens Imaging

Imagers that require the combination of wide field of view, high angular resolution, and large light collection present a difficult challenge in optical system design. Geometric lens aberrations increase with aperture diameter, numerical aperture (NA), and field of view and scale linearly with focal length. This means that for a sufficiently short focal length, it is possible to find near-diffraction-limited wide-angle lens designs, including lenses mass-produced for cell phone imagers. However, obtaining high angular resolution (for a fixed sensor pixel pitch) requires a long focal length for magnification, as well as a large NA to maintain resolution and image brightness. This combination is difficult to provide over a wide angle range. Conventional lens designs for longer focal length wide-angle lenses represent a tradeoff between competing factors of light collection, volume, and angular resolution. For example, conventional reverse-telephoto and “fisheye” lenses provide extremely limited light collection compared to their large clear aperture and overall volume [1]. However, the problem goes beyond the lens itself. Solving this lens design only leads to a secondary design constraint, in that the total resolution of such wide-angle lenses can easily exceed 100 megapixels. This is beyond the current spatial resolution and communication bandwidth of a single cost-effective sensor [2], especially for video output at $30\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{frames}/\mathrm{s}$ or more.

One early solution to wide-angle imaging was “monocentric” lenses [3], using only hemispherical or spherical optical surfaces that share a single center of curvature. This symmetry yields zero coma or astigmatism over a hemispherical image surface, and on that surface provides a field of view limited only by vignetting from the central aperture stop. The challenge of using a curved image surface has limited the practical application of this type of lens, but there has been a resurgence of interest in monocentric lens imaging. In 2009, Krishnan and Nayar proposed an omnidirectional imager using a spherical ball lens contained within a spherical detector shell [4]. In 2010 Ford and Tremblay [5] proposed using a monocentric lens as the objective in a multiscale imager system [6], where overlapping regions of the spherical image surface are relayed onto conventional image sensors, and where the mosaic of subimages can be digitally processed to form a single aggregate image. Cossairt and Nayar demonstrated a closely related configuration using a glass ball and single-element relay lenses, recording and digitally combining overlapping images from five adjacent image sensors [7]. And recently a gigapixel monocentric multiscale imager has been demonstrated that integrates a two-dimensional mosaic of subimages [8,9], using the optical layout shown in Fig. 1(a) [10].

Monocentric lenses and spherical image formation provide favorable scaling to long focal lengths and have been shown capable of a two orders of magnitude higher space–bandwidth product (number of resolvable spots) than conventional flat field systems of the same physical volume [11]. In early monocentric lens cameras, the usable field of view was limited by the vignetting and diffraction from the central lens aperture, as well as the ability of recording media to conform to a spherical image surface. However, the system aperture stop need not be located in the monocentric lens. The detailed design of the multiscale monocentric lens [10] shows that locating the aperture stop within the secondary (relay) imagers enables uniform relative illumination and resolution over the full field. This design maintains $F/2.4$ light collection with near-diffraction-limited resolution over a 120° field of view. With 1.4 μm pitch sensors, this yields an aggregate resolution of 2.4 gigapixels.

Such wide and uniform fields can also be achieved via the conceptually simpler “waveguide” approach shown in Fig. 1(b). Instead of relay optics, the spherical image surface is transferred to conventional planar image sensors using one or more multimode fiber bundles [12]. Fused fiber faceplates are commercially available with high spatial resolutions and light collection (2.5 μm between fiber cores, and NA of 1) and can be fabricated as straight or tapered [13]. Straight fiber bundles can project sufficiently far to allow space for packaging of CMOS image sensors, while tapered fiber bundles can provide the $3:1$ demagnification used in the relay optics in Fig. 1(a). Fiber bundles introduce artifacts from multiple sampling of the image [14], which can be mitigated but not eliminated through postdetection image processing [15]. In addition, the edges between adjacent fiber bundles can introduce “seams” in the collected image, whose width depends on the accuracy of fiber bundle fabrication. However, waveguide transfer can reduce the overall physical footprint and significantly increase light collection: in the multiscale optics structure, field overlap at the center of three relay optics must be divided between three apertures, while the waveguide can transfer all the light energy from a given field angle to a single sensor.

As shown in Fig. 2, in both systems stray light can be controlled using a physical aperture stop at the center of the monocentric lens [Fig. 2(a)] or through a “virtual stop” achieved by limiting light transmission in the image transfer optics [Fig. 2(b)]. In the case of relay imaging, this is done using a physical aperture stop internal to the relay optics. In the case of fiber transfer, this can be done by restricting the NA of the fiber bundles. Straight fiber bundles with a lower index difference between the core and cladding glasses are commercially available with an NA of 0.28. Alternately, high-index-contrast bundles with a spatial taper to a smaller output face can provide a controlled NA. Such bundles have the original output NA, but conservation of étendue reduces the input NA of a tapered fiber bundle by approximately the ratio of input to output diameter [16]. For example, a $3:1$ taper of input to output width with a 1.84 core and 1.48 cladding index yields approximately 0.3 input NA.

The field of view of monocentric “virtual stop” imagers can be extraordinarily wide. A physical aperture at the center of the objective lens is projected onto the field angle. At 60° incidence (120° field of view), the aperture is elliptical and reduced in width by 50%, with a corresponding decrease in light transmission and diffraction-limited resolution. As Fig. 2(b) shows, however, moving the aperture stop to the light transfer enables uniform illumination and resolution over a 160° field of view, where at extreme angles the input light illuminates the back surface of the monocentric objective. The lens as drawn in Fig. 2(b) indicates that the image transfer optics perform all stray light control, and does not show nonsequential paths from surface reflections. In practice, an oversized physical aperture or other light baffles can be used to block most of the stray light, while the image transfer optics provide the final, angle-independent apodization. While such practical optomechanical packaging constraints can limit the practical field of view, the potential for performance improvement over a conventional “fisheye” lens is clear.

Despite the structural constraints, even a simple two-glass monocentric objective lens can provide high angular resolution over the spherical image surface. With waveguide image transfer, the overall system resolution is directly limited by the objective lens resolution. In multiscale imagers, geometric aberrations in the objective can be corrected by fabricating objective-specific relay optics. However, compensating for large aberrations in the primary lens tends to increase the complexity and precision of fabrication of the relay optics. Since each multiscale imager requires many sets of relay optics (221 sets in the 120° field, 2.4 gigapixel imager design), minimizing relay optic complexity and fabrication tolerance can significantly reduce system cost. So for both structures, it is useful to optimize the objective lens resolution.

T. Sutton offered the first design of the monocentric lens in 1859 [3], with a wide field of view but high $f$-number (about 30) due to the lack of correction of spherical and chromatic aberrations. To solve this problem, in 1942, J. G. Baker proposed a glass combination with a high-index flint glass for the outer shell and low-index crown for the internal ball lens [3], resulting in a monocentric but not front-to-back symmetric lens structure with aberrations well corrected for a moderately high $f$-number of 3.5. The design of a compact and high-resolution endoscope lens with a two-glass symmetrical monocentric lens was published by Waidelich in 1965 [12]. An endoscopic lens with a 2 mm focal length and $f$-number of 1.7 has a diffraction-limited image quality, but scaling in focal length to $>1\text{\hspace{0.17em}}\text{\hspace{0.17em}}\mathrm{cm}$ for photographic applications with the focus about 12 mm required the increase of the $f$-number to about 3.5. More recently, a two-glass monocentric lens was used as the projection objective in a Kodak autostereoscopic display [17,18]. In the Waidelich and Kodak designs, like in the Baker lens, the outer shell meniscus was made from a flint glass and the internal ball lens from a crown glass with a moderate difference in Abbe number. As will be shown later, this was nearly the ideal combination. The outer meniscuses had a higher ${n}_{d}$, and the difference in refraction index between outer and internal ball lenses did not exceed 0.08. This low index difference probably was inherited from the Cooke triplet design [1]. We will show that differences in refraction indices larger than 0.2 are needed to approach diffraction-limited performance in lower $f$-number photographic lenses. Nevertheless, these designs have demonstrated that purely monocentric lenses can achieve high-quality achromatic imaging.

Optical systems are now typically designed by computer numeric optimization in commercial software like Zemax and CodeV. The highly constrained monocentric lenses seem well suited to a global optimization search to identify the best glass combination and radii. However, we have found that “blind” optimization of monocentric lenses, even a simple two-glass lens, often overlooks the best solutions. This is especially true for large-NA designs, where the ray angles are steep and the optimization space has multiple deep local minima. There are also a large number of glasses to consider. The available glass catalog was recently increased by the publication of a number of new glasses from Hoya to 559 glasses [19]. The more advanced optimization algorithms take significant processing time. Even for a two-glass lens it is impractical to use them to search all 312,000 potential combinations, so the best design may be overlooked. Fortunately, the symmetry of monocentric lenses permits a relatively straightforward mathematical analysis of geometrical optic aberrations, as well as providing some degree of intuitive understanding of this overall design space. Combining “old school” analysis with computer sorting of glass candidates can enable a global optimization for any specific focal length and spectral bandwidth desired.

In this paper, we provide a detailed analysis for the design of two-glass monocentric lenses. We begin with the first-order paraxial and Seidel third-order analysis of the focus of wide-field monocentric imagers, showing that despite the highly curved focal surface, axial translation of monocentric lenses can maintain focus of a planar object field from infinite to close conjugates. We will optimize these lenses operating with a larger and so more general “virtual” stop, because introducing a comparably sized physical stop will only tend to create vignetting at the field points and cut off some of the most highly aberrated rays. We continue by demonstrating the systematic optimization of these lenses by the following process.

For a specified focal length, NA, and wavelength range:

- (1) Compute and minimize third-order Seidel spherical and longitudinal chromatism aberrations to find approximate surface radii for a valid glass combination.
- (2) Optimize lens prescriptions via exact ray tracing of multiple ray heights for the central wavelength.
- (3) Calculate the polychromatic mean square wavefront deformation, and generate a ranked list of all lens candidates.
- (4) Confirm ranking order by comparing polychromatic diffraction modulation transfer function (MTF) curves.

To verify this method, we redesign the objective from our 2.4 gigapixel multiscale imager and find that the global optimization process yields the original design (and fabricated) lens, as well as additional candidates with improved internal image surface resolution. We then apply the design methodology to a new system, an ultracompact fiber-coupled imager with a 12 mm focal length and uniform resolution and light collection over a more than 120° field of view. We show that this design compares favorably to a more conventional imager using a “fisheye” wide-field lens and conclude with observations on future directions and challenges for this type of imager.

## 2. Theoretical Analysis of Monocentric Lenses

#### A. Focus of Monocentric Lenses

Photographic lenses are normally focused by moving them closer to or further from the image plane, but this appears impractical for the deep spherical image surface in a wide-field monocentric lens. For the 70 mm focal length objective of Fig. 1, a 1 mm axial translation to focus on an object at a 5 m range brings the image surface only 0.5 mm closer to the objective for an object at a 60° field angle, and 0.17 mm closer for an object at an 80° field. This seems to imply that the lens can only focus in one direction at time and needs three-dimensional translation to do so. In the monocentric multiscale imager, the primary lens position is fixed, and the secondary imagers are used for independent focus on each region of the scene [10,20]. However, introducing an optomechanical focus mechanism for each secondary imager constrains the lens design and adds significantly to the overall system bulk and cost. More fundamentally, the monocentric waveguide imager shown in Fig. 2(b) has no secondary imagers and cannot be focused in this way, which initially appears a major disadvantage. In fact, however, axial translation of monocentric lenses maintains focus on a planar object across the full field of view.

Consider the geometry of an image formation in the monocentric lens structure shown in Fig. 3. For a focal length $f$ with refocusing from an object at infinity to the closer on-axis object at distance $d$, assuming $d\gg f$, the image surface shift $\mathrm{\Delta}x$ is [1]

For the off-axis field point $B$, having a field angle and the angle of principal ray ${\beta}_{1}$, the distance OB to the object is $d/\mathrm{cos}({\beta}_{1})$ and the refocusing image point shift $\mathrm{\Delta}{x}^{\prime}$ (${B}_{\text{inf}}{B}^{\prime}$) will beThe most general analytic tool for lens aberration analysis and correction is classical third-order Seidel theory [21,22]. In Seidel theory, astigmatism and image curvature are bound with the coefficients $C$ and $D$. Referring to the variables defined in Fig. 3, the coefficients $C$ and $D$ can be expressed as [21–23]

#### B. Design Optimization of Monocentric Lenses

Imaging optics are conventionally designed in two steps [1,21,22,24]. First, monochromatic design at the center wavelength achieves a sufficient level of aberration correction. The second step is to correct chromatic aberration, usually by splitting some key components to achieve achromatic power: a single glass material is replaced with two glasses of the same refraction index at the center wavelength, but different dispersions. However, this process cannot easily be applied to the symmetric two-glass monocentric lens shown in Fig. 2. For a given glass pair and focal length, the lens has only four prescription parameters to be optimized (two glasses and two radii), and the radii are constrained with the monocentric symmetry and desired focal length. From this perspective, monocentric lenses resemble conventional spherical doublets. An approach for global optimization of such lenses was proposed in [26], and our approach for a global search for optimal monocentric lenses, considering all glass combinations, is closely related. As in [26], we want to analyze all valid glass combinations. However, unlike aplanatic doublets [27], there is not an optimal analytical solution for monocentric lenses, so a modified process is needed. In addition, analysis in [26] is restricted to fifth-order aberration estimates [28], while we want to extend the design process to include full analytic raytracing and MTF calculation.

We define a systematic search process beginning with a specified focal length, $f$-number, and wavelength range. Optimization of each glass combination was done in three steps. The first step is to determine the solution (if it exists) to minimize third-order Seidel geometrical and chromatic aberrations [21–23]. A monocentric lens operating in the “fiber” stop mode has only two primary third-order aberrations—spherical aberration (Seidel wave coefficient ${W}_{040}$) and longitudinal chromatism ${W}_{020}$, which is defocus between blue and red paraxial foci. The sum of the absolute values of the third-order coefficients provides a good approach for a first-pass merit (cost) function, and an analytical solution for third-order coefficients allows this cost function to be quickly calculated.

The monocentric lens shown in Fig. 4 is defined with six variables, the two radii ${r}_{1}$ and ${r}_{2}$, and the index and Abbe number for each of two glasses: the outer glass ${n}_{2}$ and ${v}_{2}$, and the inner glass ${n}_{3}$ and ${v}_{3}$. Ray tracing of any collimated ray can use the Abbe invariant:

orThe Seidel spherical aberration coefficient $B$, according to [21–23], is

The starting position for ray tracing is ${h}_{1}=f\times \mathrm{NA}$ and ${\alpha}_{1}=0$. Consequently, applying the Abbe invariant for each surface, we can substitute ray angles and heights with the system constructional parameters. Thus, from the Abbe invariant for the first surface we get

The result is an algebraic expression for the third-order aberrations of the solution—if any—for a given glass combination. In our examples, we performed this calculation for each of the 198,000 combinations of the 446 glasses that were available as of April 2012 in the combined Schott, Ohara, Hikari, and Hoya glass catalogs [29–32]. This yields a list of qualified candidates (those forming an image on or outside of the outer glass element), ranked by third-order aberrations. However, this ranking is insufficiently accurate for a fully optimized lens.

Because of high NA, monocentric systems tend to have strong fifth- and seventh-order aberrations. That makes third-order analysis only a first approximation toward a good design. Fortunately, the two-glass monocentric lens system has an exact analytical ray trace solution in compact form, and the more accurate values of the lens prescription parameters can be found from a fast exact ray tracing of several ray heights.

The variables for ray tracing of rays with arbitrary input height $h$ are shown in Fig. 5, where ${\varphi}_{i}$ are the angles between the ray and the surface normal. We can write

From Snell’s law, we have Applying the sine law for the triangle ABO yieldsWith such steep ray angles, $Q$ is a strongly varying function, which is why a fast automated optimization can overlook the optimal radius values for a given glass pair. Figure 6 shows the dependence of $Q$ on radius ${r}_{1}$ for a representative case, one of the best glass pairs (S-LAH79 and S-LAH59) for the 12 mm focal length lens described in Section 3. The monochromatic image quality criterion $Q$ has several minima over the possible range for the ${r}_{1}$ radius. The preliminary solution for the ${r}_{1}$ radius for this glass pair obtained from the third-order aberration minimization was 8.92 mm, close to the global minimum solution for $Q$ found at 9.05 mm. This shows how the first optimization step provided a good starting point for the ${r}_{1}$ radius, avoiding the time-consuming investigation of low-quality solutions in the multi-extremum problem illustrated by Fig. 6.

Optimization with the criterion in Eq. (31) gives the optimal solution by means of minimum geometrical aberrations, but it is not sufficient to provide reliable sorting of monocentric lens solutions. The system polychromatic mean square wavefront deformation is better correlated with the Strehl ratio and other diffraction image quality criteria [21]. So, in the third step, the wavefront deformation is calculated and expanded into Zernike polynomials. The polychromatic mean square wavefront deformation is calculated and used as a criterion for creating the ranked list of monocentric lens solutions by means of their quality. In the monocentric lens geometry, the aperture stop is located at the center of the lens, where the entrance and exit pupils coincide as well. This is shown in Fig. 7.

For an arbitrary ray, the lateral aberrations $\mathrm{\Delta}Y$ are bound to the wavefront deformation as

where $\lambda $ is the wavelength, $W$ is the wavefront deformation expressed in wavelengths, $\rho $ is the reduced ray pupil coordinate that varies from zero at the pupil center to unity at the edge; $A$ is defined as the back NA, and $\mathrm{\Delta}Y$ as the lateral aberration in mm [21,33]. From Fig. 7 we have## 3. Specific Monocentric Cases

#### A. AWARE2 Monocentric Lens Analysis

The goal of this analysis is a monocentric lens optimization process for finding the best possible candidates for fabrication. These candidates are then subject to other material constraints involved in the final selection of a lens design, including mechanical aspects such as differential thermal expansion and environmental robustness, as well as practical aspects like availability and cost. The process described above appears to provide a comprehensive list of candidate designs. However, the best test of a lens design process is to compare the results to those generated by the normal process of software-based lens design. To do this, we used the constraints of a monocentric objective that was designed in the DARPA AWARE program—specifically, the AWARE-2 objective lens [20], which was designed by a conventional software optimization process, then fabricated, tested, and integrated into the AWARE2 imager. The lens has a 70 mm focal length and image space $f$-number of 3.5, using a fused silica core and an S-NBH8 glass outer shell. The optical prescription is shown in Table 1, and the layout in Fig. 8(a).

The global optimization method identified this candidate lens system, as well as multiple alternative designs (glass combinations) that provide a similar physical volume and improved MTF. The optical prescription of the top-ranked solution is shown in Table 2, and the lens layout is shown in Fig. 8(b).

The new candidate appears physically very similar to the fabricated lens. However, the MTF and ray aberrations for the manufactured prototype and the top design solution are compared in Fig. 9. The new candidate lens is significantly closer to a diffraction-limited resolution and has lower chromatism and polychromatic mean square wavefront deformation than the actual fabricated lens. It is important to recognize that the resolution of the AWARE-2 imager system includes the microcamera relay optics. The relay optics corrected for aberrations in the primary, as well as providing flattening of the relayed image field onto the planar image sensors. In fact, the overall AWARE-2 system optics design [10] was diffraction limited. However, conducting the systematic design process on a relatively long focal length system, where geometrical aberrations influence the resolution, served as a successful test of the design methodology.

#### B. SCENICC $\mathsf{f}$-Number 1.71 12 mm Focal Length Lenses

Our specific goal was to design an imager with an at least 120° field of view and resolution and sensitivity comparable to the human eye (1 arc min), resulting in about 100 megapixels total resolution. Assuming we use the waveguide configuration of Fig. 1(b) with 2.5 μm pitch, $\mathrm{NA}=1$ fiber bundles, we defined the goal of a 12 mm focal length lens with diffraction-limited operation in the photographic spectral band (0.47–0.65 μm) and an NA of 0.29 ($f$-number 1.71). We designed the monocentric lens assuming the fiber stop operation mode [Fig. 2(b)], as that is the more demanding design: the physical aperture stop and vignetting of the field beams will increase diffraction at wider field angles but can only reduce geometrical optic aberrations.

The result of the design process was an initial evaluation of 198,000 glass pair systems, of which some 56,000 candidates passed the initial evaluation and were optimized using exact ray tracing to generate the final ranked list. The entire process took only 15 min to generate using a single-threaded MATLAB optimization code running on a 2.2 GHz i7 Intel processor. Part of this list is shown in Table 3. Because different optical glass manufacturers produce similar glasses, the solutions have been combined in families, and the table shows the first seven of these families. We show the radii for a primary glass and list several substitution glasses in parentheses. The designs with the substitutions glasses result in small changes in radii but substantially the same performance. The table shows the computed polychromatic mean square wavefront deformation, the criterion for the analytical global search, and the value for the MTF at $200\text{\hspace{0.17em}}\text{\hspace{0.17em}}\mathrm{lp}/\mathrm{mm}$ (Nyquist sampling) found following Zemax optimization of the same candidate glasses. If the design process works, we would expect that these metrics would be strongly correlated. Figure 10 illustrates this by showing the correlation for representative samples of the top 200 solutions, and in fact we find the identical sequence for all 200 of the top candidates.

The best performance monocentric lens solution (family number 1) demonstrates near-diffraction-limited resolution over the photographic visible operational waveband (470–650 nm). It uses S-LAH79 for the outer shell glass and K-LASFN9 for the inner glass. To provide a central aperture stop, it is necessary to fabricate the center lens as two hemispherical elements. Because optical glass K-LASFN9 has a high refractive index of 1.81, the interface between the two hemispherical elements can cause reflections at large incidence angles unless the interface is index matched, and the index of optical cements is limited. For example, the Norland UV-cure epoxy NOA164 has an index of 1.64. This results in a critical angle of 65° and a maximum achievable field of view of $\pm 55\xb0$. For this glass system, it is preferable to fabricate the center lens as a single spherical element and operate the system in the “virtual iris” mode, where the system can provide a maximum field of view of $\pm 78.5\xb0$. The optical layout, MTF, and ray fan diagrams of the top solution operating in “virtual” stop mode are shown in Fig. 11.

Table 4 shows the detailed optical prescription of the monocentric lens example. The lens operation is shown in three configurations. The distance to the object plane is changed from infinity to 1 m and then to 0.5 m. The back focal distance (thickness 5 in Table 4) is changed as well. The back focal distance for the object at infinity is 2.92088 mm; for the 1 m object distance, 3.06156 mm; and for the 0.5 m object distance, 3.20050 mm.

The MTF for two extreme object positions is shown in Fig. 12. After refocusing at a closer object distance, the image surface loses concentricity with the lens but still retains the original radius, as was shown in Section 2.A. The loss of concentricity enables conjugation between planar objects at a finite distance to the spherical image surface, and so implies the ability to focus on points anywhere in the object space. A concentric system designed at a specific finite object radius can be conjugated only with a specific concentric spherical image surface, and focusing a concentric lens system on spherical objects at different distances would require a change in the image surface radius. At closer object distances, angle ${\alpha}_{1}$ (Fig. 3) increases from zero to some small finite value, which changes the spherical aberration [Eq. (10)] and MTF slightly away from optimum value. As predicted by third-order aberration analysis, we maintain image quality close to the original infinite conjugate design during the refocusing operation, over a wide range of field angles. ZEMAX design files modeling this lens are available for download [35]. Note that adequate simulation of the “virtual stop” operational mode in ZEMAX requires the rotation of the aperture stop at off-axis field angles, as well as rotation on this angle image surface around its center of curvature. In this file, a distinct configuration has been defined for each field angle.

The members of the third family of Table 3 have a glass with a significantly lower refraction index of the central glass ball, 1.69, which can be index matched with standard UV cure epoxies. This enables the lens to be fabricated as two halves and assembled with a physical aperture stop at the center. While members of this family have a slightly lower image quality performance, they can fully operate over a $\pm 65\xb0$ field of view in both “virtual stop” and “aperture stop” modes. The optical layout, MTF, and ray fan diagrams of the top member of the third family operating in the “aperture stop” mode for the object located at infinity are shown in Fig. 13(a). The ray fan graphs are shown for the axial and 60° field points at the image surface. The scale of ray fans is $\pm 10\text{\hspace{0.17em}}\text{\hspace{0.17em}}\mathrm{\mu m}$. The design has been modified to include a 10 μm thick layer of NOA164 glue between the two glasses and in the central plane. Simplified ZEMAX files of this lens without optical cement are also available.[36].

The optical layout and MTF of the top member of the third family operating in the “virtual stop” mode are shown in Fig. 13(b). The “virtual stop” mode has uniformly high image quality over the entire field. The “aperture stop” mode suffers a drop in performance at the edge of the field due to pupil vignetting and aberrations in the optical cement layer. This design is useful, however, as operation in the “aperture stop” mode simplifies the requirements for image transfer and detection. In this case the input aperture of the fiber bundle can exceed the back aperture of the optics, as the physical aperture stop provides all the stray light filtering needed, and the image transfer can be done with relay optics or with standard Schott fiber bundles with $\mathrm{NA}=1$ and 2.5 μm pitch.
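As a back-of-envelope check (not from the paper; it assumes a mid-visible wavelength of 0.55 μm and the standard incoherent cutoff formula), the 2.5 μm bundle pitch can be compared with the diffraction limit of the $F/1.7$ optics:

```python
def nyquist_cyc_per_mm(pitch_um: float) -> float:
    """Nyquist limit of a fiber bundle sampling at the given core pitch."""
    return 1.0 / (2.0 * pitch_um * 1e-3)

def diffraction_cutoff_cyc_per_mm(f_number: float,
                                  wavelength_um: float = 0.55) -> float:
    """Incoherent diffraction cutoff 1/(lambda * F#) of an
    aberration-free lens, in cycles/mm."""
    return 1.0 / (wavelength_um * 1e-3 * f_number)

print(nyquist_cyc_per_mm(2.5))                    # 200.0 cycles/mm
print(round(diffraction_cutoff_cyc_per_mm(1.7)))  # 1070 cycles/mm
```

On these rough numbers, the bundle pitch rather than diffraction sets the sampling limit of the transferred image.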

#### C. Comparison with Conventional Wide-Angle Lenses

The architecture of the monocentric lens is intrinsically compact: the principal (central) rays of all field angles are orthogonal to the front (powered) optical surface and are directly focused to an image surface that is always substantially perpendicular to the incident light. Conventional wide-field-of-view imagers require a lens that conveys the wide-field input to a contiguous planar image surface. Extreme wide-angle “fisheye” lenses use a two-stage architecture in the more general class of reverse-telephoto lenses [1,37], where the back focal distance is greater than the focal length. The front optic is a wide-aperture negative lens that ensures acceptance of at least a fraction of the light incident over a wide angle range. The negative power reduces the divergence of the principal rays of the input beams, so the following positive focusing component operates with a significantly reduced field of view, although it must provide sufficient optical power to compensate for the negative first element. This reverse-telephoto architecture results in a physically bulky optic. Figure 14 shows two conventional wide-field lenses on the same scale as the $f=12\text{\hspace{0.17em}}\text{\hspace{0.17em}}\mathrm{mm}$ design example above. At the top of the figure is a 13-element “fisheye” lens taken directly from the “Zebase” catalog of standard lens designs (F_004), scaled from 31 mm to 12 mm focal length. The smaller lens at the center of Fig. 14 is based on the prescription for a 9-element wide-field lens specifically designed to minimize volume [38]. This design was intended for a shorter focal length, and the aberrations are not well corrected when scaled to 12 mm. When this design was reoptimized, even allowing for aspheric surfaces, the components tended toward the same overall physical shape and volume as the fisheye lens above.
Both of these lenses are substantially larger than the monocentric lens design, even including the volume of the fiber bundles, and they collect less light, even accounting for coupling and transmission losses in the fiber bundles.
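To make the compactness comparison concrete, a rough geometric sketch (illustrative arithmetic, using the monocentric property that the image lies on a sphere of radius approximately equal to the focal length) gives the extent of the spherical image surface and the number of fiber-bundle samples across the full field:

```python
import math

def image_arc_mm(f_mm: float, full_fov_deg: float) -> float:
    """Arc length of the spherical image surface (radius ~ f for a
    monocentric lens) subtended by the full field of view."""
    return f_mm * math.radians(full_fov_deg)

def samples_across_field(f_mm: float, full_fov_deg: float,
                         pitch_um: float) -> int:
    """Fiber-bundle samples across the full field at the given core pitch."""
    return int(image_arc_mm(f_mm, full_fov_deg) / (pitch_um * 1e-3))

print(round(image_arc_mm(12.0, 160.0), 1))     # 33.5 (mm of image arc)
print(samples_across_field(12.0, 160.0, 2.5))  # 13404 samples across 160 deg
```

At this sampling density, a two-dimensional 160° field corresponds to well over 100 megapixels, consistent with the sensor resolution and bandwidth discussion in the introduction.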

## 4. Conclusion

This paper describes a general perspective on monocentric-lens-based imagers, including discussions of stray light control, focus, and a method for conducting a global search for optimal design solutions for two-glass symmetric monocentric lenses. We showed that monocentric lenses can be focused in the conventional way, by axially translating the lens relative to a fixed-radius spherical image surface, indicating the feasibility of panoramic imagers using fiber bundle image transfer. We verified our design approach by comparing its results for a 70 mm focal length lens and concluded that it is an effective way to identify top candidate solutions for specific applications. Practical constraints will determine the final selection. For “virtual iris” stray light filtering, a high-index central glass ball solution can be used. With a conventional aperture stop, where the central ball lens consists of two hemispherical elements with a physical stop at the center, reflections from the internal surface can limit the extreme field angle, favoring solutions with a lower-index center lens.

Looking further at a specific design example, we identified high- and low-index center glass solutions for $F/1.7$, 12 mm focal length monocentric lenses with (at least) a 120° field of view and showed that this lens compares favorably to conventional fisheye and projection lens solutions for panoramic imaging. It is reasonable to ask whether the specific designs used for the comparison were optimal, and in fact there is clearly room for improvement in these specific designs. However, the standard lens categorization shown in Fig. 15 indicates that monocentric lenses can enter a domain of light collection and field of view that is not otherwise addressable [39].

We conclude that monocentric lenses offer significant potential for panoramic imaging and deserve further investigation. Future topics for research and development include the systematic design of more complex monocentric lens structures, image processing techniques, and practical technologies for transferring the image from the spherical image surface to planar image sensors.

This research was supported by the DARPA SCENICC program under contract W911NF-11-C-0210 and by the DARPA AWARE program under contract HR0011-10C-0073.

## References

**1. **W. Smith, *Modern Lens Design*, 2nd ed. (McGraw Hill, 2005).

**2. **T. Yamashita, R. Funatsu, T. Yanagi, K. Mitani, Y. Nojiri, and T. Yoshida, “A camera system using three 33-megapixel CMOS image sensors for UHDTV2,” SMPTE J. **120**, 24–31 (2011). [CrossRef]

**3. **R. Kingslake, *A History of the Photographic Lens* (Academic, 1989), pp. 49–67.

**4. **G. Krishnan and S. K. Nayar, “Towards a true spherical camera,” Proc. SPIE **7240**, 724002 (2009). [CrossRef]

**5. **J. E. Ford and E. Tremblay, “Extreme form factor imagers,” in *Imaging Systems*, OSA Technical Digest (CD) (2010), paper IMC2.

**6. **D. J. Brady and N. Hagen, “Multiscale lens design,” Opt. Express **17**, 10659–10674 (2009). [CrossRef]

**7. **O. Cossairt, D. Miau, and S. K. Nayar, “Gigapixel computational imaging,” in *IEEE International Conference on Computational Photography* (IEEE, 2011), pp. 1–8.

**8. **H. Son, D. L. Marks, E. J. Tremblay, J. Ford, J. Hahn, R. Stack, A. Johnson, P. McLaughlin, J. Shaw, J. Kim, and D. J. Brady, “A Multiscale, wide field, gigapixel camera,” in *Imaging Systems Applications*, OSA Technical Digest (2011), paper JTuE2.

**9. **D. J. Brady, M. E. Gehm, R. A. Stack, D. L. Marks, D. S. Kittle, D. R. Golish, E. M. Vera, and S. D. Feller, “Multiscale gigapixel photography,” Nature **486**, 386–389 (2012). [CrossRef]

**10. **E. J. Tremblay, D. L. Marks, D. J. Brady, and J. E. Ford, “Design and scaling of monocentric multiscale imagers,” Appl. Opt. **51**, 4691–4702 (2012). [CrossRef]

**11. **P. Milojkovic and J. Mait, “Space-bandwidth scaling for wide field-of-view imaging,” Appl. Opt. **51**, A36–A47 (2012). [CrossRef]

**12. **J. A. Waidelich Jr., “Spherical lens imaging device,” U.S. Patent 3,166,623 (19 January 1965).

**13. **J. J. Hancock, “The design, fabrication, and calibration of a fiber filter spectrometer,” Ph.D. thesis (University of Arizona, 2012); see also product datasheets posted on the Schott Fiber Optics website, “Schott Fiber Optic Faceplates” (faceplates_us_march_2011.pdf) and “Schott Fused Imaging Fiber Tapers” (Tapers-US-October_2011.pdf).

**14. **R. Drougard, “Optical transfer properties of fiber bundles,” J. Opt. Soc. Am. **54**, 907–914 (1964). [CrossRef]

**15. **J.-H. Han, J. Lee, and J. U. Kang, “Pixelation effect removal from fiber bundle probe based optical coherence tomography imaging,” Opt. Express **18**, 7427–7439 (2010). [CrossRef]

**16. **Y. F. Li and J. W. Y. Lit, “Transmission properties of a multimode optical-fiber taper,” J. Opt. Soc. Am. A **2**, 462–468 (1985). [CrossRef]

**17. **J. M. Cobb, D. Kessler, and J. Agostinelli, “Optical design of a monocentric autostereoscopic immersive display,” Proc. SPIE **4832**, 80–90 (2002). [CrossRef]

**18. **J. M. Cobb, D. Kessler, and J. E. Roddy, “Autostereoscopic optical apparatus,” U.S. Patent 6,871,956 (29 March 2005).

**19. **Hoya glass catalog, http://www.hoya-opticalworld.com/english/ (20 June 2012).

**20. **D. Marks, E. Tremblay, J. Ford, and D. Brady, “Multicamera aperture scale in monocentric gigapixel cameras,” Appl. Opt. **50**, 5824–5833 (2011). [CrossRef]

**21. **M. Born and E. Wolf, *Principles of Optics*, 7th expanded ed. (Cambridge University, 1999).

**22. **V. Churilovskiy, *The Theory of Chromatism and Third Order Aberrations* (Mashinostroenie, 1968).

**23. **J. Sasian, “Theory of sixth-order wave aberrations,” Appl. Opt. **49**, D69–D95 (2010). [CrossRef]

**24. **R. Kingslake and R. B. Johnson, *Lens Design Fundamentals*, 2nd ed. (SPIE, 2010).

**25. **G. G. Slyusarev, *Aberrations and Optical Design Theory*, 2nd ed. (Adam Hilger, 1984).

**26. **J. L. Rayces and M. Rosete-Aguilar, “Selection of glasses for achromatic doublets with reduced secondary spectrum. I. Tolerances conditions for secondary spectrum, spherochromatism, and fifth-order spherical aberrations,” Appl. Opt. **40**, 5663–5676 (2001). [CrossRef]

**27. **I. Gardner, “Application of algebraic aberration equations to optical design,” Scientific Papers of the Bureau of Standards No. 550 (1927).

**28. **H. A. Buchdahl, *Optical Aberrations Coefficients* (Dover, 1968).

**29. **Schott glass catalog, http://www.us.schott.com/advanced_optics/english/download/schott_optical_glass_catalogue_excel_june_2012.xls.

**30. **Ohara glass catalog, http://www.oharacorp.com/xls/glass-data-2012.xls.

**31. **Sumita glass catalog, http://www.sumita-opt.co.jp/ja/goods/data/glassdata.xls.

**32. **Hoya glass catalog, http://www.hoyaoptics.com/pdf/MasterOpticalGlass.xls.

**33. **M. M. Rusinov, *Handbook of Computational Optics* (Mashinostroenie, 1984), Chap. 23.

**34. **I. Agurok, “Method of ‘truss’ approximation in wavefront testing,” Proc. SPIE **3782**, 337–348 (1999). [CrossRef]

**35. **http://psilab.ucsd.edu/VirtualStopFamily1.zip.

**36. **http://psilab.ucsd.edu/PhysicalApertureStopFamily3.zip.

**37. **J. Kulmer and M. Bauer, “Fish-eye lens designs and their relative performance,” Proc. SPIE **4093**, 360–369 (2000). [CrossRef]

**38. **M. Horimoto, “Fish eye lens system,” U.S. Patent 4,412,726 (1 November 1983).

**39. **H. Gross, F. Blechinger, and B. Achtner, *Survey of Optical Instruments*, Vol. 4 of Handbook of Optical Systems (Wiley, 2008).