Monocentric lenses provide high-resolution wide field of view imaging onto a hemispherical image surface, which can be coupled to conventional focal planes using fiber-bundle image transfer. We present the design and characterization of a two-glass concentric F/1.0 lens, describe the integration of 5 Mpixel 1.75µm pitch back-side illuminated color CMOS sensors with 2.5µm pitch fiber bundles, and show that the fiber-coupled lens compares favorably in both resolution and light collection to a 10x larger conventional F/4 wide-angle photographic lens. We then describe the assembly of the monocentric lens, 6 adjacent sensors, and focus optomechanics into an extremely compact 30Mpixel panoramic imager with a 126° “letterbox” format field of view.
© 2014 Optical Society of America
Monocentric lenses consist entirely of hemispherical optical surfaces that share a common center of curvature. Because the single point of symmetry eliminates most aberrations, monocentric lenses can generate a high-resolution wide-angle image on a spherical image surface. Monocentric lenses were among the first lenses used for panoramic imaging, and nearly a century later the approach was revived for high-resolution aerial imaging. However, both of these imagers were limited in utility by their use of curved film negatives, which are difficult to fabricate and process. Another approach to sensing the monocentric image surface is to transfer it through a dense array of optical fibers, which can be polished with a curved input face and a planar output face, so that the image can be sensed by focal planes fabricated with conventional wafer processing. This approach was successfully used at Lawrence Livermore to make an F/1.7, 17.5mm focal length monocentric imager using a fiber-coupled CCD focal plane to sense a 60° field of view [4,5]. A larger imager used an F/2.8, 250mm focal length monocentric objective with an array of 23 intensified CCD sensors, where each 384x576 sensor captured a 7.5x11.5° segment to cover 75% of the overall 60° field of view. Both of these imagers used fiber bundles that highly oversampled their 23µm pixel monochrome CCD sensors, eliminating the potential for Moiré sampling effects. However, none of the previous imagers has exploited the primary advantage of monocentric lenses: the combination of high spatial resolution with a wider field of view than conventional flat focal plane objectives can provide.
The goal of the current work is to demonstrate that a panoramic fiber-coupled monocentric lens imager can be implemented with current CMOS focal planes, whose pixel pitch is typically less than the minimum 2.5µm pitch supported by commercial fiber bundle suppliers, and to compare the resulting imager resolution and physical volume to a conventional wide-field imager.
A single straight fiber bundle can couple the spherical image to a single focal plane, as shown in Fig. 1(a) and as implemented in . However, Fresnel refraction at the angled fiber input face limits input fiber coupling and strongly affects the light emission from the fiber output aperture, which limits the achievable field of view. Even for fiber bundles with N.A. = 1, the maximum full field of view at 50% of peak efficiency is approximately 55°. Uniformly efficient coupling across a wide field of view can be accomplished using a 3-dimensional waveguide as shown in Fig. 1(b), where multi-mode fibers are curved to point each input aperture towards the center of the lens and adiabatically couple the propagating modes to the planar output face. This structure is the topic of ongoing research in fiber modeling and system fabrication. A more straightforward approach to wide-field imaging, however, is to use multiple straight fiber bundles to divide the field (see Fig. 1(c)) between multiple sensors for image acquisition. If the bundles are segmented so that each covers a ±15° field, the signal coupling remains within approximately 97% of its peak value, and the overall imager field of view is limited only by the conventional cosine projection losses at the objective lens internal aperture stop. The remaining challenges are the physical integration of the optical fiber bundles to minimize information loss at the seams, and the interface to the focal plane image sensors.
In this paper we show the design, construction and operation of a 12mm focal length wide-aperture fiber-coupled monocentric imager that acquires a 30Mpixel panoramic image with a 126° x 16° field of view and compare its performance to a conventional wide angle DSLR camera. Section 2 describes the design and stand-alone characterization of the prototype monocentric objective lens, and describes the integration of 1.75µm pitch back-side illuminated CMOS focal planes with shaped fiber bundles. Section 3 contains characterization of the fiber-coupled image transfer, comparing resolution to a full size “benchmark” wide-angle lens imaging onto the same CMOS sensors. Section 4 describes the integration of 6 fiber-coupled sensors with the focus optomechanics and the resulting panoramic images, comparing performance to a conventional DSLR imager. Section 5 contains our conclusions.
2. Fiber-coupled imager components
2.1 Monocentric lens design and fabrication
We have investigated methods for identifying optimal two-glass symmetric (2GS) monocentric objective lenses as well as more complex monocentric structures that reduce higher-order aberrations. The optimal solution for a 12mm F/1.7 470-650nm lens is an S-LAH79 glass shell (nd = 2.003) and a spherical glass core of K-LaSFn9 (or equivalently, S-LAH59 or TAF5, nd = 1.816). This lens requires a fixed aperture stop at the center of symmetry. An aperture inside a ball lens may be introduced by directly cutting the aperture into a spherical glass element, but this fabrication technique was not available from our preferred lens fabricator. The aperture can also be introduced by cementing two hemispherical elements, one of which has been fabricated with an aperture stop. Light will be incident at large angles on the interface between the two high-index hemispheres, so the index of the adhesive must be sufficiently high to limit total internal reflection.
Using the highest-index Norland UV-cure epoxy (NOA164, index 1.64) to bond the K-LaSFn9 glass of index 1.81 introduces total internal reflection at a 65° ray incidence angle at the ball center, which limits the achievable field of view of an F/1.7 lens to ±55°. We therefore preferred a lower-index core glass. A suitable replacement from the global list of ranked candidates used an S-LAL13 core glass (n = 1.69) [8,9]. We added a fused silica meniscus at the focal surface for fiber bundle mounting and re-optimized the solution (Table 1); Norland 61 optical cement (n = 1.56) was used. The fused silica meniscus does not significantly impact the lens optical performance, but it facilitates the alignment and mounting of multiple fiber bundles to the shared spherical image surface. The layout of the optimized lens and the modulation transfer function (MTF) performance curves obtained with Zemax lens design software are shown in Fig. 2. Three monocentric lens prototypes were fabricated. The design tolerances were defined using the tightest of three standard options offered by the lens fabricator, Optimax Systems: surface radii accurate to 0.1% (2 fringes at the surfaces used), 20µm for lens element decentration, and 25µm for center element thickness. Of these, the center thickness tolerance was the most sensitive parameter. One lens, fabricated with the maximum practical F/1.0 aperture, was intended for demonstrations of high light collection imaging, including the first prototype imager. The two remaining lenses were fabricated with an aperture stop at F/1.35, wider than the highest-resolution F/1.7 aperture. Figure 2 shows the F/1.7 design MTF and photographs of the fabricated F/1.35 lens, next to a penny to illustrate the size. The bottom row shows the impact on resolution as the aperture grows to increase light collection.
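The critical angle driving this design choice follows directly from Snell's law. The sketch below reproduces the 65° figure quoted in the text for the NOA164/K-LaSFn9 pair; the corresponding value for the NOA61/S-LAL13 pair actually chosen is our own computation, shown for comparison.

```python
import math

def critical_angle_deg(n_low, n_high):
    """Incidence angle (degrees) beyond which total internal reflection
    occurs going from a high-index medium into a lower-index one."""
    return math.degrees(math.asin(n_low / n_high))

# NOA164 adhesive (n = 1.64) against the K-LaSFn9 core (n = 1.81), as in the text:
theta_noa164 = critical_angle_deg(1.64, 1.81)
# NOA61 cement (n = 1.56) against the lower-index S-LAL13 core (n = 1.69):
theta_noa61 = critical_angle_deg(1.56, 1.69)

print(f"NOA164 / K-LaSFn9: {theta_noa164:.1f} deg")  # ~65 deg, matching the text
print(f"NOA61 / S-LAL13:   {theta_noa61:.1f} deg")
```

Lowering the core index buys a slightly larger critical angle even with the lower-index NOA61 cement, relaxing the TIR constraint at the cemented interface.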
As discussed in the Zemax 13 manual (August 2014, p. 149), the FFT MTF calculation is based on scalar diffraction theory, and “The vectorial nature of the light is not accounted for. This is significant in systems that are very fast, around F/1.5 (in air) or faster.” Accordingly, the MTF curves shown in Fig. 2 were calculated using the Zemax FFT MTF function for the F/1.7 lens and the Huygens MTF function for the F/1.35 and F/1.0 cases.
2.2 Characterization of monocentric lens image formation and focus
Monocentric lenses designed for optimal performance at an infinite object distance can be focused like any other lens by axial translation, where the object field is a flat surface perpendicular to the optical axis. To confirm this experimentally, the monocentric lens was mounted on a 5-axis translation stage with one axis aligned to the optical system axis for focus adjustment. A custom-built boresight optical aligner was used to center the ball lens and the meniscus. The curved image plane was sensed with three identical relay imaging systems, which allowed us to simultaneously record, with high spatial resolution, the center and the extreme ±60° field positions.
The relay imaging systems used Mitutoyo M Plan Apo 20X long working distance apochromatic objectives with 0.42 NA, sufficient to test the monocentric lenses down to an F/1.2 aperture. Standard Infinitube™ 200mm lens tubes were attached to C-mount Thorlabs DCC3240N NIR cameras (1280x1024, 5.3μm pixel pitch, 0.265μm equivalent after 20X magnification) to record the monochrome image. The resolution of the relay imaging system was under 1.7µm, compared to the 2.6µm RMS spot diameter of the monocentric objective, so the overall system MTF including relay imaging is reasonably indicative of the MTF of the monocentric objective lens alone. Three chrome-on-glass USAF targets, placed 0.5m and 1.0m away at three different field angles, were used as test objects (see Fig. 3). The microscope objectives in the image relay arms were kept at fixed focus on the meniscus surface, while the ball lens was moved along the optical axis to provide the monocentric lens focusing function.
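The adequacy of the 0.42 NA relay objective can be checked with two textbook relations: the paraxial equivalence F/# ≈ 1/(2·NA), and the Rayleigh resolution limit 0.61λ/NA. A minimal sketch, assuming a mid-visible 0.55µm wavelength (our assumption, not stated in the text):

```python
na = 0.42             # Mitutoyo M Plan Apo 20X numerical aperture
wavelength_um = 0.55  # assumed mid-visible wavelength for this estimate

f_number = 1.0 / (2.0 * na)              # paraxial F/# equivalent of the NA
rayleigh_um = 0.61 * wavelength_um / na  # Rayleigh resolution limit

print(f"F/# equivalent: {f_number:.2f}")       # ~1.19, fast enough for F/1.2 testing
print(f"Rayleigh limit: {rayleigh_um:.2f} um") # ~0.80 um, well under the quoted 1.7 um
```

Both numbers are consistent with the text: the 0.42 NA objective accepts an F/1.2 cone, and its diffraction limit leaves margin below the measured 1.7µm relay resolution.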
Images sensed by the enhanced NIR sensor in the DCC3240N Thorlabs cameras were acquired as raw 8-bit monochrome bitmaps. The white background behind the targets was illuminated by a visible 4200 K spectrum LED light source. Lens MTF curves were calculated using Imatest “Master” software. We used MTF50, the spatial frequency at which the transferred MTF falls to 50% contrast, as the resolution metric. The lens showed an on-axis MTF50 performance of approximately 180-190 lp/mm.
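To make the MTF50 metric concrete, the following is a minimal sketch of extracting it from a sampled MTF curve by linear interpolation. This is only an illustration of the metric's definition, not the Imatest slanted-edge pipeline, and the Gaussian-like test curve is synthetic.

```python
import math

def mtf50(freqs_lpmm, mtf):
    """Return the spatial frequency (lp/mm) where the MTF first falls to 0.5,
    by linear interpolation between bracketing samples; None if it never does."""
    for i in range(1, len(mtf)):
        if mtf[i - 1] >= 0.5 > mtf[i]:
            frac = (mtf[i - 1] - 0.5) / (mtf[i - 1] - mtf[i])
            return freqs_lpmm[i - 1] + frac * (freqs_lpmm[i] - freqs_lpmm[i - 1])
    return None

# Synthetic curve constructed to cross 50% contrast at 185 lp/mm (illustrative only):
freqs = list(range(0, 401, 10))
curve = [math.exp(-math.log(2) * (f / 185.0) ** 2) for f in freqs]
print(f"MTF50 = {mtf50(freqs, curve):.0f} lp/mm")  # ~185 lp/mm
```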
The reduction in resolution from the expected design performance was a result of the fabricated lens having a wider aperture (F/1.35 instead of F/1.7). The MTF curves for the 1m object distance in Fig. 3 match the Zemax-generated curves in Fig. 2 well, except for the tangential performance at ±60°, where the experiment shows a better result. Focusing of the lens was also confirmed for effectively infinite object conjugates by taking the assembly outdoors for testing at ranges beyond 30m. While monocentric lenses are unusual in some respects, as with planar focal surface lenses the image depth of field is governed by the operating F/# (or numerical aperture) and the field incidence angle (i.e., the chief ray) at the image surface. The depth of field of the F/1.35 monocentric lens was similar to that of a more conventional F/1.4 lens with a planar focal surface. The monocentric lens is not telecentric, so a defocused off-axis blur function (the bokeh) is laterally shifted. Operating this lens outdoors in sunlight required an external UV-IR blocking filter, as the lens was optimized for the visible spectrum. The measured ball lens travel required to focus on the wide flat object from infinity to 0.5m was 240μm, which set the minimum travel range for the prototype’s optomechanical focusing mechanism.
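The measured 240µm travel is consistent in order of magnitude with a simple thin-lens estimate of the image shift when refocusing from infinity to distance d, Δ = f²/(d − f). The sketch below is our own rough check, not the authors' analysis; the off-axis estimate assumes a ±30° flat-field geometry.

```python
import math

f_mm = 12.0   # monocentric lens focal length
d_mm = 500.0  # closest object distance

# Thin-lens image shift when refocusing from infinity to distance d:
delta_um = f_mm ** 2 / (d_mm - f_mm) * 1000.0
print(f"on-axis estimate: {delta_um:.0f} um")  # ~295 um vs. the measured 240 um

# Off-axis points of a flat object lie farther away along the chief ray
# (d / cos(theta)), so a compromise focus over the field needs less travel:
d_edge_mm = d_mm / math.cos(math.radians(30.0))
delta_edge_um = f_mm ** 2 / (d_edge_mm - f_mm) * 1000.0
print(f"edge-of-field estimate: {delta_edge_um:.0f} um")  # ~255 um
```

The measured travel falling between the naive on-axis value and the edge-of-field value is consistent with the lens being focused as a compromise across the wide flat object.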
2.3 Fiber bundle and CMOS image sensor integration
The monocentric straight fiber bundle configuration (see Fig. 1(a)) is limited to a ±30° field of view [5,7]. To retain uniform illumination, we chose to use a row of six fiber bundles, each covering a 21° horizontal by 16° vertical field (26.2° diagonal), for an overall 126° by 16° (126.1° diagonal) field of view. The vertical field of view was constrained by the 4:3 aspect ratio of the selected image sensor, an Omnivision OV5653 5Mpixel back-side illuminated (BSI) CMOS image sensor with 1.75µm pitch pixels covered by microlenses and a Bayer RGB color filter. Each sensor is interfaced through a rigid-flex printed circuit board (PCB) that provides a USB 2.0 interface to a host computer for either streaming video or full-resolution capture. The OV5653 image sensor was intended for high production volume compact imagers such as smartphone cameras, and so was available only in a chip-scale surface-mount ball grid array package. The package has a 400μm thick glass cover attached directly to the silicon chip by a thin patterned epoxy grid. This cover had to be removed to allow direct contact between the fiber bundle and image sensor. To remove the cover glass, we placed the PCB-mounted sensors into a temporary fixture for accurate leveling, then used a computer-controlled dicing saw to cut through the 400µm thick glass layer and stop within the 25µm adhesive (see Fig. 4(a)). After the cover glass was removed, it was essential to completely clean the surface of any residual debris, especially glass fragments, which might otherwise introduce a gap between the fiber bundle and image sensor.
The fiber bundles were made of Schott 24AS fiber optic material with a 2.5μm pitch between 5-sided optical fibers arranged into a 4-fiber unit cell with a diamond-shaped absorptive center. The fiber bundle input faces were ground and polished to match the 12.043mm radius of curvature of the mounting meniscus, with an additional 20µm allocated for adhesive thickness. The output faces were polished to form a flat pedestal (see Fig. 4(c)) that fits into the recess formed after the cover glass was removed from the OV5653 sensor. The pedestal was cut to cover slightly more than the sensor’s 4.592 x 3.423 mm active area while avoiding mechanical interference around it. Assembling six adjacent fiber bundles can provide seamless image stitching across the full field, but only if the fiber bundles are precisely shaped to align the input image with the sensor pixels and have a sharp edge transition between the spherical input face and the angled sidewalls.
The first set of shaped bundles was made with a lateral misalignment of the input image area, such that for several of the bundles a vertical section of image ranging from 9 to 85 pixels in width was not sensed. This fabrication error was corrected in a second-generation prototype that also incorporates different optics and focus actuation; that prototype is currently being characterized and will be reported separately.
The flat polished pedestal side of each fiber bundle was bonded in contact with a clean decapped sensor using Norland NOA72 UV-curable optical adhesive, and the pedestal was reinforced with an additional layer of the same flexible epoxy. The impulse response of each sensor was characterized at the center and edges to confirm that the assembly did not incorporate any particles that would lead to a nonuniform response. Cross-sectioning and optical and electron microscope images of one of the working fiber-coupled sensors showed the epoxy thickness to be under 2μm across the sensor aperture (see Fig. 5).
The CMOS sensor microlens and color filter add an additional 3µm of thickness between the fiber bundle and the active sensor surface. All six fiber-coupled sensors were found capable of transmitting an incident focused signal through a single fiber core and coupling the signal to a 3x3 sensor pixel area, consistent with the relative areas of the fiber core and image sensor pixel and the total 5µm thick volume of isotropic index media between the sensor and fiber bundle.
The impulse response of the fiber-coupled sensor is locally space-variant due to the fiber core structure, and the sensed image is impacted by Moiré effects from sampling of the quasi-periodic fiber bundle by the CMOS sensor pixels.
3. Narrow field (single sensor) fiber-coupled image transfer
3.1. Fiber-coupled image transfer performance
To observe the effect of each step of the fiber-coupled image sensing, we bench-mounted the F/1.35 two-glass monocentric lens so that a central region of the spherical image could be recorded by a Keyence VHX1000 digital microscope, and then the same scene was recorded with a fiber-coupled CMOS sensor. We measured the impulse response using a white light LED source, and found that the impulse response of the lens alone matched the PSF spot size predicted by Zemax OpticStudio (first dark ring diameter = 2.6μm), as shown in Fig. 6(a). With the fiber-coupled sensor in oil contact with the monocentric lens, we observe local space variance of the impulse response due to the varying position of the quasi-periodic 2.5µm pitch fiber cores relative to the 1.75µm pitch RGB color-filtered sensor pixels. A detailed characterization of the space-variant impulse response of the fiber-coupled image sensor has been reported separately. As we move the white spot around the field of view, it successively illuminates different areas of the fiber bundle surface (individual cores or the cladding between them), with the light always contained within a 3x3 to 4x4 pixel area (see Fig. 6(b)). Note that the brightness of the sensed image in Fig. 6(b) has been increased to make visible the relatively low signal in the adjacent pixels.
Next, we looked at the effect of fiber transfer on a color image. The 2GS monocentric lens itself achieves an on-axis MTF50 performance close to 200 lp/mm (see Fig. 3). To understand how much of this raw image information can be preserved through fiber transfer and image sensing, we placed the monocentric lens looking down into a 90° fold mirror and set up a scene at 1m range including a star pattern chart for slant-edge MTF testing.
Figure 7 shows the physical arrangement and image data acquired for three cases:
- a) Microscope relay image of a region of the spherical image formed on the rear surface of the meniscus lens of the bare monocentric lens objective.
- b) Microscope relay image of the planar rear surface of a fiber bundle with a spherically polished front surface, index-matched to the meniscus lens.
- c) 5 Mpixel digitally sampled image from a fiber-coupled CMOS sensor.
In the top row of Fig. 7, the numerical aperture of the microscope objective determines the object depth of field, and therefore the fraction of the monocentric lens field of view visible in the microscope image. The MTF calculation at maximum magnification confirms the previous results, achieving 170 lp/mm MTF50 performance. In the center row, the polished 2.5μm pitch fiber bundle's curved input face transfers the image to its planar exit surface. Maximum magnification reveals the fiber bundle structure, but the measured slant-edge MTF is still excellent, with an MTF50 resolution of 110 lp/mm. The transferred image shows local defects of the fiber structure, which can be mitigated using digital image processing techniques for sensor calibration. The bottom row shows the performance of the fiber-coupled imaging system: the microscope relay optics of the second case are replaced by a 1.75μm pixel 5Mpixel Omnivision sensor UV-cured to the bundle. Upon acquisition, the image was auto white-balanced in Photoshop. The image demonstrates good overall color transfer, and the Moiré artifacts of the image sensing are not apparent except at high magnification. The MTF50 resolution of this raw image is 70 lp/mm. While this is low compared to the bare monocentric lens, it still compares favorably to a conventional wide-angle lens, as shown in the following section.
3.2. Monocentric fiber-coupled versus a conventional wide-angle lens image forming
The next step was to test the off-axis monocentric fiber-coupled imaging performance and to compare the image quality with a conventional off-the-shelf wide-angle lens. A fair comparison between the monocentric fiber-coupled lens and a conventional lens requires three metrics: field of view, F-number, and resolution. Since equivalent lenses with a 120° field of view and an F-number as low as F/1.35 do not exist, we chose as our benchmark a high-quality retro-telephoto lens, the Canon EF 8-15mm F/4L “Fisheye” USM zoom lens, intended for use with digital single lens reflex (DSLR) cameras. This lens was constrained by the DSLR requirement for back focal length, and it also provided “zoom” adjustment of focal length, whereas the monocentric lens is a fixed focal length (prime) lens. A search for high-resolution, low F/# prime lenses with the required 12mm focal length yielded as the best candidate a 12mm F/1.6 SLR Magic Hyper Prime Cine lens built for Micro Four Thirds cameras. This lens was smaller, but provided only an 84° diagonal field of view, and the two sample lenses we characterized yielded substantially lower resolution at all field angles than the Canon EF lens.
To match the monocentric lens, we set the Canon lens focal length to 12mm and its minimum F-number of F/4. The same Omnivision OV5653 1.75μm sensor was used to acquire the images in both systems. Since the single 1/3.2” format Omnivision sensor cannot cover the full-frame Canon lens field of view, we focused the lens at the center of the field of view and used a lateral translation stage to move the sensor and sample the performance at different field positions. Similarly, we focused the monocentric lens at the center of the field of view and repositioned a single fiber-coupled OV5653 sensor on the monocentric meniscus to vary the lateral field angle. We set up the flat test scene 1m away (the 60° object was at 2m) and acquired image data at 0° (on-axis) and 60° off-axis.
The exposure time for the Canon F/4 lens was 4x (−60° off-axis) to 6x (0° on-axis) longer than for the F/1.35 fiber-coupled lens, which is consistent with the difference in aperture combined with a modest additional loss from absorption in the fiber bundle. The image resolution at 0° was similar for both imagers, with an MTF50 between 70 and 80 lp/mm, as expected from our measurements (see Fig. 7) and Canon’s specifications for the EF lens series. However, the fiber-coupled monocentric lens showed considerably better performance at the edge of the field. Figure 8 shows a photo of the setup and photos of the scene at −60° taken through both lenses. The photo taken with the fiber-coupled lens is raw (not digitally processed with calibration data), so individual fiber core imperfections are clearly visible, especially on the white portions of the scene. The Canon benchmark lens reaches an MTF50 of 47 lp/mm, and the image suffers from strong chromatic aberration and distortion. The fiber-coupled monocentric lens achieves an MTF50 of 74 lp/mm in an image free of distortion and lateral color. More significantly, it achieves this level of performance at 3 aperture stops faster than the Canon (F/1.35 vs. F/4) and within a dramatically smaller volume.
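The exposure-time comparison above can be sanity-checked: for equal scenes, exposure scales as the square of the F-number, and dividing the measured ratio by that prediction gives a rough bound on the fiber path throughput. The throughput figures below are our own inference from the stated numbers, not measured values.

```python
# Aperture-only prediction for the Canon F/4 vs. monocentric F/1.35 exposure ratio:
aperture_ratio = (4.0 / 1.35) ** 2
print(f"aperture-only exposure ratio: {aperture_ratio:.1f}x")  # ~8.8x

# The measured ratio was only 4x-6x; the shortfall implies a rough transmission
# of the fiber bundle + coupling path (an inference, not a measured quantity):
for measured in (4.0, 6.0):
    throughput = measured / aperture_ratio
    print(f"measured {measured:.0f}x -> implied fiber path throughput ~{throughput:.0%}")
```

The implied throughput of roughly one half to two thirds is consistent with the text's description of a "modest" absorption loss in the fiber bundle.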
4. Wide field (multi-sensor) fiber-coupled monocentric imaging
4.1. Fiber-coupled monocentric lens prototype
The next step was to assemble the 6 fiber-coupled sensors and the lens into a self-contained 126° x 16° field of view “letterbox” imager prototype. We chose the F/1.0 2GS monocentric lens for maximum light collection, and designed and fabricated the compact optomechanical mount shown in Fig. 9. The meniscus and 6 adjacent sensors are rigidly attached to an outer frame, while the central ball lens is mounted on a flexure that allows axial translation for focus. The sensors connect through small controller boards with a USB 2.0 connection to a single host computer, which acquired and displayed simultaneous images at 3 fps at full resolution, including basic image processing for sensor calibration (described in ). The same USB connection allows remote focus actuation via a stepper motor; however, the stepper motor was not used in this first system integration, and focusing was performed manually. The test interface was MDOSim software, developed by Distant Focus for Windows and Linux platforms. An exploded solid CAD model of the assembly is shown in Fig. 9(a). The ball lens is placed in a flexure mount and centered with the meniscus. The assembly allowed a 300µm travel range to focus the lens from 0.5m to infinity. All 6 sensors were assembled with UV-cured Norland NOA74 optical adhesive between the fiber bundles and meniscus lens, then permanently fixed in position. The optomechanical mount and the motion system will be described in more detail separately.
4.2. Fiber-coupled monocentric imager vs. wide angle DSLR camera
For a direct imager system comparison, we combined the Canon EF lens with the full frame Canon DSLR EOS 5D Mark II camera and mounted it side-by-side to our Letterbox fiber-coupled prototype, as shown in Fig. 10.
To characterize the wide-angle imagers, we printed a 6.10m by 2.40m poster with repeating resolution and image quality test patterns and mounted it on the wall of the lab, such that both cameras, placed 1.6m from the wall, would image the entire poster within a 126° horizontal field of view. The poster was illuminated uniformly with 4 evenly-spaced white LED studio lights. The monocentric fiber-coupled prototype was connected to a desktop computer supporting 6 USB hosts. This enabled parallel capture and storage of full-resolution image data, as well as streaming real-time 30Mpixel color imagery to six 30” high definition WQXGA monitors (see Fig. 10), each displaying the output of an individual sensor. The USB 2.0 standard limited the frame rate to 2-3 frames per second at full 5Mpixel resolution.
Each monitor's resolution was 4Mpixel, so the real-time display was limited to 24Mpixel overall. The initial raw snapshots of the full field captured by the prototype reveal variable-width vertical bars in the sensed image field, as well as a small vertical mismatch between adjacent image sectors (see Fig. 11). The gaps in the image originate from misplaced pedestal positions on the back of the bundles, as well as any physical gap between the side facets of the shaped bundles. Imperfections in the surface of the fiber bundle and Moiré effects are also visible, as is a global cosine-squared relative illumination falloff from the projection of the central aperture. Note that the monocentric image structure orients the light collection towards the sensed image, and so has substantially better field uniformity than the conventional fourth-power cosine illumination dependence of a paraxial lens. The MTF50 performance measured at the center of the field was around 25-30 lp/mm, and at ±60° off-axis was in the range of 50-80 lp/mm. The drop in resolution relative to that expected from the F/1 objective lens alone (see Fig. 2) is due to the additional elements in the image sensing path (the fiber bundle and Bayer color-filtered CMOS sensor).
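The illumination-uniformity advantage quoted above is easy to quantify: a monocentric lens falls off roughly as cos²θ (projection of the central aperture stop), versus the classic cos⁴θ law for a conventional paraxial lens. A minimal sketch evaluating both at the ±60° field edge:

```python
import math

def falloff(theta_deg, power):
    """Relative illumination at field angle theta for a cos^power falloff law."""
    return math.cos(math.radians(theta_deg)) ** power

theta = 60.0  # edge of the prototype's horizontal field
print(f"monocentric cos^2: {falloff(theta, 2):.3f}")  # 0.250
print(f"paraxial cos^4:    {falloff(theta, 4):.4f}")  # 0.0625
```

At the ±60° field edge the monocentric geometry delivers about 4x more relative illumination than the conventional fourth-power law.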
Full-width images from both the monocentric and benchmark imagers are shown in Fig. 12, including enlargements of selected regions across the field of view. The fiber-coupled image has been digitally processed using the global processing techniques described in , including compensation for fiber Moiré and local defects as well as relative illumination, along with interpolation across the vertical gaps in the sensed image field. Like most camera manufacturers, Canon compensates for sensor nonuniformities and also applies a proprietary edge-enhancement algorithm, similar to an unsharp mask, even to RAW images. This makes the photos more visually appealing in high-contrast areas, but such edge enhancement renders MTF calculations of the Canon DSLR images inaccurate.
While both lenses have the same 12mm focal length and a similar horizontal field of view, the comparison between the 22Mpixel DSLR and the 30Mpixel letterbox imager is imperfect. The Canon camera's 22Mpixel full-resolution output is sensed by 6.4µm pixels, and 89% of the image data was discarded when the image was cropped to match the letterbox field of view of our monocentric prototype. On the other hand, the analog-to-digital conversion (ADC) in the Canon image sensor is substantially better, yielding 12-bit signal depth as opposed to the 8-bit depth of the OV5653 sensor data transmitted by the MDOSim interface.
Comparing the two images, the detail in the enlargements demonstrates the superior resolution of the monocentric fiber-coupled prototype, especially at the edge of the field, where the image quality of the DSLR image is clearly limited by lens resolution (especially lateral color) as well as sensor sampling. Colors are richer and more vivid in the Canon photo, presumably due at least in part to the 12-bit Canon ADC. A halo is visible at the center of the monocentric imager field, which reduces contrast at lower spatial frequencies. This was a consequence of the wide-open F/1.0 aperture and the resulting increase in stray light. In early versions of this prototype, where only a small portion of the image surface was covered by fiber-coupled sensors, we observed significant stray light from internal scatter of the other image fields, light which would otherwise have been absorbed by the image sensors. The lens models indicated this halo was to be expected with the F/1 aperture, but not with a reduced aperture. We confirmed this by acquiring on-axis images with no visible halo using an external fixed aperture stop, set at F/2 and F/4, centered in front of the F/1 monocentric lens.
The 2-glass monocentric lens was designed to operate over the 470-650nm spectral range, and so requires an external spectral filter for outdoor photography. The second fiber-coupled monocentric imager prototype, currently being assembled and characterized, uses an F/1.7 4-glass monocentric lens that resolves the visible through near-infrared spectrum, fiber bundles with much more accurate surface position and edge quality, and stepper motor actuation for focus.
This paper establishes the potential of fiber-coupled monocentric imagers for next-generation high-resolution panoramic imaging. Such imagers may find applications as high-performance versions of the popular GoPro-type video recorders, or as extremely compact cinematic cameras providing panoramic and/or stereo imaging for emerging immersive media. We confirmed the ability of monocentric lenses to focus wide-angle images over a range of object depths, and the feasibility of high-resolution fiber-coupled curved image transfer across the 126° field of a self-contained “letterbox” format imager. This first prototype acquires only a fraction of the full image data resolved by the monocentric objective lens, but the full image could be acquired by a straightforward extension of the structure, with multiple image sensor rows arranged into a 2-D array. Such a camera would provide an F/1.7 85Mpixel image with a 120° x 120° field of view, with the optics, optomechanics, and sensors held within a volume similar to the current prototype. The compact size is a primary advantage of this class of imager. The volume of our conventional benchmark imager is about 465 cm3, approximated by a cylinder containing the 78mm diameter lens barrel and extending to the image sensor (83mm to flange, then 44mm to focal plane). The volume of a fiber-coupled monocentric imager can be approximated by a 7.3mm hemisphere containing the convex front lens on a 47mm diameter by 27mm long cylinder containing the focus actuator, fiber bundles, and sensors covering the same full field of view.
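The volume comparison above can be made explicit from the stated dimensions. The sketch below takes the benchmark figure of 465 cm³ directly from the text and computes the monocentric envelope from its cylinder-plus-hemisphere approximation, treating 7.3mm as the hemisphere radius (our assumption; the text does not specify radius vs. diameter):

```python
import math

# Benchmark DSLR lens + body volume, as stated in the text:
benchmark_cm3 = 465.0

# Fiber-coupled monocentric imager: a 47mm diameter x 27mm long cylinder plus a
# protruding hemispherical front lens (7.3mm assumed to be the radius).
r_cyl, h_cyl, r_hemi = 4.7 / 2, 2.7, 0.73  # all dimensions in cm
mono_cm3 = math.pi * r_cyl ** 2 * h_cyl + (2.0 / 3.0) * math.pi * r_hemi ** 3

print(f"monocentric volume: {mono_cm3:.0f} cm^3")        # ~48 cm^3
print(f"volume ratio: {benchmark_cm3 / mono_cm3:.1f}x")  # ~10x smaller
```

The resulting ~10x volume ratio matches the "10x larger" comparison made in the abstract.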
Ongoing research is being directed towards optimization of the fiber bundle microstructure for enhanced spatial resolution, digital image processing techniques to further improve image appearance, and exploration of full-field imagers including curved fiber-bundle geometries.
The authors thank Brett Bryars and Optimax Systems for expertise in custom lens fabrication. This work was supported by the DARPA SCENICC program, contract W911NF-11-C-0210.
References and links
1. T. Sutton, “Panoramic photography,” Photon. J. 6, 184–188 (March 1860).
2. “Spherical camera,” Popular Mechanics Magazine 99(3), 94–95 (H.H. Windsor, March 1953).
3. J. A. Waidelich, Jr., “Spherical lens imaging device,” U.S. patent 3,166,623 (19 January 1965).
4. T. S. Axelrod, N. J. Colella, and A. G. Ledebuhr, “The wide-field-of-view camera,” in Energy and Technology Review (Lawrence Livermore National Laboratory, 1988).
5. J. F. Kordas, I. T. Lewis, B. A. Wilson, D. P. Nielsen, H. Park, R. E. Priest, R. Hills, M. J. Shannon, A. G. Ledebuhr, and L. D. Pleasance, “Star tracker stellar compass for the Clementine mission,” Proc. SPIE 2466, 70–83 (1995). [CrossRef]
6. C. Akerlof, M. Fatuzzo, B. Lee, R. Bionta, A. Ledebuhr, H. Park, S. Barthelmy, T. Cline, and N. Gehrels, “Gamma-ray optical counterpart search experiment (GROCSE),” in AIP Conf. Proc. 307, 633–637 (American Institute of Physics, 1994). [CrossRef]
7. A. Arianpour, I. Agurok, N. Motamedi, and J. Ford, “Enhanced field of view fiber-coupled image sensing,” in International Optical Design Conference 2014, OSA Technical Digest (online) (Optical Society of America, 2014), paper IM2A.4. [CrossRef]
8. I. Stamenov, I. P. Agurok, and J. E. Ford, “Optimization of two-glass monocentric lenses for compact panoramic imagers: general aberration analysis and specific designs,” Appl. Opt. 51(31), 7648–7661 (2012). [CrossRef] [PubMed]
10. J. J. Hancock, “The design, fabrication and calibration of a fiber filter spectrometer,” Ph.D. dissertation (College of Optical Sciences, The University of Arizona, 2012).
11. S. Olivas, N. Nikzad, I. Stamenov, A. Arianpour, G. Schuster, N. Motamedi, W. Mellette, R. Stack, A. Johnson, R. Morrison, I. Agurok, and J. Ford, “Fiber bundle image relay for monocentric lenses,” in Computational Optical Sensing and Imaging 2014, OSA Technical Digest (online) (Optical Society of America, 2014), paper CTh1C.5.
12. A. R. Johnson, J. Pessin, J. E. Ford, I. Stamenov, A. Arianpour, and R. A. Stack, “Optomechanical design with wide field of view fiber-coupled image systems,” to be presented at the 2014 OSA Frontiers in Optics Meeting. [CrossRef]
13. B. G. Grant, Field Guide to Radiometry, SPIE Field Guides Vol. 23 (SPIE Press, 2011)
14. I. Stamenov, S. Olivas, A. Arianpour, I. Agurok, A. Johnson, R. Stack, and J. Ford, “Broad-spectrum fiber-coupled monocentric lens imaging,” in International Optical Design Conference 2014, OSA Technical Digest (online) (Optical Society of America, 2014), paper IM3B.5.