Abstract

Conventionally, the field of view of a camera is understood as the angular extent of a convex circular or rectangular region. Parallel camera architectures with computational image stitching, however, allow implementation of a field of view with an arbitrary shape. Monocentric multiscale lenses further allow the implementation of an arbitrary field of view in camera volumes comparable to conventional single-lens systems. In contrast with conventional wide-field-of-view systems, multiscale design can also achieve nearly uniform resolution across the entire field of view. This paper presents several design studies obtaining unconventional fields of view using this approach.

© 2018 Optical Society of America

1. INTRODUCTION

Field of view (FoV) and instantaneous field of view (iFoV) are the most basic measures of camera performance. Conventionally, FoV describes the angular extent of the cone around the optical axis observed by a camera. Fisheye lenses have long been used to achieve wide-field-of-view imaging; for example, the Ricoh Theta and the Samsung Gear 360 capture 360°×180°. However, distortion and aberration in fisheye systems severely limit the iFoV. For this reason, systems that capture a wide field of view by computationally stitching images obtained using temporal scanning [1] or camera arrays [2–8] have become increasingly popular. Higher-resolution full-solid-angle imaging has also been implemented in camera arrays such as the Facebook Surround 360 [9].

Several platforms have recently been developed that generalize 360° camera design to include more diverse parallel camera architectures [10]. While a camera array can be designed to cover any FoV and iFoV, the cost of such systems increases nonlinearly as iFoV decreases. The most fundamental issue is that as iFoV decreases, the entrance aperture must increase, and lens cost increases nonlinearly with entrance aperture size. The prospective cost rises still further if the FoV per microlens decreases, as required by conventional scaling [11], since the number of microlenses required to fill a given field of view must then also increase as iFoV decreases. Multiscale designs in which a parallel array of microcameras shares a common objective lens [12], in contrast, have been shown to allow a wide FoV over a wide range of aperture scales. Monocentric multiscale (MMS) designs using a spherical objective lens and microcameras mounted on a spherical shell have been particularly effective in this regard [13–15].

While previous work has focused on multiscale design for lens systems with a conventional cone-shaped FoV, this paper describes MMS designs for wide-angle applications most commonly associated with fisheye lenses and ring-shaped camera arrays. Although we do not discuss focal accommodation in this paper, the ability to independently and locally control the focus state in each microcamera is a particular advantage of MMS systems. We refer the reader to previous work discussing focus control strategies [16] and implicitly assume that these strategies can also be implemented in the systems described here. With this caveat, we show below how the spherical geometry of MMS systems allows a variety of novel field of view alignments.

For example, security cameras are usually installed on a ceiling or pole overlooking the targeted field of view. Current systems often incorporate mechanical pan-tilt-zoom components to allow the camera to scan wide-angle fields with high resolution. Alternatively, such systems may combine a wide-angle spotting camera and a long-focal-length, narrow-field slew camera. When an event of interest is registered by the wide-angle lens, the long-focal-length lens is directed toward that event to capture high-resolution details [8]. However, this setup has at least three disadvantages. First, only one region of the full field is captured in high resolution. Second, the response time is limited by the mechanical slew speed, which may be unable to keep up with rapidly evolving events, especially when several events take place simultaneously. Third, mechanical components tend to make the entire system less reliable and raise maintenance costs.

In comparison, MMS lenses achieve a wide field and high resolution in real time. In this architecture, parallel small-aperture optics outperform the traditional single-aperture lens in information efficiency [13]. In addition, the shared objective lens leads to a more compact layout than that of multi-camera clusters. Since the image sensors are tessellated over a spherical surface, the target can be covered from any spatial angle by placing a microcamera pointed in that direction, as long as no notable inter-occlusion occurs. There are numerous ways of arranging the microcamera array and, hence, of configuring the FoV; this flexibility creates opportunities for diverse FoV configurations and camera setups across application scenarios. Beyond the arrangement flexibility of a single MMS lens, more configurations can be realized by combining multiple MMS lenses. As an example, we have previously described a design for compact wide-field imaging combining three MMS systems [17]. Here we expand this design space to include compact 360° ring cameras, multifocal-length/extended-depth-of-field systems, and full-sphere imaging systems. The designs presented here are intended as simple illustrations of the potential of such systems; in practice, systems can be designed to cover arbitrary fields of view and depths of field.

2. CONFIGURATION SPACE OF MMS CAMERAS

Conventional cameras capture images in rectangular format due to the formats of film, electronic sensors, and display devices. In most cases, the captured FoV is intended to be fully streamed for rendering, i.e., the shape of the captured FoV is determined by the rendering convention.

As technologies in optics, electronics, and computation advance, image capture and rendering ought to be decoupled to better exploit image information and to create novel functionalities. An arbitrary image format is readily obtained by synthesizing frames from multiple focal planes. As new image- and video-rendering technologies are explored and developed, diverse image navigation methods become available for new ways of rendering.

A MMS lens expands its FoV by tiling small secondary optics. Its ultra-high information capacity allows myriad FoV configurations and image resolution formats, providing excellent adaptability to different application scenarios.

A. Packing Space of MMS Cameras

For a MMS architecture, the microcameras are packed on a spherical surface. Except in special applications, the microcamera array is arranged in a densely packed fashion, in which adjacent microcamera units share sufficient FoV overlap for continuous object-field coverage through image stitching. The extent and format of the captured FoV are therefore determined by the manner in which the microcameras are packed. Although trivial on a 2D plane, close packing on a spherical surface can be much more challenging. Depending on the extent of the targeted packing region, either a local or a global packing strategy is preferable. A local packing strategy is preferred if the packing region comprises only a small fraction of the whole sphere. As shown in Fig. 1, the packing region there covers approximately a 90°×50° FoV; therefore, hexagonal close packing is employed, with the microcameras aligned on lines of latitude. This packing method produces a nearly rectangular FoV coverage, resembling a conventional image format.

Fig. 1. Hexagonal close packing for a localized FoV output.

As the latitudinal angle grows, the variation in circle separations increases rapidly. This variation can be quantified by the chord ratio, defined as

$$\text{Chord Ratio}=\frac{\text{Maximum Center Distance}-\text{Minimum Center Distance}}{\text{Minimum Center Distance}},$$

where center distance denotes the distance between the centers of two adjacent circles. Empirically, a chord ratio of less than 0.17 produces small perturbation and uniform packing density, which leads to high image quality and reduced lens complexity. Observing this rule of thumb, a hexagonal packing strategy can achieve a maximum latitudinal angle span of only 60°.
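The roughly 60° bound can be illustrated with a simplified model (an assumption for illustration, not necessarily the paper's exact layout): take a packed band centered on the equator with a fixed azimuthal spacing of circle centers, so that the center distance scales with the cosine of latitude and the chord ratio reduces to 1/cos(span/2) − 1.

```python
import math

def chord_ratio(span_deg):
    """Chord ratio for latitude-aligned hexagonal packing (simplified model).

    Assumes the packed band is centered on the equator with fixed azimuthal
    spacing, so center distance scales with cos(latitude): the maximum
    distance occurs at the equator and the minimum at the band edge.
    """
    return 1.0 / math.cos(math.radians(span_deg / 2)) - 1.0

# Widest band that keeps the chord ratio under the 0.17 rule of thumb.
span = 0.0
while chord_ratio(span + 0.1) < 0.17:
    span += 0.1

print(f"max latitudinal span ~ {span:.0f} deg")
```

Under these assumptions the bound comes out at about 62°, in line with the roughly 60° limit quoted above.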

Previous work [18] implements a packing strategy based on a distorted icosahedral geodesic. By iteratively subdividing a regular icosahedron and projecting it onto a sphere, this method produces an approximately uniform grid of circles over the whole globe. Figure 2 shows an example of packing 492 circles on the entire spherical surface using this method.

Fig. 2. Close packing 492 circles on a spherical surface using the distorted icosahedral geodesic method.

Even with a global packing method, the extent of the packing area is still limited because lightpaths from different channels may interfere with one another. As shown in Fig. 3, when the close-packed region spans nearly 180°, the lightpaths are obstructed by sensors on the opposite side of the globe. Of course, the maximum packing angle also hinges on the specifications of the optical system.

Fig. 3. Obscuration occurs when two microcameras are sitting in each other’s lightpaths.

Here, we estimate the maximum angle cFoV within which the lightpath stays obscuration-free. We consider a MMS lens of the Galilean style (which has no intermediate image, as opposed to the Keplerian style, which features an intermediate image between the objective lens and the microcameras) with the following parameters: the focal length of the spherical objective lens is fo, the radius of the objective is R, the distance between the stop and the center of the objective is dos, the distance between the entrance pupil and the center of the objective is lϵ, and the half-FoV angle of each sub-imager is α. As depicted in Fig. 4(a), an imaging channel lies on the margin of the multi-channel MMS system. The green line, which connects the entrance point of the marginal ray with the center of the objective lens, serves as the other margin of this multi-channel system. As long as all the channels are confined to the cone bounded by these two margins, the system is obscuration-free. The clear semi-diameter of the objective can be approximated as

$$D_o=(l_\epsilon+R)\,\alpha+\frac{D_\epsilon}{2}=\left(\frac{f_o d_{os}}{f_o-d_{os}}+R\right)\alpha+\frac{f}{2\,F/\#},\tag{1}$$

where Dϵ is the entrance pupil diameter and f is the overall effective focal length. The free angle cFoV can be calculated as

$$c_{\mathrm{FoV}}=\alpha+\pi-\arctan\!\left(\frac{D_o}{R}\right).\tag{2}$$

We assume a design example with key parameters as follows: f=20 mm, F/#=2.5, fo=47.06 mm, R=21.11 mm, dos=27.49 mm, and α=5.7°. Plugging these parameters into Eqs. (1) and (2), the clear angle of view is cFoV=154.7°. Figure 4(b) shows this free-packing cap colored in red. This set of design specifications will also be used in another example with a ring-shaped FoV.
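Eqs. (1) and (2) can be checked numerically against the quoted result; a quick sketch using the design parameters stated above:

```python
import math

# Design parameters from the text.
f     = 20.0                 # overall effective focal length, mm
Fnum  = 2.5                  # F-number
fo    = 47.06                # objective focal length, mm
R     = 21.11                # objective radius, mm
dos   = 27.49                # stop-to-objective-center distance, mm
alpha = math.radians(5.7)    # half-FoV angle of each sub-imager

# Eq. (1): clear semi-diameter of the objective.
l_eps = fo * dos / (fo - dos)            # entrance-pupil distance
Do = (l_eps + R) * alpha + f / (2 * Fnum)

# Eq. (2): obscuration-free packing angle.
cFoV = math.degrees(alpha) + 180.0 - math.degrees(math.atan(Do / R))
print(f"cFoV = {cFoV:.1f} deg")
```

With these numbers the computation reproduces the cFoV = 154.7° quoted in the text.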

Fig. 4. Calculating maximum clear packing angle cFoV within which no light obscuration occurs between any two channels. (a) For spherical symmetry, this cFoV can be determined by lightpath of one channel on the assumed packing boundary. (b) Calculation result for a set of given design parameters specified in the text.

Cameras with a 360° ring-shaped field of view are of great use not only in the public-security domain, in parks, squares, traffic circles, and entry/exit ways, but also in surveillance, navigation, and virtual-reality (VR) and augmented-reality (AR) applications [19]. 360° photography is also called panoramic imaging. A common approach to panoramic imaging is tiling multiple cameras in a circle; as mentioned previously, this usually results in bulky and costly hardware. The rest of this section shows how the MMS architecture addresses this task with superior flexibility.

As illustrated in Fig. 5(a), suppose the camera is installed on the top end of a pole 4 m above the ground. The view angle (the angle formed by the upright pole and the dotted lines) of the camera is 45° when aiming at the inner border and 75° when aiming at the outer border. By simple calculation, the radius of the inner border is 4 m, while the radius of the outer border is about 14.93 m. The distance between the inner circle and the camera is about 5.66 m, and that of the outer circle is about 15.45 m.
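The ring geometry above follows from simple trigonometry; a sketch with the stated pole height and view angles:

```python
import math

h = 4.0                              # camera height above the ground, m
inner_view, outer_view = 45.0, 75.0  # view angles from the upright pole, deg

r_in  = h * math.tan(math.radians(inner_view))   # inner ring radius, m
r_out = h * math.tan(math.radians(outer_view))   # outer ring radius, m
d_in  = h / math.cos(math.radians(inner_view))   # slant distance to inner ring, m
d_out = h / math.cos(math.radians(outer_view))   # slant distance to outer ring, m

print(f"radii: {r_in:.2f} m / {r_out:.2f} m, distances: {d_in:.2f} m / {d_out:.2f} m")
```

The values match those quoted in the text (4 m and ~14.93 m radii; ~5.66 m and ~15.45 m slant distances).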

Fig. 5. MMS camera with ring FoV. (a) MMS camera on a pole with FoV of a ring area. (b) 165 circles are packed on a belt of the top hemisphere with polar angle ranging from 43° to 76°. (c) Layout of a MMS lens design.

A square image-sensor chip is ideal for MMS lens design because of its advantage in mosaicking. The effective focal length f is chosen to be 20 mm, which is adequate for the required angular resolution. The aperture is F/#=2.5, and the FoV of each channel is 11.4°.

The microcamera array region covers an extensive portion of the hemisphere, so a local packing method would yield inferior packing uniformity. Here we configure our MMS lens by choosing a group of circle slots resulting from the distorted icosahedron geodesic method shown in Fig. 2. The slots populated with microcameras are highlighted with red patches in Fig. 5(b), and Fig. 5(c) shows the layout of the corresponding optical design.

The final optical design consists of 165 microcameras covering polar angles from 43° to 76°. The covered FoV is not exactly equal to the requirement because the FoV is aggregated discretely, with a step of 11.4° per channel. Figure 6(a) shows the dimensions of one channel of the optics. The spherical ball lens has a radius of 21.11 mm, and the total track of the optics is 60 mm. The image area of each focal plane is 2.8 mm×2.8 mm, and the resolvable pixel pitch, estimated from the modulation transfer function (MTF) curves shown in Fig. 6(b), is about 1.67 μm. Therefore, each focal plane resolves about 2.8 megapixels, and the total is around 500 megapixels. The detailed design data are available in Table S1 in the supplementary material [20].
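The pixel-count figures follow directly from the quoted image area and resolvable pitch; a quick sketch with the numbers from the text:

```python
# Pixel-count estimate for the 360-degree ring design (numbers from the text).
image_side_mm  = 2.8       # square image area per focal plane, mm
pixel_pitch_mm = 1.67e-3   # resolvable pixel pitch from the MTF curves, mm
n_channels     = 165

pixels_per_channel = (image_side_mm / pixel_pitch_mm) ** 2
total_pixels = n_channels * pixels_per_channel
print(f"{pixels_per_channel/1e6:.1f} MP per channel, {total_pixels/1e6:.0f} MP total")
```

This gives about 2.8 MP per channel and roughly 460 MP in total, consistent with the "around 500 megapixels" figure quoted above.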

Fig. 6. Imaging performance of 360° ring MMS lens. (a) Layout of one channel of the 360° ring FoV MMS lens design. (b) MTF curves.

B. Multifocal Design

For a single-focal-length camera, magnification varies with object range: the farther the object, the smaller the magnification. This property may cause difficulty in recognizing objects dispersed over a large depth range.

A common solution is a zoom lens, which switches to a long focal length for distant objects and a short focal length for close ones. An alternative is a camera cluster consisting of multiple cameras with different focal lengths, with the long-focus cameras covering the remote area and the short-focus cameras covering the close zone.

Compared with these two methods, the MMS lens architecture provides a more compact, modular, and less expensive way of conducting multifocal imaging. In the MMS architecture, the effective focal length of any individual channel can be varied by changing the secondary optics, so multiple focal lengths can be integrated within a single optical system. An example is shown in Fig. 7(a): monitoring a long street with a camera located at one end. Here the object plane is a narrow inclined strip, which causes large variability in imaging subject distance; the viewing angle ranges from 25° to 85°. To capture detailed information over this entire strip, a system with multiple focal lengths is desired. Figure 7(b) shows a MMS lens covering different street segments with channels of varying focal lengths. As the segment moves farther from the camera, the respective channel's focal length increases for more uniform ground sampling.

Fig. 7. Multifocal system. (a) Monitoring traffic of a long street from one end. (b) Multiple imaging channels of the optics. (c) Optical layout of multifocal system.

Channels with short focal lengths cover areas close to the camera and demand a wide field angle; channels with long focal lengths zoom into areas far from the camera with a relatively narrow field angle. An identical image-sensor format can therefore serve all channels. Assume the camera is placed 10 m above the ground, with each channel covering a fraction of the long street shown in Fig. 7(a); the whole camera covers a total surveillance range of 115 m. The first channel, with the shortest focal length, covers a FoV of 16°, while the longest covers a FoV of 6°, resulting in the same image size. Table 1 tabulates the focal length of each channel and the respective object distance of the central field of view.
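The shared sensor format ties each channel's focal length to its FoV. As a rough check, assuming a hypothetical shortest focal length of 15 mm (the lower end of the range discussed in the text), the 16° and 6° channel FoVs quoted above fix the focal length of the narrowest channel:

```python
import math

# Same sensor on every channel: the image half-size fixes focal length vs FoV.
f_short = 15.0                          # assumed shortest focal length, mm
half_fov_short = math.radians(16.0 / 2) # half FoV of the widest channel
half_fov_long  = math.radians(6.0 / 2)  # half FoV of the narrowest channel

image_half = f_short * math.tan(half_fov_short)   # sensor half-size, mm
f_long = image_half / math.tan(half_fov_long)     # focal length of narrow channel
print(f"f_long ~ {f_long:.0f} mm")
```

This yields about 40 mm, i.e., roughly a 2.7× focal-length spread between the widest and narrowest channels under these assumptions.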

Table 1. Characteristics of Each Imaging Channel for a More Uniform Sampling Rate

The seventh channel at the bottom edge poses the greatest depth-of-field (DoF) challenge, as it has the largest focal length and the deepest field depth to cover. As indicated previously, this channel covers viewing angles from 79° to 85°, corresponding to an object range from 52 m to 115 m. If, for example, vehicle plate recognition is the major mission of this camera, a circle of confusion (CoC) of less than 10 mm in object space would be a standard choice; this corresponds to a CoC of approximately 50 μm in the camera’s image plane. In our design example, the lens aperture is D=15 mm, and the nearest object range is p1=52 m. Plugging these quantities together with CoC=10 mm into the first relation in Eq. (3) gives a focusing distance of p≈87 m. Substituting this result into the second relation, the farthest in-focus object range is p2=261 m. Since p2 exceeds the longest designed operating range of 115 m, this DoF coverage guarantees that our system can recognize a vehicle plate anywhere within the monitored region without refocusing.

$$p_1=\frac{D\,p}{D+\mathrm{CoC}},\qquad p_2=\frac{D\,p}{D-\mathrm{CoC}}.\tag{3}$$
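Eq. (3) can be checked numerically with the seventh channel's parameters (all quantities in object space):

```python
# Depth-of-field check for the seventh channel, Eq. (3).
D   = 0.015    # aperture diameter (15 mm), m
CoC = 0.010    # allowed circle of confusion in object space (10 mm), m
p1  = 52.0     # nearest object range, m

p  = p1 * (D + CoC) / D    # focusing distance from the first relation
p2 = D * p / (D - CoC)     # farthest in-focus range from the second relation
print(f"p ~ {p:.1f} m, p2 ~ {p2:.0f} m")
```

This reproduces p ≈ 87 m; carrying the unrounded p forward gives p2 ≈ 260 m, matching the p2 = 261 m in the text up to rounding of p.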

The inclined object plane also leads to image distortion, a consequence of the converging effect in the transverse direction and deformation in the longitudinal direction. Nonetheless, these two effects can be described precisely by a linear perspective camera-projection model between the object plane and the imaging focal plane; therefore, a simple distortion correction in post-processing solves the problem.

In traditional monolithic lens design, size, weight, cost, and aberration or imaging quality are among the many factors that limit achievable system specifications. For a MMS architecture, besides these common factors, physical conflict between array units is another equally important constraint. Beyond these external constraints, system specifications also impose checks and balances on one another: to determine the limit of one specification, the others should be treated as given. In our design example, the aperture F/#=3 is a common value for a security camera, and the image size of each channel is determined by a given digital sensor format. The shared objective lens design is best matched to the center channel with a focal length of 30 mm. Under these conditions, altering a microcamera for increased focal length not only degrades imaging quality but also enlarges the aperture of the optics, which would eventually lead to physical conflict between adjacent microcamera units. Conversely, a reduced system focal length moves the microcamera toward the spherical lens; since the microcamera cannot be located inside the spherical objective, this limits the shortest available focal length. In our design example, physical constraints together with imaging-quality considerations jointly limit the focal length to the range from 15 mm to 45 mm.

The final design features an optical volume of less than 0.4 L. Figure 7(c) shows the layout of the lens design with labels of some critical dimensions. Each imaging channel consists of the commonly shared spherical objective lens and a microcamera. The MTF curves of each channel are shown in Fig. 8. For a given spherical objective lens, there is an optimally matched system focal length at which the best imaging performance is achieved; the performance degrades only mildly as the focal length deviates from this optimum. As demonstrated in Fig. 8, the channels in the middle achieve the highest MTF for both on-axis and off-axis FoVs, while satisfactory performance is maintained as the focal length shifts to either side, with a total zoom ratio of about 3×. Detailed design prescription data are available in Table S2 in the supplementary material at [20].

Fig. 8. MTF curves of each imaging channel in multifocal MMS lens design. (a) MTFs of on-axis FoV. (b) MTFs of half FoV. (c) MTFs of marginal FoV.

C. Combination of Multiple MMS Cameras

As discussed in Section 2.A, lightpath obscuration prevents arbitrary FoV configuration for a single MMS camera. Nonetheless, this limitation can be surmounted by combining multiple MMS cameras. One such example is presented in [17], where multiple MMS lenses are co-boresighted to interleave continuous coverage of a wide FoV. Here we present another example. For the 360° ring FoV lens in Section 2.A, the viewing angle ranges from 43° to 76°; however, light occlusion occurs when the microcamera array zone approaches the equator, as shown in Fig. 3. As illustrated in Fig. 9, one simple solution is to place three MMS lenses back to back, each covering a FoV slightly greater than 120°. Collectively, a 360° panoramic image in the horizontal direction is captured without occlusion.

Fig. 9. Achieving 360° horizontal FoV with three back-to-back MMS lenses.

Another solution bears resemblance to the Facebook Surround 360 camera [21], a spherical camera in which free spaces are reserved between adjacent optics and sensors for light to pass through. To achieve this field of view with a multiscale array, some microcamera positions are reserved as light passages, and image patches captured by multiple MMS cameras are combined for continuous FoV coverage. As shown in Fig. 10(a), four MMS lenses can be stacked vertically and interleaved for complete coverage of a 360° horizontal FoV. All four MMS cameras are identical and merely rotated relative to one another into staggered angular positions. The cone angle of each small circle here is 10°, and there are 36 circles along one orbit of the sphere. As illustrated in Fig. 10(b), each microcamera looks through a clear tunnel formed by the three reserved circles in the horizontal direction, which allows near-obscuration-free light passage. Detailed lens design data can be found in Table S3 in the supplementary material at [20].

Fig. 10. Achieving 360° horizontal FoV by interleaving MMS lenses in a stack. (a) Four MMS lenses are combined to interleave a complete coverage of 360° horizontal FoV. (b) Microcameras and the light windows of 360° horizontal stacking imager.

The last example presented here is an omnidirectional camera, which sees in all directions with uniform angular resolution. Section 2.A estimated that the largest obscuration-free half-angle for a MMS lens is less than 80°, which implies that a minimum of four MMS lenses is required for full 4π spherical FoV coverage. Each of the four cameras is positioned at a vertex of a regular tetrahedron and covers a solid angle slightly greater than π steradians; the extra coverage provides overlap. As demonstrated in Fig. 11(a), the projection of one triangular face of the tetrahedron onto its circumscribed sphere dictates the minimum coverage area of each MMS lens. The largest field angle of this triangular spherical patch, as depicted in Fig. 11(a), is 125.26°. As shown in Fig. 11(b), we crop a packing patch from a globe close-packed with the distorted icosahedron geodesic method. The geometry of our final 4π full-space camera is shown in Fig. 12; bounded by a sphere with a radius of 74 mm, this imager has the potential to achieve a uniform angular resolution of iFoV=83 μrad over full-space coverage, employing the MMS lens design prescription used in our first example above and detailed in Table S1 at [20].
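The quoted angular resolution follows from the ring-design prescription reused here (f = 20 mm, ~1.67 μm resolvable pixel pitch); a quick check:

```python
# Angular-resolution estimate for the full-sphere imager,
# reusing the ring-design prescription from the text.
f_mm     = 20.0   # effective focal length, mm
pixel_um = 1.67   # resolvable pixel pitch, um

# iFoV = pixel pitch / focal length, expressed in microradians.
ifov_urad = pixel_um / (f_mm * 1e3) * 1e6
print(f"iFoV ~ {ifov_urad:.0f} urad")
```

This reproduces the iFoV of roughly 83 μrad stated above.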

Fig. 11. Tetrahedron geometry of full spherical MMS lens. (a) Space segmentation with four MMS lenses, with each one covering a quarter of the full sphere. (b) Close-packed microcameras on one of the four segments.

Fig. 12. Layout view of the full spherical MMS lens.

To provide a quantitative summary of all the design instances presented in this paper, Table 2 describes the field-of-view configuration, angular resolution, information capacity, and physical size of each instance. The table verifies the effectiveness of the MMS lens architecture for building high-pixel-count cameras with versatile FoV configurations and compact form factors.

Table 2. Characteristics of MMS Lens Designs Presented

3. CONCLUSION

In this paper, we have explored various design architectures of MMS lenses. By manipulating the packing pattern of the microcameras, we have designed a 360° ring-FoV MMS lens; this configuration captures roughly 500-megapixel images of a circular ring area. By varying the microcamera designs from one imaging channel to another, we have shown a multifocal design with focal lengths ranging from 15 to 40 mm, capable of covering a street with relatively uniform imaging magnification. Finally, we illustrated the capability of combinations of multiple MMS systems to cover an arbitrary solid angle in 4π space. We conclude that the easy adaptation and manipulation of the MMS lens configuration provides novel design freedom for diverse applications.

Nonetheless, these advantages in easy adaptation, flexible rearrangement, and linearly scaling cost, which stem from the modularity at the core of the design, come with limitations. Transmitting, processing, integrating, and exploring the more fragmented image information produced by a MMS architecture demands more capacity from the supporting electronics, more computational power, and more software capability than conventional cameras do. Fortunately, these challenges are not insurmountable: the required enabling technologies have advanced dramatically over the past decades and are readier than ever to support parallel cameras as a promising direction for next-generation camera products.

Funding

Intel Corporation.

REFERENCES

1. R. Sargent, C. Bartley, P. Dille, J. Keller, I. Nourbakhsh, and R. LeGrand, “Timelapse GigaPan: capturing, sharing, and exploring timelapse gigapixel imagery,” in Fine International Conference on Gigapixel Imaging for Science, Pittsburgh, Pennsylvania, 2010.

2. B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776 (2005). [CrossRef]  

3. N. Fukushima, T. Yendo, T. Fujii, and M. Tanimoto, “Synthesizing wide-angle and arbitrary viewpoint images from a circular camera array,” Proc. SPIE 6056, 60560Z (2006). [CrossRef]  

4. S. R. Tan, M. J. Zhang, W. Wang, and W. Xu, “Aha: an easily extendible high-resolution camera array,” in 2nd Workshop on Digital Media and its Application in Museum & Heritages (IEEE, 2007), pp. 319–323.

5. Y. Taguchi, K. Takahashi, and T. Naemura, “Real-time all-in-focus video-based rendering using a network camera array,” in 3DTV Conference: The True Vision-Capture, Transmission and Display of 3D Video (IEEE, 2008), pp. 241–244.

6. Y. Taguchi, K. Takahashi, and T. Naemura, “Design and implementation of a real-time video-based rendering system using a network camera array,” IEICE Trans. Inf. Syst. E92-D, 1442–1452 (2009). [CrossRef]  

7. D. Pollock, P. Reardon, T. Rogers, C. Underwood, G. Egnal, B. Wilburn, and S. Pitalo, “Multi-lens array system and method,” U.S. patent 9,182,228 B2 (10 November 2015).

8. “Mantis Camera,” 2017, https://www.aqueti.com/products/.

9. “Facebook surround 360,” 2017, https://code.fb.com/video-engineering/introducing-facebook-surround-360-an-open-high-quality-3d-360-video-capture-system/.

10. D. Brady, W. Pang, H. Li, Z. Ma, Y. Tao, and X. Cao, “Parallel cameras,” Optica 5, 127–137 (2018). [CrossRef]  

11. A. W. Lohmann, “Scaling laws for lens systems,” Appl. Opt. 28, 4996–4998 (1989). [CrossRef]  

12. D. J. Brady and N. Hagen, “Multiscale lens design,” Opt. Express 17, 10659–10674 (2009). [CrossRef]  

13. D. J. Brady, M. E. Gehm, R. A. Stack, D. L. Marks, D. S. Kittle, D. R. Golish, E. M. Vera, and S. D. Feller, “Multiscale gigapixel photography,” Nature 486, 386–389 (2012). [CrossRef]  

14. D. L. Marks, E. J. Tremblay, J. E. Ford, and D. J. Brady, “Microcamera aperture scale in monocentric gigapixel cameras,” Appl. Opt. 50, 5824–5833 (2011). [CrossRef]  

15. E. J. Tremblay, D. L. Marks, D. J. Brady, and J. E. Ford, “Design and scaling of monocentric multiscale imagers,” Appl. Opt. 51, 4691–4702 (2012). [CrossRef]  

16. T. Nakamura, D. Kittle, S. Youn, S. Feller, J. Tanida, and D. Brady, “Autofocus for a multiscale gigapixel camera,” Appl. Opt. 52, 8146–8153 (2013). [CrossRef]  

17. W. Pang and D. J. Brady, “Galilean monocentric multiscale optical systems,” Opt. Express 25, 20332–20339 (2017). [CrossRef]  

18. H. S. Son, D. L. Marks, J. Hahn, J. Kim, and D. J. Brady, “Design of a spherical focal surface using close-packed relay optics,” Opt. Express 19, 16132–16138 (2011). [CrossRef]  

19. K. Matzen, M. F. Cohen, B. Evans, J. Kopf, and R. Szeliski, “Low-cost 360 stereo photography and video capture,” ACM Trans. Graph. 36, 148 (2017). [CrossRef]  

20. W. Pang and D. Brady, “Supplemental Material Novel FoV MMS,” 2018, https://doi.org/10.6084/m9.figshare.5876061.v1.

21. G. Krishnan and S. K. Nayar, “Towards a true spherical camera,” Proc. SPIE 7240, 724002 (2009). [CrossRef]  

References

  • View by:
  • |
  • |
  • |

  1. R. Sargent, C. Bartley, P. Dille, J. Keller, I. Nourbakhsh, and R. LeGrand, “Timelapse GigaPan: capturing, sharing, and exploring timelapse gigapixel imagery,” in Fine International Conference on Gigapixel Imaging for Science, Pittsburgh, Pennsylvania, 2010.
  2. B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776 (2005). [CrossRef]
  3. N. Fukushima, T. Yendo, T. Fujii, and M. Tanimoto, “Synthesizing wide-angle and arbitrary viewpoint images from a circular camera array,” Proc. SPIE 6056, 60560Z (2006). [CrossRef]
  4. S. R. Tan, M. J. Zhang, W. Wang, and W. Xu, “Aha: an easily extendible high-resolution camera array,” in 2nd Workshop on Digital Media and its Application in Museum & Heritages (IEEE, 2007), pp. 319–323.
  5. Y. Taguchi, K. Takahashi, and T. Naemura, “Real-time all-in-focus video-based rendering using a network camera array,” in 3DTV Conference: The True Vision-Capture, Transmission and Display of 3D Video (IEEE, 2008), pp. 241–244.
  6. Y. Taguchi, K. Takahashi, and T. Naemura, “Design and implementation of a real-time video-based rendering system using a network camera array,” IEICE Trans. Inf. Syst. E92-D, 1442–1452 (2009). [CrossRef]
  7. D. Pollock, P. Reardon, T. Rogers, C. Underwood, G. Egnal, B. Wilburn, and S. Pitalo, “Multi-lens array system and method,” U.S. patent 9,182,228 B2 (10 November 2015).
  8. “Mantis Camera,” 2017, https://www.aqueti.com/products/.
  9. “Facebook Surround 360,” 2017, https://code.fb.com/video-engineering/introducing-facebook-surround-360-an-open-high-quality-3d-360-video-capture-system/.
  10. D. Brady, W. Pang, H. Li, Z. Ma, Y. Tao, and X. Cao, “Parallel cameras,” Optica 5, 127–137 (2018). [CrossRef]
  11. A. W. Lohmann, “Scaling laws for lens systems,” Appl. Opt. 28, 4996–4998 (1989). [CrossRef]
  12. D. J. Brady and N. Hagen, “Multiscale lens design,” Opt. Express 17, 10659–10674 (2009). [CrossRef]
  13. D. J. Brady, M. E. Gehm, R. A. Stack, D. L. Marks, D. S. Kittle, D. R. Golish, E. M. Vera, and S. D. Feller, “Multiscale gigapixel photography,” Nature 486, 386–389 (2012). [CrossRef]
  14. D. L. Marks, E. J. Tremblay, J. E. Ford, and D. J. Brady, “Microcamera aperture scale in monocentric gigapixel cameras,” Appl. Opt. 50, 5824–5833 (2011). [CrossRef]
  15. E. J. Tremblay, D. L. Marks, D. J. Brady, and J. E. Ford, “Design and scaling of monocentric multiscale imagers,” Appl. Opt. 51, 4691–4702 (2012). [CrossRef]
  16. T. Nakamura, D. Kittle, S. Youn, S. Feller, J. Tanida, and D. Brady, “Autofocus for a multiscale gigapixel camera,” Appl. Opt. 52, 8146–8153 (2013). [CrossRef]
  17. W. Pang and D. J. Brady, “Galilean monocentric multiscale optical systems,” Opt. Express 25, 20332–20339 (2017). [CrossRef]
  18. H. S. Son, D. L. Marks, J. Hahn, J. Kim, and D. J. Brady, “Design of a spherical focal surface using close-packed relay optics,” Opt. Express 19, 16132–16138 (2011). [CrossRef]
  19. K. Matzen, M. F. Cohen, B. Evans, J. Kopf, and R. Szeliski, “Low-cost 360 stereo photography and video capture,” ACM Trans. Graph. 36, 148 (2017). [CrossRef]
  20. W. Pang and D. Brady, “Supplemental Material Novel FoV MMS,” 2018, https://doi.org/10.6084/m9.figshare.5876061.v1.
  21. G. Krishnan and S. K. Nayar, “Towards a true spherical camera,” Proc. SPIE 7240, 724002 (2009). [CrossRef]

Figures (12)

Fig. 1. Hexagonal close packing for a localized FoV output.
Fig. 2. Close packing of 492 circles on a spherical surface using the distorted icosahedral geodesic method.
Fig. 3. Obscuration occurs when two microcameras sit in each other’s light paths.
Fig. 4. Calculating the maximum clear packing angle cFoV within which no light obscuration occurs between any two channels. (a) Owing to spherical symmetry, cFoV can be determined from the light path of one channel on the assumed packing boundary. (b) Calculation result for the set of design parameters specified in the text.
Fig. 5. MMS camera with ring FoV. (a) MMS camera on a pole with a FoV covering a ring-shaped area. (b) 165 circles packed on a belt of the top hemisphere with polar angle ranging from 43° to 76°. (c) Layout of the MMS lens design.
Fig. 6. Imaging performance of the 360° ring MMS lens. (a) Layout of one channel of the 360° ring FoV MMS lens design. (b) MTF curves.
Fig. 7. Multifocal system. (a) Monitoring traffic on a long street from one end. (b) Multiple imaging channels of the optics. (c) Optical layout of the multifocal system.
Fig. 8. MTF curves of each imaging channel in the multifocal MMS lens design. (a) MTFs of the on-axis FoV. (b) MTFs of the half FoV. (c) MTFs of the marginal FoV.
Fig. 9. Achieving 360° horizontal FoV with three back-to-back MMS lenses.
Fig. 10. Achieving 360° horizontal FoV by interleaving MMS lenses in a stack. (a) Four MMS lenses combined to interleave complete coverage of the 360° horizontal FoV. (b) Microcameras and light windows of the 360° horizontal stacking imager.
Fig. 11. Tetrahedral geometry of the full spherical MMS lens. (a) Space segmentation with four MMS lenses, each covering a quarter of the full sphere. (b) Close-packed microcameras on one of the four segments.
Fig. 12. Layout view of the full spherical MMS lens.

Tables (2)


Table 1. Characteristics of Each Imaging Channel for a More Uniform Sampling Rate


Table 2. Characteristics of MMS Lens Designs Presented

Equations (4)

$$\text{Chord Ratio} = \frac{\text{Maximum Center Distance} - \text{Minimum Center Distance}}{\text{Minimum Center Distance}},$$

$$D_o = (l_\epsilon + R)\,\alpha + \frac{D_\epsilon}{2} = \left(\frac{f_o d_{os}}{f_o - d_{os}} + R\right)\alpha + \frac{f}{2\,F/\#},$$

$$\mathrm{cFoV} = \alpha + \pi - \arctan\left(\frac{D_o}{R}\right).$$

$$p_1 = \frac{D\,p}{D + \mathrm{CoC}}, \qquad p_2 = \frac{D\,p}{D - \mathrm{CoC}}.$$
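As a numerical sanity check, the two simplest expressions above can be evaluated directly. This is a minimal sketch assuming the reconstructed forms: the chord ratio as (maximum − minimum) center distance divided by the minimum, and the near/far in-focus distances as p₁,₂ = Dp/(D ± CoC); the function names and the sample values are illustrative, not from the paper.

```python
def chord_ratio(center_distances):
    """Packing-uniformity metric: (max center distance - min center
    distance) / min center distance. A value of 0 means perfectly
    uniform spacing between neighboring microcamera centers."""
    d_min, d_max = min(center_distances), max(center_distances)
    return (d_max - d_min) / d_min

def dof_limits(D, p, coc):
    """Near (p1) and far (p2) in-focus distances for aperture
    diameter D, focus distance p, and circle of confusion CoC,
    using the reconstructed forms p1 = Dp/(D + CoC), p2 = Dp/(D - CoC).
    All quantities share the same length unit."""
    p1 = D * p / (D + coc)
    p2 = D * p / (D - coc)
    return p1, p2

# Example: neighbor spacings from a hypothetical geodesic packing.
print(chord_ratio([1.00, 1.08, 1.12]))  # 0.12: 12% spread in spacing

# Example: D = 10 mm aperture focused at p = 100 mm with CoC = 1 mm.
p1, p2 = dof_limits(D=10.0, p=100.0, coc=1.0)
print(p1, p2)  # near and far limits bracket the focus distance
```

For a uniform sampling rate across the stitched FoV, a small chord ratio is desirable, since the microcamera FoVs then overlap by a nearly constant margin.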
