Abstract

Conventionally, the field of view of a camera is understood as the angular extent of a convex circular or rectangular region. Parallel camera architectures with computational image stitching, however, allow a field of view of arbitrary shape. Monocentric multiscale lenses go further, realizing an arbitrary field of view in camera volumes comparable to those of conventional single-lens systems. In contrast with conventional wide-field-of-view systems, multiscale design can also achieve nearly uniform resolution across the entire field of view. This paper presents several design studies that obtain unconventional fields of view using this approach.

© 2018 Optical Society of America





Figures (12)

Fig. 1.
Fig. 1. Hexagonal close packing for a localized FoV output.
Fig. 2.
Fig. 2. Close packing of 492 circles on a spherical surface using the distorted icosahedral geodesic method.
Fig. 3.
Fig. 3. Obscuration occurs when two microcameras sit in each other’s light paths.
Fig. 4.
Fig. 4. Calculating the maximum clear packing angle cFoV within which no light obscuration occurs between any two channels. (a) Given the spherical symmetry, the cFoV can be determined from the light path of one channel on the assumed packing boundary. (b) Calculation result for the set of design parameters specified in the text.
Fig. 5.
Fig. 5. MMS camera with a ring FoV. (a) MMS camera on a pole with an annular FoV. (b) 165 circles packed on a belt of the top hemisphere with polar angle ranging from 43° to 76°. (c) Layout of an MMS lens design.
Fig. 6.
Fig. 6. Imaging performance of the 360° ring MMS lens. (a) Layout of one channel of the 360° ring FoV MMS lens design. (b) MTF curves.
Fig. 7.
Fig. 7. Multifocal system. (a) Monitoring traffic along a long street from one end. (b) Multiple imaging channels of the optics. (c) Optical layout of the multifocal system.
Fig. 8.
Fig. 8. MTF curves of each imaging channel in the multifocal MMS lens design. (a) MTFs at the on-axis FoV. (b) MTFs at half FoV. (c) MTFs at the marginal FoV.
Fig. 9.
Fig. 9. Achieving 360° horizontal FoV with three back-to-back MMS lenses.
Fig. 10.
Fig. 10. Achieving 360° horizontal FoV by interleaving MMS lenses in a stack. (a) Four MMS lenses combined, interleaving to completely cover the 360° horizontal FoV. (b) Microcameras and the light windows of the 360° horizontal stacking imager.
Fig. 11.
Fig. 11. Tetrahedral geometry of the full spherical MMS lens. (a) Space segmentation with four MMS lenses, each covering a quarter of the full sphere. (b) Close-packed microcameras on one of the four segments.
Fig. 12.
Fig. 12. Layout view of the full spherical MMS lens.
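The close packing of circles on a sphere shown in Fig. 2 can be sketched numerically. The code below uses a Fibonacci (golden-angle) lattice as a simpler stand-in for the paper’s distorted icosahedral geodesic method, and evaluates the chord-ratio uniformity metric defined in the Equations section; the function names and the choice of lattice are illustrative assumptions, not the authors’ construction.

```python
import numpy as np

def fibonacci_sphere(n):
    """Place n nearly uniformly spaced points on the unit sphere
    using a Fibonacci (golden-angle) lattice."""
    i = np.arange(n)
    golden = (1 + 5 ** 0.5) / 2
    z = 1 - 2 * (i + 0.5) / n          # uniform in z => uniform in area
    theta = 2 * np.pi * i / golden     # golden-angle azimuthal steps
    r = np.sqrt(1 - z ** 2)
    return np.stack([r * np.cos(theta), r * np.sin(theta), z], axis=1)

def chord_ratio(points):
    """(max nearest-neighbor chord - min) / min, per the chord-ratio metric."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)        # ignore each point's distance to itself
    nn = d.min(axis=1)                 # chord length to nearest neighbor
    return (nn.max() - nn.min()) / nn.min()

pts = fibonacci_sphere(492)            # same channel count as Fig. 2
print(round(chord_ratio(pts), 3))
```

A lower chord ratio means the microcamera centers are more evenly spaced, which is why the geodesic-style packings of Fig. 2 are attractive for tiling a monocentric objective’s spherical image surface.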

Tables (2)

Table 1. Characteristics of Each Imaging Channel for a More Uniform Sampling Rate

Table 2. Characteristics of MMS Lens Designs Presented

Equations (4)


$$\text{Chord Ratio}=\frac{\text{Maximum Center Distance}-\text{Minimum Center Distance}}{\text{Minimum Center Distance}},$$

$$D_o=\left(l_\epsilon+R\right)\alpha+\frac{D_\epsilon}{2}=\left(\frac{f_o\,d_{os}}{d_{os}-f_o}+R\right)\alpha+\frac{f}{2\,F/\#},$$

$$\mathrm{cFoV}=\alpha+\pi-\arctan\!\left(\frac{D_o}{R}\right),$$

$$p_1=\frac{D_p}{D+\mathrm{CoC}},\qquad p_2=\frac{D_p}{D-\mathrm{CoC}}.$$
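The obscuration relations above lend themselves to a quick numeric check. The sketch below evaluates the obscuration diameter $D_o$ and the clear packing angle cFoV as read from the flattened equations; the thin-lens form for $l_\epsilon$ and all parameter values are illustrative assumptions, not figures from the paper.

```python
import math

def obscuration_diameter(f_o, d_os, R, alpha, f, f_number):
    """D_o = (l_eps + R)*alpha + D_eps/2, taking l_eps from the thin-lens
    relation l_eps = f_o*d_os/(d_os - f_o) and D_eps/2 = f/(2*F#)."""
    l_eps = f_o * d_os / (d_os - f_o)   # intermediate-image distance (assumed)
    return (l_eps + R) * alpha + f / (2 * f_number)

def clear_fov(d_o, R, alpha):
    """cFoV = alpha + pi - arctan(D_o/R): the packing angle within which
    no channel sits in another channel's light path."""
    return alpha + math.pi - math.atan(d_o / R)

# Illustrative numbers (mm, radians): 20 mm objective focal length, 1 m object
# distance, 30 mm shell radius, 3 deg channel half-angle, f/2 microcamera.
alpha = math.radians(3)
D_o = obscuration_diameter(f_o=20.0, d_os=1000.0, R=30.0,
                           alpha=alpha, f=4.0, f_number=2.0)
print(D_o, clear_fov(D_o, R=30.0, alpha=alpha))
```

Shrinking the microcamera aperture $D_o$ (a faster objective or smaller channel FoV $\alpha$) widens the clear packing angle, consistent with the geometry sketched in Fig. 4.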
