Abstract

We propose for the first time a deep learning approach to assist lens designers in finding a lens design starting point. Using machine learning, lens design databases can be expanded continuously to produce high-quality starting points from a wide range of optical specifications. A deep neural network (DNN) is trained both to reproduce known design forms (supervised training) and to jointly optimize the optical performance (unsupervised training) to improve generalization. In this work, after being fed reference designs from the literature, the DNN infers high-performance cemented and air-spaced doublets tailored to diverse desired specifications. The framework can be extended to lens systems with more optical surfaces.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

References

  1. D. Sturlesi and D. C. O’Shea, “Future of global optimization in optical design,” in 1990 International Lens Design Conference, vol. 1354 (International Society for Optics and Photonics, 1991), pp. 54–69.
  2. M. Isshiki, H. Ono, K. Hiraga, J. Ishikawa, and S. Nakadate, “Lens Design: Global Optimization with Escape Function,” Opt. Rev. 2(6), 463–470 (1995).
  3. S. Thibault, C. Gagné, J. Beaulieu, and M. Parizeau, “Evolutionary algorithms applied to lens design: case study and analysis,” in Optical Design and Engineering II, vol. 5962 (International Society for Optics and Photonics, 2005).
  4. K. Höschel and V. Lakshminarayanan, “Genetic algorithms for lens design: a review,” J. Opt. 2, 463–470 (2018).
  5. G. W. Forbes and A. E. W. Jones, “Towards global optimization with adaptive simulated annealing,” in 1990 International Lens Design Conference, vol. 1354 (International Society for Optics and Photonics, 1991), pp. 144–154.
  6. C. Menke, “Application of particle swarm optimization to the automatic design of optical systems,” in Optical Design and Engineering VII, vol. 10690 (International Society for Optics and Photonics, 2018).
  7. M. van Turnhout and F. Bociort, “Instabilities and fractal basins of attraction in optical system optimization,” Opt. Express 17(1), 314–328 (2009).
  8. D. Petković, N. T. Pavlović, S. Shamshirband, M. L. Mat Kiah, N. Badrul Anuar, and M. Y. Idna Idris, “Adaptive neuro-fuzzy estimation of optimal lens system parameters,” Opt. Lasers Eng. 55, 84–93 (2014).
  9. S. Shamshirband, D. Petković, N. T. Pavlović, S. Ch, T. A. Altameem, and A. Gani, “Support vector machine firefly algorithm based optimization of lens system,” Appl. Opt. 54(1), 37–45 (2015).
  10. S. W. Weller, “Neural network optimization, components, and design selection,” in 1990 International Lens Design Conference, vol. 1354 (International Society for Optics and Photonics, 1991), pp. 371–379.
  11. Y. LeCun and Y. Bengio, “Convolutional networks for images, speech, and time series,” in The Handbook of Brain Theory and Neural Networks (MIT Press, 1995), pp. 255–258.
  12. A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” in Advances in Neural Information Processing Systems 25, (Curran Associates, Inc., 2012), pp. 1097–1105.
  13. S. Hochreiter and J. Schmidhuber, “Long Short-Term Memory,” Neural Comput. 9(8), 1735–1780 (1997).
  14. A. Graves, A. Mohamed, and G. Hinton, “Speech recognition with deep recurrent neural networks,” in 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (2013), pp. 6645–6649.
  15. I. Sutskever, O. Vinyals, and Q. V. Le, “Sequence to Sequence Learning with Neural Networks,” in Advances in Neural Information Processing Systems 27, (Curran Associates, Inc., 2014), pp. 3104–3112.
  16. Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521(7553), 436–444 (2015).
  17. I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning (MIT Press, 2016).
  18. Zemax Development Corp., “Zebase 6 Optical Design Collection,” (2007).
  19. G. Côté, J.-F. Lalonde, and S. Thibault, “Toward Training a Deep Neural Network to Optimize Lens Designs,” in Frontiers in Optics / Laser Science, (Optical Society of America, 2018), p. JW4A.28.
  20. A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer, “Automatic differentiation in PyTorch,” in 31st Conference on Neural Information Processing Systems (NIPS 2017), (2017).
  21. W. J. Smith, Modern Lens Design (McGraw Hill Professional, 2004).
  22. G. Klambauer, T. Unterthiner, A. Mayr, and S. Hochreiter, “Self-Normalizing Neural Networks,” in Advances in Neural Information Processing Systems 30, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, eds. (Curran Associates, Inc., 2017) pp. 971–980.
  23. Schott Corporation, “Optical Glass Catalog,” (2017).
  24. D. P. Kingma and J. Ba, “Adam: A Method for Stochastic Optimization,” in 3rd International Conference on Learning Representations (ICLR 2015), (2015).

Figures (8)

Fig. 1. Illustration of the variables used to describe an optical system in this work. Lens materials are modeled by the refractive index $n$ and Abbe number $v$. Thicknesses $t$ represent the distance between adjacent optical surfaces including the image plane. The surface curvatures are represented by $c$.
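For readers who want to experiment, a minimal sketch of how such a parameterization might be held in code is shown below. The container name, tensor layout, and exact dimensions are illustrative assumptions (an air-spaced doublet with four refracting surfaces), not the authors' implementation.

```python
import torch

# Illustrative container for an air-spaced doublet: curvatures c for each optical
# surface, thicknesses t between adjacent surfaces (including the gap to the image
# plane), and one pair of glass variables (refractive index n, Abbe number v) per element.
lens = {
    "c": torch.zeros(4),         # surface curvatures
    "t": torch.zeros(4),         # axial thicknesses, last entry up to the image plane
    "glass": torch.zeros(2, 2),  # per element: [n, v]
}
```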
Fig. 2. Overview of the deep learning framework. Boxes refer to operators and plain text to data. When inferring lens systems, only the top part (labeled “inference”) is necessary. To train the DNN, the unsupervised or supervised loss is computed and then backpropagated into the DNN through the corresponding colored arrows. In unsupervised training, the lens system performance is evaluated by computing the RMS spot size. In supervised training, the reference designs are converted to a normalized form and then compared to the DNN outputs using the mean squared error.
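As a rough sketch of one training step implied by this figure, the snippet below combines an unsupervised spot-size term with a supervised mean-squared-error term. The function and argument names are assumptions: `spot_size_loss` stands in for the differentiable ray-tracing evaluation, `reference` is assumed to be an already-normalized reference design, and the weighting is only illustrative.

```python
import torch
import torch.nn.functional as F

def training_step(dnn, optimizer, specs, spot_size_loss, reference=None, lambda_s=1.0):
    """One hybrid training step; helper names are illustrative placeholders."""
    design = dnn(specs)                    # inference: specifications -> lens variables
    loss = spot_size_loss(design, specs)   # unsupervised term (RMS spot size based)
    if reference is not None:              # supervised term when a reference design exists
        loss = loss + lambda_s * F.mse_loss(design, reference)
    optimizer.zero_grad()
    loss.backward()                        # backpropagate into the DNN
    optimizer.step()
    return float(loss)
```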
Fig. 3. Evolution during training of the mean normalized RMS spot size for the hybrid and unsupervised schemes, and of the mean squared error for the hybrid scheme. For every curve, we show the minimum and maximum over 3 experiments initialized with random DNN parameters. The mean RMS spot size cannot be compared directly between GGA and GAGA designs (cemented and air-spaced doublets, respectively) because the respective DNNs are trained on different input specifications.
Fig. 4. Heat map of the average spot size for unitary focal length as a function of the entrance pupil diameter (EPD) and half field of view (HFOV) given as inputs to the DNN. The reference systems used for supervision are also shown as individual dots with their respective color-encoded RMS spot size. With the hybrid training scheme, the DNN learns to infer lens designs with an RMS spot size at least on par with that of the reference designs over most of the specifications considered.
Fig. 5. Comparison of the lens designs inferred by our model to the reference designs from the literature. All systems are scaled to a 100 mm focal length. Dashed lines are used for the reference designs. In addition to each 2D layout, we show the distortion, astigmatism and lateral color w.r.t. the relative field angle, and the standard ray aberration plots. Aberration units are in mm except for the relative distortion. Best viewed by zooming in the electronic version.
Fig. 6. Model architecture. The model is a stacked fully-connected (FC) DNN. The number of output units of each FC layer is given within parentheses. D is the dimensionality or number of neurons per layer, L is the number of hidden layers and S is the size of the stack. The input and output dimensions are given for lens designs with a glass-air-glass-air structure. Through a grid search, the best parameters found for the telescopic objectives studied in this work were $\textrm {S}=16$, $\textrm {L}=7$ and $\textrm {D}=32$.
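To make the D and L parameters concrete, here is a sketch of one fully-connected sub-network in PyTorch. The SELU activation is an assumption suggested by the cited self-normalizing-network reference, and the exact way S such blocks are stacked in the authors' model is not reproduced here.

```python
import torch.nn as nn

def fc_block(in_dim, out_dim, D=32, L=7):
    # One fully-connected sub-network: L hidden layers of D units each,
    # followed by a linear output layer producing the lens variables.
    layers, width = [], in_dim
    for _ in range(L):
        layers += [nn.Linear(width, D), nn.SELU()]  # SELU is an assumption
        width = D
    layers.append(nn.Linear(width, out_dim))
    return nn.Sequential(*layers)
```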
Fig. 7. Computation of valid glass variables. Data points were taken from the Schott glass catalog. The data is modeled as a multivariate normal distribution whose principal axes are shown by the arrows. We bound the glass variables to the gray area, which encompasses 2.1 standard deviations.
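A minimal numpy sketch of the bounding described in this caption is given below, assuming the glass variables are the $(n, v)$ pairs and that the clamp is applied along the catalog's principal axes. The 2.1-sigma limit comes from the caption; the array names and catalog loading are hypothetical.

```python
import numpy as np

def clamp_glass(g, catalog, n_sigma=2.1):
    """Clamp a glass pair g = [n, v] to within n_sigma standard deviations
    along the principal axes of the catalog distribution (gray area of Fig. 7)."""
    mu = catalog.mean(axis=0)                      # catalog: array of shape [num_glasses, 2]
    eigval, eigvec = np.linalg.eigh(np.cov(catalog, rowvar=False))
    z = eigvec.T @ (g - mu) / np.sqrt(eigval)      # coordinates along principal axes, in sigmas
    z = np.clip(z, -n_sigma, n_sigma)              # bound to the allowed region
    return mu + eigvec @ (z * np.sqrt(eigval))     # back to (n, v) space
```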
Fig. 8.
Fig. 8. Illustration of some variables involved in meridional ray tracing through spherical surfaces. Primes decorate variables relative to the refracted ray. $I$ and $U$ are the angle of the ray relative to the surface normal and to the optical axis, respectively. $Q$ is the shortest distance from the surface vertex to the ray and $c$ is the surface curvature.

Tables (1)

Table 1. Minimum and maximum values of the uniform distributions used to draw the input specifications when training the DNN. The last line (in bold) for each type of sequence gives the values used for generalization with unsupervised training. All dimensions are given for an effective focal length of 1 and the HFOV is given in radians.
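A minimal sketch of such a specification sampler is shown below. The numeric bounds here are placeholders for illustration only; the actual ranges are those listed in Table 1.

```python
import torch

def sample_specs(batch_size, epd_range=(0.05, 0.35), hfov_range=(0.01, 0.25)):
    # Draw entrance pupil diameters and half fields of view (radians) uniformly,
    # with all dimensions expressed for an effective focal length of 1.
    epd = torch.empty(batch_size).uniform_(*epd_range)
    hfov = torch.empty(batch_size).uniform_(*hfov_range)
    return torch.stack([epd, hfov], dim=1)
```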

Equations (10)

$$s = \frac{1}{N_H} \sum_{H} \sqrt{\frac{1}{N_w N_p} \sum_{w,p} \left( y_{Hwp} - \overline{y_H} \right)^2}\,. \tag{1}$$
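In code, this field-averaged RMS spot size can be written in a few lines of PyTorch. The tensor layout (fields × wavelengths × rays) is an assumption for illustration.

```python
import torch

def mean_rms_spot_size(y):
    # y: transverse ray heights at the image plane, shape [N_H, N_w, N_p]
    centroid = y.mean(dim=(1, 2), keepdim=True)          # per-field centroid, \bar{y}_H
    rms = ((y - centroid) ** 2).mean(dim=(1, 2)).sqrt()  # RMS spot size per field H
    return rms.mean()                                     # average over the N_H fields
```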
$$q_{Hwp} = \frac{1}{N_k} \sum_{k} \Big[ \max\big( \left| \sin I_{Hwpk} \right| - 1,\, 0 \big) + \max\big( \left| \sin I'_{Hwpk} \right| - 1,\, 0 \big) + \max\big( -\Delta z_{Hwpk},\, 0 \big) \Big]\,. \tag{2}$$
$$L_u = s \left( 1 + \lambda_q \frac{1}{N_H N_w N_p} \sum_{H,w,p} q_{Hwp} \right)\,. \tag{3}$$
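The two expressions above translate almost directly into PyTorch. In the sketch below, the tensor shapes and the value of $\lambda_q$ are assumptions.

```python
import torch

def ray_failure_penalty(sin_i, sin_i_prime, dz):
    # Eq. (2): penalize rays with |sin I| or |sin I'| exceeding 1 (failed intersection
    # or refraction) and negative propagation distances, averaged over the N_k surfaces.
    terms = ((sin_i.abs() - 1).clamp(min=0)
             + (sin_i_prime.abs() - 1).clamp(min=0)
             + (-dz).clamp(min=0))
    return terms.mean(dim=-1)        # one penalty per (field, wavelength, ray)

def unsupervised_loss(s, q, lambda_q=1.0):
    # Eq. (3): scale the mean RMS spot size s by the average ray-failure penalty.
    return s * (1 + lambda_q * q.mean())
```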
$$L_s = \frac{\displaystyle \sum_{i=1}^{N-1} \left( c_i - \hat{c}_i \right)^2 + \sum_{i=1}^{N} \left( t_{i,\textrm{raw}} - \hat{t}_{i,\textrm{raw}} \right)^2 + \sum_{i=1}^{M} \left( g_{i,1} - \hat{g}_{i,1} \right)^2 + \sum_{i=1}^{M} \left( g_{i,2} - \hat{g}_{i,2} \right)^2}{2N + 2M - 1}\,. \tag{4}$$
$$L_h = \overline{L_u} + \lambda_s \overline{L_s}\,. \tag{5}$$
$$f(t_\textrm{raw}) = t_\textrm{min} + \textrm{softplus}\left( t_\textrm{raw} - t_\textrm{min}, \beta \right) - \textrm{softplus}\left( t_\textrm{raw} - (t_\textrm{range} + t_\textrm{min}), \beta \right)\,. \tag{6}$$
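Equation (6) acts as a smooth two-sided clamp on the raw thickness. A direct PyTorch transcription might look like the following; the argument names and the value of $\beta$ are assumptions.

```python
import torch.nn.functional as F

def constrain_thickness(t_raw, t_min, t_range, beta=10.0):
    # Smoothly maps an unconstrained raw thickness into [t_min, t_min + t_range]:
    # for very negative t_raw both softplus terms vanish (result ~ t_min), while for
    # very large t_raw they cancel except for t_range (result ~ t_min + t_range).
    return (t_min
            + F.softplus(t_raw - t_min, beta=beta)
            - F.softplus(t_raw - (t_range + t_min), beta=beta))
```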
$$\sin I = Qc + \sin U\,, \tag{7}$$
$$\sin I' = \frac{n \sin I}{n'}\,, \tag{8}$$
$$U' = U - I + I'\,, \tag{9}$$
$$Q' = \frac{1}{c} \left( \sin I' - \sin U' \right)\,. \tag{10}$$
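Taken together, Eqs. (7)–(10) describe one refraction in a meridional ray trace, with the variables of Fig. 8. A compact, differentiable sketch is given below; the function name is illustrative, and flat surfaces ($c = 0$) would need a separate branch since Eq. (10) divides by the curvature.

```python
import torch

def refract(Q, U, c, n, n_prime):
    """Trace a meridional ray across one spherical surface of curvature c,
    going from a medium of index n to one of index n_prime (Eqs. (7)-(10))."""
    sin_I = Q * c + torch.sin(U)               # Eq. (7): angle of incidence from Q and U
    sin_Ip = n * sin_I / n_prime               # Eq. (8): Snell's law
    I, Ip = torch.asin(sin_I), torch.asin(sin_Ip)
    Up = U - I + Ip                            # Eq. (9): refracted ray angle w.r.t. the axis
    Qp = (sin_Ip - torch.sin(Up)) / c          # Eq. (10): refracted ray's Q'
    return Qp, Up
```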