Abstract

Optical neural networks (ONNs) integrated on photonic chips have recently attracted extensive attention because they are expected to perform the same pattern recognition tasks as electronic platforms with higher efficiency and lower power consumption. However, efficient learning algorithms for training ONNs directly on chip-integrated systems are still lacking. In this article, we propose a novel learning strategy based on neuroevolution to design and train ONNs. Two typical neuroevolution algorithms are used to determine the hyper-parameters of the ONNs and to optimize the weights (phase shifters) in their connections. To demonstrate the effectiveness of the training algorithms, the trained ONNs are applied to classification tasks on the iris plants dataset, the wine recognition dataset, and modulation format recognition. The results show that the accuracy and stability of the neuroevolution-based training algorithms are competitive with those of traditional learning algorithms. Compared with previous works, we introduce an efficient training method for ONNs and demonstrate their broad application prospects in pattern recognition, reinforcement learning, and beyond.
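As a rough, self-contained illustration of the idea (not the authors' implementation), the Python sketch below evolves the phase-shifter settings of a small simulated Mach-Zehnder-interferometer (MZI) mesh on the iris dataset using a toy evolutionary loop (truncation selection plus Gaussian mutation). The mesh layout, the simplified MZI transfer matrix, the intensity readout, and all hyper-parameters (population size, mutation scale, number of generations) are illustrative assumptions rather than the configuration reported in the paper.

    # Minimal sketch: gradient-free (evolutionary) training of a simulated ONN.
    # All model details and hyper-parameters below are illustrative assumptions.
    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split

    RNG = np.random.default_rng(0)
    N_MODES = 4                              # one optical mode per iris feature
    PAIRS = [(0, 1), (2, 3), (1, 2)] * 2     # simplified rectangular MZI mesh
    N_PHASES = 2 * len(PAIRS)                # (theta, phi) per MZI

    def mzi(theta, phi):
        # 2x2 unitary of one Mach-Zehnder interferometer (simplified model).
        return np.array([[np.exp(1j * phi) * np.cos(theta), -np.sin(theta)],
                         [np.exp(1j * phi) * np.sin(theta),  np.cos(theta)]])

    def forward(phases, X):
        # Propagate real-valued inputs through the mesh, detect intensities.
        fields = X.astype(complex)
        for k, (i, j) in enumerate(PAIRS):
            theta, phi = phases[2 * k], phases[2 * k + 1]
            fields[:, [i, j]] = fields[:, [i, j]] @ mzi(theta, phi).T
        return np.abs(fields) ** 2           # photodetection (output power)

    def accuracy(phases, X, y):
        scores = forward(phases, X)[:, :3]   # first 3 detectors -> 3 classes
        return np.mean(np.argmax(scores, axis=1) == y)

    # Toy evolutionary loop over the phase-shifter settings.
    X, y = load_iris(return_X_y=True)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)   # normalise input power
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

    POP, GENS, SIGMA = 60, 100, 0.1                    # assumed hyper-parameters
    pop = RNG.uniform(0, 2 * np.pi, size=(POP, N_PHASES))
    for gen in range(GENS):
        fitness = np.array([accuracy(p, Xtr, ytr) for p in pop])
        parents = pop[np.argsort(fitness)[-POP // 2:]]           # keep best half
        children = parents[RNG.integers(len(parents), size=POP - len(parents))]
        children = children + RNG.normal(0, SIGMA, children.shape)  # mutation
        pop = np.vstack([parents, children])

    best = pop[np.argmax([accuracy(p, Xtr, ytr) for p in pop])]
    print("test accuracy:", accuracy(best, Xte, yte))

Because the loop only requires forward evaluations of the network, the same structure applies whether the fitness is computed from a simulation, as in this sketch, or from optical powers measured at the outputs of a physical chip.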

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement



[Crossref]

Saxena, V.

X. Wu, V. Saxena, K. Zhu, and S. Balagopal, “A CMOS Spiking Neuron for Brain-Inspired Neural Networks With Resistive Synapses andIn SituLearning,” IEEE Trans. Circuits Syst. II 62(11), 1088–1092 (2015).
[Crossref]

Schaul, T.

M. Hessel, J. Modayil, H. Van Hasselt, T. Schaul, G. Ostrovski, W. Dabney, D. Horgan, B. Piot, M. Azar, and D. Silver, “Rainbow: Combining improvements in deep reinforcement learning,” in Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI, 2018).

Schmidhuber, J.

J. Schmidhuber, “Deep learning in neural networks: An overview,” Neural Networks 61, 85–117 (2015).
[Crossref]

S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation 9(8), 1735–1780 (1997).
[Crossref]

Senior, A.

G. Hinton, L. Deng, D. Yu, G. E. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, and T. N. Sainath, “Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups,” IEEE Signal Process. Mag. 29(6), 82–97 (2012).
[Crossref]

Sermanet, P.

C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Proceedings of the IEEE conference on computer vision and pattern recognition, (CVPR, 2015), 1–9.

Shastri, B. J.

P. R. Prucnal, B. J. Shastri, T. F. de Lima, M. A. Nahmias, and A. N. Tait, “Recent progress in semiconductor excitable lasers for photonic spike processing,” Adv. Opt. Photonics 8(2), 228–299 (2016).
[Crossref]

A. N. Tait, T. F. De Lima, M. A. Nahmias, B. J. Shastri, and P. R. Prucnal, “Multi-channel control for microring weight banks,” Opt. Express 24(8), 8895–8906 (2016).
[Crossref]

M. A. Nahmias, B. J. Shastri, A. N. Tait, and P. R. Prucnal, “A leaky integrate-and-fire laser neuron for ultrafast cognitive computing,” IEEE J. Sel. Top. Quantum Electron. 19(5), 1–12 (2013).
[Crossref]

Shen, Y.

Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljačić, “Deep learning with coherent nanophotonic circuits,” Nat. Photonics 11(7), 441–446 (2017).
[Crossref]

H. Bagherian, S. Skirlo, Y. Shen, H. Meng, V. Ceperic, and M. Soljacic, “On-Chip Optical Convolutional Neural Networks,” arXiv preprint arXiv:1808.03303 (2018).

Shi, Y.

Sideris, C.

Silver, D.

M. Hessel, J. Modayil, H. Van Hasselt, T. Schaul, G. Ostrovski, W. Dabney, D. Horgan, B. Piot, M. Azar, and D. Silver, “Rainbow: Combining improvements in deep reinforcement learning,” in Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI, 2018).

V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller, “Playing atari with deep reinforcement learning,” arXiv preprint arXiv:1312.5602 (2013).

Simonyan, K.

K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556 (2014).

Skirlo, S.

Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljačić, “Deep learning with coherent nanophotonic circuits,” Nat. Photonics 11(7), 441–446 (2017).
[Crossref]

H. Bagherian, S. Skirlo, Y. Shen, H. Meng, V. Ceperic, and M. Soljacic, “On-Chip Optical Convolutional Neural Networks,” arXiv preprint arXiv:1808.03303 (2018).

Sludds, A.

R. Hamerly, L. Bernstein, A. Sludds, M. Soljačić, and D. Englund, “Large-scale optical neural networks based on photoelectric multiplication,” Phys. Rev. X 9(2), 021032 (2019).
[Crossref]

Soljacic, M.

R. Hamerly, L. Bernstein, A. Sludds, M. Soljačić, and D. Englund, “Large-scale optical neural networks based on photoelectric multiplication,” Phys. Rev. X 9(2), 021032 (2019).
[Crossref]

Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljačić, “Deep learning with coherent nanophotonic circuits,” Nat. Photonics 11(7), 441–446 (2017).
[Crossref]

H. Bagherian, S. Skirlo, Y. Shen, H. Meng, V. Ceperic, and M. Soljacic, “On-Chip Optical Convolutional Neural Networks,” arXiv preprint arXiv:1808.03303 (2018).

Sorger, V. J.

J. Touch, A.-H. Badawy, and V. J. Sorger, “Optical computing,” Nanophotonics 6(3), 503–505 (2017).
[Crossref]

Srinivasa, N.

M. Davies, N. Srinivasa, T.-H. Lin, G. Chinya, Y. Cao, S. H. Choday, G. Dimou, P. Joshi, N. Imam, and S. Jain, “Loihi: A neuromorphic manycore processor with on-chip learning,” IEEE Micro 38(1), 82–99 (2018).
[Crossref]

Stanley, K. O.

K. O. Stanley, J. Clune, J. Lehman, and R. Miikkulainen, “Designing neural networks through neuroevolution,” Nat. Mach. Intell. 1(1), 24–35 (2019).
[Crossref]

F. P. Such, V. Madhavan, E. Conti, J. Lehman, K. O. Stanley, and J. Clune, “Deep neuroevolution: Genetic algorithms are a competitive alternative for training deep neural networks for reinforcement learning,” arXiv preprint arXiv:1712.06567 (2017).

Steinbrecher, G. R.

G. R. Steinbrecher, J. P. Olson, D. Englund, and J. Carolan, “Quantum optical neural networks,” npj Quantum Inf 5(1), 60 (2019).
[Crossref]

Such, F. P.

F. P. Such, V. Madhavan, E. Conti, J. Lehman, K. O. Stanley, and J. Clune, “Deep neuroevolution: Genetic algorithms are a competitive alternative for training deep neural networks for reinforcement learning,” arXiv preprint arXiv:1712.06567 (2017).

Sun, G.

C. Zhang, P. Li, G. Sun, Y. Guan, B. Xiao, and J. Cong, “Optimizing fpga-based accelerator design for deep convolutional neural networks,” in Proceedings of the 2015 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, (ACM, 2015), 161–170.

Sun, J.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, (IEEE, 2016), 770–778.

Sun, N.

T. Chen, Z. Du, N. Sun, J. Wang, C. Wu, Y. Chen, and O. Temam, “Diannao: A small-footprint high-throughput accelerator for ubiquitous machine-learning,” in ACM Sigplan Notices, (ACM, 2014), 269–284.

Sun, X.

Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljačić, “Deep learning with coherent nanophotonic circuits,” Nat. Photonics 11(7), 441–446 (2017).
[Crossref]

Z. Yu, H. Cui, and X. Sun, “Genetically optimized on-chip wideband ultracompact reflectors and Fabry–Perot cavities,” Photonics Res. 5(6), B15–B19 (2017).
[Crossref]

Sun, Z.

A. Autere, H. Jussila, Y. Dai, Y. Wang, H. Lipsanen, and Z. Sun, “Nonlinear optics with 2D layered materials,” Adv. Mater. 30(24), 1705963 (2018).
[Crossref]

Sutskever, I.

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in neural information processing systems, (NIPS, 2012), 1097–1105.

Szegedy, C.

C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Proceedings of the IEEE conference on computer vision and pattern recognition, (CVPR, 2015), 1–9.

Tait, A. N.

P. R. Prucnal, B. J. Shastri, T. F. de Lima, M. A. Nahmias, and A. N. Tait, “Recent progress in semiconductor excitable lasers for photonic spike processing,” Adv. Opt. Photonics 8(2), 228–299 (2016).
[Crossref]

A. N. Tait, T. F. De Lima, M. A. Nahmias, B. J. Shastri, and P. R. Prucnal, “Multi-channel control for microring weight banks,” Opt. Express 24(8), 8895–8906 (2016).
[Crossref]

M. A. Nahmias, B. J. Shastri, A. N. Tait, and P. R. Prucnal, “A leaky integrate-and-fire laser neuron for ultrafast cognitive computing,” IEEE J. Sel. Top. Quantum Electron. 19(5), 1–12 (2013).
[Crossref]

Tavanaei, A.

A. Tavanaei, M. Ghodrati, S. R. Kheradpisheh, T. Masquelier, and A. Maida, “Deep learning in spiking neural networks,” Neural Networks 111, 47–63 (2019).
[Crossref]

Tefas, A.

G. Mourgias-Alexandris, A. Tsakyridis, N. Passalis, A. Tefas, K. Vyrsokinos, and N. Pleros, “An all-optical neuron with sigmoid activation function,” Opt. Express 27(7), 9620–9630 (2019).
[Crossref]

N. Passalis, G. Mourgias-Alexandris, A. Tsakyridis, N. Pleros, and A. Tefas, “Variance preserving initialization for training deep neuromorphic photonic networks with sinusoidal activations,” in ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), (IEEE, 2019), 1483–1487.

Temam, O.

T. Chen, Z. Du, N. Sun, J. Wang, C. Wu, Y. Chen, and O. Temam, “Diannao: A small-footprint high-throughput accelerator for ubiquitous machine-learning,” in ACM Sigplan Notices, (ACM, 2014), 269–284.

Temple, S.

S. B. Furber, F. Galluppi, S. Temple, and L. A. Plana, “The spinnaker project,” Proc. IEEE 102(5), 652–665 (2014).
[Crossref]

Touch, J.

J. Touch, A.-H. Badawy, and V. J. Sorger, “Optical computing,” Nanophotonics 6(3), 503–505 (2017).
[Crossref]

Tsai, P.-C.

P.-H. Fu, S.-C. Lo, P.-C. Tsai, K.-L. Lee, and P.-K. Wei, “Optimization for Gold Nanostructure-Based Surface Plasmon Biosensors Using a Microgenetic Algorithm,” ACS Photonics 5(6), 2320–2327 (2018).
[Crossref]

Tsakyridis, A.

G. Mourgias-Alexandris, A. Tsakyridis, N. Passalis, A. Tefas, K. Vyrsokinos, and N. Pleros, “An all-optical neuron with sigmoid activation function,” Opt. Express 27(7), 9620–9630 (2019).
[Crossref]

N. Passalis, G. Mourgias-Alexandris, A. Tsakyridis, N. Pleros, and A. Tefas, “Variance preserving initialization for training deep neuromorphic photonic networks with sinusoidal activations,” in ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), (IEEE, 2019), 1483–1487.

Van Hasselt, H.

M. Hessel, J. Modayil, H. Van Hasselt, T. Schaul, G. Ostrovski, W. Dabney, D. Horgan, B. Piot, M. Azar, and D. Silver, “Rainbow: Combining improvements in deep reinforcement learning,” in Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI, 2018).

Vanhoucke, V.

G. Hinton, L. Deng, D. Yu, G. E. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, and T. N. Sainath, “Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups,” IEEE Signal Process. Mag. 29(6), 82–97 (2012).
[Crossref]

C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Proceedings of the IEEE conference on computer vision and pattern recognition, (CVPR, 2015), 1–9.

Veli, M.

X. Lin, Y. Rivenson, N. T. Yardimci, M. Veli, Y. Luo, M. Jarrahi, and A. Ozcan, “All-optical machine learning using diffractive deep neural networks,” Science 361(6406), 1004–1008 (2018).
[Crossref]

Vuckovic, J.

S. Molesky, Z. Lin, A. Y. Piggott, W. Jin, J. Vucković, and A. W. Rodriguez, “Inverse design in nanophotonics,” Nat. Photonics 12(11), 659–670 (2018).
[Crossref]

Vyrsokinos, K.

Walmsley, I. A.

Wang, J.

T. Zhang, J. Wang, Q. Liu, J. Zhou, J. Dai, X. Han, Y. Zhou, and K. Xu, “Efficient spectrum prediction and inverse design for plasmonic waveguide systems based on artificial neural networks,” Photonics Res. 7(3), 368–380 (2019).
[Crossref]

T. Chen, Z. Du, N. Sun, J. Wang, C. Wu, Y. Chen, and O. Temam, “Diannao: A small-footprint high-throughput accelerator for ubiquitous machine-learning,” in ACM Sigplan Notices, (ACM, 2014), 269–284.

Wang, X.

H. Zhou, Y. Zhao, X. Wang, D. Gao, J. Dong, and X. Zhang, “Self-learning photonic signal processor with an optical neural network chip,” arXiv preprint arXiv:1902.07318 (2019).

Wang, Y.

A. Autere, H. Jussila, Y. Dai, Y. Wang, H. Lipsanen, and Z. Sun, “Nonlinear optics with 2D layered materials,” Adv. Mater. 30(24), 1705963 (2018).
[Crossref]

Warde-Farley, D.

J. Bergstra, F. Bastien, O. Breuleux, P. Lamblin, R. Pascanu, O. Delalleau, G. Desjardins, D. Warde-Farley, I. Goodfellow, and A. Bergeron, “Theano: Deep learning on gpus with python,” in NIPS 2011, BigLearning Workshop, Granada, Spain, (Citeseer, 2011), 1–48.

Wei, P.-K.

P.-H. Fu, S.-C. Lo, P.-C. Tsai, K.-L. Lee, and P.-K. Wei, “Optimization for Gold Nanostructure-Based Surface Plasmon Biosensors Using a Microgenetic Algorithm,” ACS Photonics 5(6), 2320–2327 (2018).
[Crossref]

Wierstra, D.

V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller, “Playing atari with deep reinforcement learning,” arXiv preprint arXiv:1312.5602 (2013).

Wierzynski, C.

Williamson, I. A.

I. A. Williamson, T. W. Hughes, M. Minkov, B. Bartlett, S. Pai, and S. Fan, “Reprogrammable Electro-Optic Nonlinear Activation Functions for Optical Neural Networks,” arXiv preprint arXiv:1903.04579 (2019).

Wolpert, D. H.

D. H. Wolpert and W. G. Macready, “No free lunch theorems for optimization,” IEEE Trans. Evol. Computat. 1(1), 67–82 (1997).
[Crossref]

Wu, C.

T. Chen, Z. Du, N. Sun, J. Wang, C. Wu, Y. Chen, and O. Temam, “Diannao: A small-footprint high-throughput accelerator for ubiquitous machine-learning,” in ACM Sigplan Notices, (ACM, 2014), 269–284.

Wu, X.

X. Wu, V. Saxena, K. Zhu, and S. Balagopal, “A CMOS Spiking Neuron for Brain-Inspired Neural Networks With Resistive Synapses andIn SituLearning,” IEEE Trans. Circuits Syst. II 62(11), 1088–1092 (2015).
[Crossref]

Xiao, B.

C. Zhang, P. Li, G. Sun, Y. Guan, B. Xiao, and J. Cong, “Optimizing fpga-based accelerator design for deep convolutional neural networks,” in Proceedings of the 2015 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, (ACM, 2015), 161–170.

Xu, K.

T. Zhang, J. Wang, Q. Liu, J. Zhou, J. Dai, X. Han, Y. Zhou, and K. Xu, “Efficient spectrum prediction and inverse design for plasmonic waveguide systems based on artificial neural networks,” Photonics Res. 7(3), 368–380 (2019).
[Crossref]

Yardimci, N. T.

X. Lin, Y. Rivenson, N. T. Yardimci, M. Veli, Y. Luo, M. Jarrahi, and A. Ozcan, “All-optical machine learning using diffractive deep neural networks,” Science 361(6406), 1004–1008 (2018).
[Crossref]

Young, T.

T. Young, D. Hazarika, S. Poria, and E. Cambria, “Recent trends in deep learning based natural language processing,” IEEE Comput. Intell. Mag. 13(3), 55–75 (2018).
[Crossref]

Yu, D.

G. Hinton, L. Deng, D. Yu, G. E. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, and T. N. Sainath, “Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups,” IEEE Signal Process. Mag. 29(6), 82–97 (2012).
[Crossref]

Yu, Z.

Z. Yu, H. Cui, and X. Sun, “Genetically optimized on-chip wideband ultracompact reflectors and Fabry–Perot cavities,” Photonics Res. 5(6), B15–B19 (2017).
[Crossref]

Zeilinger, A.

M. Reck, A. Zeilinger, H. J. Bernstein, and P. Bertani, “Experimental realization of any discrete unitary operator,” Phys. Rev. Lett. 73(1), 58–61 (1994).
[Crossref]

Zhang, C.

C. Zhang, P. Li, G. Sun, Y. Guan, B. Xiao, and J. Cong, “Optimizing fpga-based accelerator design for deep convolutional neural networks,” in Proceedings of the 2015 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, (ACM, 2015), 161–170.

Zhang, J.

M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, and J. Zhang, “End to end learning for self-driving cars,” arXiv preprint arXiv:1604.07316 (2016).

Zhang, T.

T. Zhang, J. Wang, Q. Liu, J. Zhou, J. Dai, X. Han, Y. Zhou, and K. Xu, “Efficient spectrum prediction and inverse design for plasmonic waveguide systems based on artificial neural networks,” Photonics Res. 7(3), 368–380 (2019).
[Crossref]

Zhang, X.

H. Zhou, Y. Zhao, X. Wang, D. Gao, J. Dong, and X. Zhang, “Self-learning photonic signal processor with an optical neural network chip,” arXiv preprint arXiv:1902.07318 (2019).

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, (IEEE, 2016), 770–778.

Zhao, S.

Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljačić, “Deep learning with coherent nanophotonic circuits,” Nat. Photonics 11(7), 441–446 (2017).
[Crossref]

Zhao, Y.

H. Zhou, Y. Zhao, X. Wang, D. Gao, J. Dong, and X. Zhang, “Self-learning photonic signal processor with an optical neural network chip,” arXiv preprint arXiv:1902.07318 (2019).

Zhou, H.

H. Zhou, Y. Zhao, X. Wang, D. Gao, J. Dong, and X. Zhang, “Self-learning photonic signal processor with an optical neural network chip,” arXiv preprint arXiv:1902.07318 (2019).

Zhou, J.

T. Zhang, J. Wang, Q. Liu, J. Zhou, J. Dai, X. Han, Y. Zhou, and K. Xu, “Efficient spectrum prediction and inverse design for plasmonic waveguide systems based on artificial neural networks,” Photonics Res. 7(3), 368–380 (2019).
[Crossref]

Zhou, Y.

T. Zhang, J. Wang, Q. Liu, J. Zhou, J. Dai, X. Han, Y. Zhou, and K. Xu, “Efficient spectrum prediction and inverse design for plasmonic waveguide systems based on artificial neural networks,” Photonics Res. 7(3), 368–380 (2019).
[Crossref]

Zhu, K.

X. Wu, V. Saxena, K. Zhu, and S. Balagopal, “A CMOS Spiking Neuron for Brain-Inspired Neural Networks With Resistive Synapses andIn SituLearning,” IEEE Trans. Circuits Syst. II 62(11), 1088–1092 (2015).
[Crossref]

Zisserman, A.

K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556 (2014).

ACS Photonics (1)

P.-H. Fu, S.-C. Lo, P.-C. Tsai, K.-L. Lee, and P.-K. Wei, “Optimization for Gold Nanostructure-Based Surface Plasmon Biosensors Using a Microgenetic Algorithm,” ACS Photonics 5(6), 2320–2327 (2018).
[Crossref]

Adv. Mater. (1)

A. Autere, H. Jussila, Y. Dai, Y. Wang, H. Lipsanen, and Z. Sun, “Nonlinear optics with 2D layered materials,” Adv. Mater. 30(24), 1705963 (2018).
[Crossref]

Adv. Opt. Photonics (1)

P. R. Prucnal, B. J. Shastri, T. F. de Lima, M. A. Nahmias, and A. N. Tait, “Recent progress in semiconductor excitable lasers for photonic spike processing,” Adv. Opt. Photonics 8(2), 228–299 (2016).
[Crossref]

Front. Neurosci. (1)

M. Pfeiffer and T. Pfeil, “Deep learning with spiking neurons: opportunities and challenges,” Front. Neurosci. 12(774), 1–18 (2018).
[Crossref]

IEEE Comput. Intell. Mag. (1)

T. Young, D. Hazarika, S. Poria, and E. Cambria, “Recent trends in deep learning based natural language processing,” IEEE Comput. Intell. Mag. 13(3), 55–75 (2018).
[Crossref]

IEEE Geosci. Remote Sens. Lett. (1)

P. Ghamisi and J. A. Benediktsson, “Feature selection based on hybridization of genetic algorithm and particle swarm optimization,” IEEE Geosci. Remote Sens. Lett. 12(2), 309–313 (2015).
[Crossref]

IEEE J. Sel. Top. Quantum Electron. (1)

M. A. Nahmias, B. J. Shastri, A. N. Tait, and P. R. Prucnal, “A leaky integrate-and-fire laser neuron for ultrafast cognitive computing,” IEEE J. Sel. Top. Quantum Electron. 19(5), 1–12 (2013).
[Crossref]

IEEE Micro (1)

M. Davies, N. Srinivasa, T.-H. Lin, G. Chinya, Y. Cao, S. H. Choday, G. Dimou, P. Joshi, N. Imam, and S. Jain, “Loihi: A neuromorphic manycore processor with on-chip learning,” IEEE Micro 38(1), 82–99 (2018).
[Crossref]

IEEE Signal Process. Mag. (1)

G. Hinton, L. Deng, D. Yu, G. E. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, and T. N. Sainath, “Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups,” IEEE Signal Process. Mag. 29(6), 82–97 (2012).
[Crossref]

IEEE Trans. Circuits Syst. II (1)

X. Wu, V. Saxena, K. Zhu, and S. Balagopal, “A CMOS Spiking Neuron for Brain-Inspired Neural Networks With Resistive Synapses andIn SituLearning,” IEEE Trans. Circuits Syst. II 62(11), 1088–1092 (2015).
[Crossref]

IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. (1)

F. Akopyan, J. Sawada, A. Cassidy, R. Alvarez-Icaza, J. Arthur, P. Merolla, N. Imam, Y. Nakamura, P. Datta, and G.-J. Nam, “Truenorth: Design and tool flow of a 65 mw 1 million neuron programmable neurosynaptic chip,” IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 34(10), 1537–1557 (2015).
[Crossref]

IEEE Trans. Evol. Computat. (1)

D. H. Wolpert and W. G. Macready, “No free lunch theorems for optimization,” IEEE Trans. Evol. Computat. 1(1), 67–82 (1997).
[Crossref]

Int. J. Prod. Res. (1)

W. Li, S. Ong, and A. Nee, “Hybrid genetic algorithm and simulated annealing approach for the optimization of process plans for prismatic parts,” Int. J. Prod. Res. 40(8), 1899–1922 (2002).
[Crossref]

Laser Photonics Rev. (1)

W. Bogaerts and L. Chrostowski, “Silicon photonics circuit design: methods, tools and challenges,” Laser Photonics Rev. 12(4), 1700237 (2018).
[Crossref]

Nanophotonics (1)

J. Touch, A.-H. Badawy, and V. J. Sorger, “Optical computing,” Nanophotonics 6(3), 503–505 (2017).
[Crossref]

Nat. Mach. Intell. (1)

K. O. Stanley, J. Clune, J. Lehman, and R. Miikkulainen, “Designing neural networks through neuroevolution,” Nat. Mach. Intell. 1(1), 24–35 (2019).
[Crossref]

Nat. Photonics (2)

S. Molesky, Z. Lin, A. Y. Piggott, W. Jin, J. Vucković, and A. W. Rodriguez, “Inverse design in nanophotonics,” Nat. Photonics 12(11), 659–670 (2018).
[Crossref]

Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljačić, “Deep learning with coherent nanophotonic circuits,” Nat. Photonics 11(7), 441–446 (2017).
[Crossref]

Nature (1)

Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521(7553), 436–444 (2015).
[Crossref]

Neural Computation (1)

S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation 9(8), 1735–1780 (1997).
[Crossref]

Neural Networks (3)

J. Schmidhuber, “Deep learning in neural networks: An overview,” Neural Networks 61, 85–117 (2015).
[Crossref]

A. Tavanaei, M. Ghodrati, S. R. Kheradpisheh, T. Masquelier, and A. Maida, “Deep learning in spiking neural networks,” Neural Networks 111, 47–63 (2019).
[Crossref]

S. R. Kulkarni and B. Rajendran, “Spiking neural networks for handwritten digit recognition—Supervised learning and network optimization,” Neural Networks 103, 118–127 (2018).
[Crossref]

npj Quantum Inf (1)

G. R. Steinbrecher, J. P. Olson, D. Englund, and J. Carolan, “Quantum optical neural networks,” npj Quantum Inf 5(1), 60 (2019).
[Crossref]

Opt. Express (3)

Opt. Lett. (1)

Optica (3)

Pattern Recognition (1)

S. J. Roberts, R. Everson, and I. Rezek, “Maximum certainty data partitioning,” Pattern Recognition 33(5), 833–839 (2000).
[Crossref]

Pattern Recognition Letters (1)

M. Längkvist, L. Karlsson, and A. Loutfi, “A review of unsupervised feature learning and deep learning for time-series modeling,” Pattern Recognition Letters 42, 11–24 (2014).
[Crossref]

Photonics Res. (2)

T. Zhang, J. Wang, Q. Liu, J. Zhou, J. Dai, X. Han, Y. Zhou, and K. Xu, “Efficient spectrum prediction and inverse design for plasmonic waveguide systems based on artificial neural networks,” Photonics Res. 7(3), 368–380 (2019).
[Crossref]

Z. Yu, H. Cui, and X. Sun, “Genetically optimized on-chip wideband ultracompact reflectors and Fabry–Perot cavities,” Photonics Res. 5(6), B15–B19 (2017).
[Crossref]

Phys. Rev. Lett. (1)

M. Reck, A. Zeilinger, H. J. Bernstein, and P. Bertani, “Experimental realization of any discrete unitary operator,” Phys. Rev. Lett. 73(1), 58–61 (1994).
[Crossref]

Phys. Rev. X (1)

R. Hamerly, L. Bernstein, A. Sludds, M. Soljačić, and D. Englund, “Large-scale optical neural networks based on photoelectric multiplication,” Phys. Rev. X 9(2), 021032 (2019).
[Crossref]

Proc. IEEE (1)

S. B. Furber, F. Galluppi, S. Temple, and L. A. Plana, “The spinnaker project,” Proc. IEEE 102(5), 652–665 (2014).
[Crossref]

Science (1)

X. Lin, Y. Rivenson, N. T. Yardimci, M. Veli, Y. Luo, M. Jarrahi, and A. Ozcan, “All-optical machine learning using diffractive deep neural networks,” Science 361(6406), 1004–1008 (2018).
[Crossref]

Signal Processing (1)

E. E. Azzouz and A. K. Nandi, “Automatic identification of digital modulation types,” Signal Processing 47(1), 55–69 (1995).
[Crossref]

Other (18)

R. Kohavi, “A study of cross-validation and bootstrap for accuracy estimation and model selection,” in Ijcai, (Montreal, Canada, 1995), 1137–1145.

https://github.com/fancompute/neuroptica .

F. P. Such, V. Madhavan, E. Conti, J. Lehman, K. O. Stanley, and J. Clune, “Deep neuroevolution: Genetic algorithms are a competitive alternative for training deep neural networks for reinforcement learning,” arXiv preprint arXiv:1712.06567 (2017).

M. Hessel, J. Modayil, H. Van Hasselt, T. Schaul, G. Ostrovski, W. Dabney, D. Horgan, B. Piot, M. Azar, and D. Silver, “Rainbow: Combining improvements in deep reinforcement learning,” in Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI, 2018).

H. Bagherian, S. Skirlo, Y. Shen, H. Meng, V. Ceperic, and M. Soljacic, “On-Chip Optical Convolutional Neural Networks,” arXiv preprint arXiv:1808.03303 (2018).

N. Passalis, G. Mourgias-Alexandris, A. Tsakyridis, N. Pleros, and A. Tefas, “Variance preserving initialization for training deep neuromorphic photonic networks with sinusoidal activations,” in ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), (IEEE, 2019), 1483–1487.

I. A. Williamson, T. W. Hughes, M. Minkov, B. Bartlett, S. Pai, and S. Fan, “Reprogrammable Electro-Optic Nonlinear Activation Functions for Optical Neural Networks,” arXiv preprint arXiv:1903.04579 (2019).

H. Zhou, Y. Zhao, X. Wang, D. Gao, J. Dong, and X. Zhang, “Self-learning photonic signal processor with an optical neural network chip,” arXiv preprint arXiv:1902.07318 (2019).

J. Bergstra, F. Bastien, O. Breuleux, P. Lamblin, R. Pascanu, O. Delalleau, G. Desjardins, D. Warde-Farley, I. Goodfellow, and A. Bergeron, “Theano: Deep learning on gpus with python,” in NIPS 2011, BigLearning Workshop, Granada, Spain, (Citeseer, 2011), 1–48.

C. Zhang, P. Li, G. Sun, Y. Guan, B. Xiao, and J. Cong, “Optimizing fpga-based accelerator design for deep convolutional neural networks,” in Proceedings of the 2015 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, (ACM, 2015), 161–170.

T. Chen, Z. Du, N. Sun, J. Wang, C. Wu, Y. Chen, and O. Temam, “Diannao: A small-footprint high-throughput accelerator for ubiquitous machine-learning,” in ACM Sigplan Notices, (ACM, 2014), 269–284.

M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, and J. Zhang, “End to end learning for self-driving cars,” arXiv preprint arXiv:1604.07316 (2016).

V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller, “Playing atari with deep reinforcement learning,” arXiv preprint arXiv:1312.5602 (2013).

S. Gu, E. Holly, T. Lillicrap, and S. Levine, “Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates,” in 2017 IEEE international conference on robotics and automation (ICRA), (IEEE, 2017), 3389–3396.

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in neural information processing systems, (NIPS, 2012), 1097–1105.

K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556 (2014).

C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Proceedings of the IEEE conference on computer vision and pattern recognition, (CVPR, 2015), 1–9.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, (IEEE, 2016), 770–778.

Cited By

OSA participates in Crossref's Cited-By Linking service. Citing articles from OSA journals and other participating publishers are listed here.

Alert me when this article is cited.


Figures (6)

Fig. 1. (a) The network architecture of the ANNs, which includes input neurons, hidden layers, and output neurons. (b) Decomposition of the ANNs into a series of layers that implement linear matrix multiplications and nonlinear transform functions. (c) Physical implementation of the OIU, which is composed of programmable MZIs.
Fig. 2. The flowcharts of the learning algorithms for the ONNs based on GA (a) and PSO (b); a minimal code sketch of the GA loop is given after the figure captions.
Fig. 3. The calculated results of the ONNs trained by GA. (a) A simple dataset provided by the neuroptica simulation platform, which segments the square space into two triangular regions. (b) The MSEs and classification accuracies of the ONNs trained by AVM and GA for the simple dataset. (c) The MSEs of the ONNs trained by GA for three test datasets. (d) The classification accuracies of the ONNs trained by GA for three test datasets.
Fig. 4. The MSEs and classification accuracies of the trained ONNs with different population sizes (a), mutation probabilities (b), crossover probabilities (c), and selection operators (d) of GA.
Fig. 5. The calculated results of the ONNs trained by PSO. (a) A simple dataset provided by the neuroptica simulation platform, which segments the square space into a ring-shaped region and its surroundings. (b) The MSEs and classification accuracies of the ONNs trained by PSO and AVM for the simple dataset. (c) The MSEs of the ONNs trained by PSO for three test datasets. (d) The classification accuracies of the ONNs trained by PSO for three test datasets.
Fig. 6. The MSEs and classification accuracies of the trained ONNs for different population sizes (a), inertia weights (b), and velocity ranges (c) of the PSO, and for different numbers of layers in the ONNs (f).
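The GA training loop outlined in Fig. 2(a), together with the hyper-parameters swept in Fig. 4 (population size, mutation probability, crossover probability, and selection operator), can be condensed into a short Python sketch. The code below is illustrative rather than a reproduction of the authors' implementation: evaluate_onn is a stand-in fitness function (real training would replace it with the MSE of the ONN simulated, for instance, with the neuroptica platform, or measured from a chip), and tournament selection with uniform crossover is one reasonable operator choice rather than necessarily the configuration used in the paper.

import numpy as np

def evaluate_onn(phases):
    """Stand-in fitness so the sketch runs end to end; replace with the MSE of an
    ONN whose phase shifters are set to `phases` (simulated or measured)."""
    return float(np.mean(np.sin(phases / 2.0) ** 2))

def train_onn_ga(n_phases, pop_size=40, generations=200,
                 p_crossover=0.8, p_mutation=0.1, seed=None):
    """Genetic-algorithm optimization of an ONN phase-shifter vector (illustrative)."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(0.0, 2 * np.pi, size=(pop_size, n_phases))    # phases in [0, 2*pi)

    for _ in range(generations):
        fitness = np.array([evaluate_onn(ind) for ind in pop])      # lower MSE is better

        # Tournament selection: each parent is the fitter of two random individuals.
        pairs = rng.integers(0, pop_size, size=(pop_size, 2))
        winners = np.where(fitness[pairs[:, 0]] < fitness[pairs[:, 1]],
                           pairs[:, 0], pairs[:, 1])
        parents = pop[winners]

        # Uniform crossover between consecutive parents.
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            if rng.random() < p_crossover:
                mask = rng.random(n_phases) < 0.5
                children[i, mask], children[i + 1, mask] = (
                    parents[i + 1, mask], parents[i, mask])

        # Mutation: redraw a phase uniformly with probability p_mutation.
        mutate = rng.random(children.shape) < p_mutation
        children[mutate] = rng.uniform(0.0, 2 * np.pi, size=int(mutate.sum()))

        # Elitism: carry the current best individual into the next generation.
        children[0] = pop[np.argmin(fitness)]
        pop = children

    fitness = np.array([evaluate_onn(ind) for ind in pop])
    return pop[np.argmin(fitness)], float(fitness.min())

For example, train_onn_ga(n_phases=12, seed=0) returns the best phase vector found and its fitness; because each candidate is scored only by forward evaluation, the same loop could in principle score candidates with measured chip outputs instead of a simulator.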

Equations (3)

Equations on this page are rendered with MathJax.

$$\omega \leftarrow \omega - \eta \left( \alpha \, \frac{\partial R(\omega)}{\partial \omega} + \frac{\partial L}{\partial \omega} \right) \tag{1}$$
$$V_i^{k+1} = W V_i^{k} + c_1 r_1 \left( pb_i^{k} - X_i^{k} \right) + c_2 r_2 \left( gb_d^{k} - X_i^{k} \right) \tag{2}$$
$$X_i^{k+1} = X_i^{k} + V_i^{k+1} \tag{3}$$
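Equation (1) is the usual gradient-descent-style update of the weights ω, with learning rate η, loss L, and a regularization term R(ω) weighted by α, while Eqs. (2) and (3) are the PSO velocity and position updates applied to the phase-shifter vector: W is the inertia weight, c1 and c2 are acceleration coefficients, r1 and r2 are uniform random numbers, pb_i^k is the personal best of particle i, and gb^k is the global best at iteration k. A minimal numpy sketch of the corresponding PSO loop follows; as with the GA sketch above, evaluate_onn is a stand-in fitness function and the coefficient values are placeholders, not the settings used in the paper.

import numpy as np

def evaluate_onn(phases):
    """Stand-in fitness (same placeholder as in the GA sketch); replace with the
    MSE of the ONN for the given phase-shifter settings."""
    return float(np.mean(np.sin(phases / 2.0) ** 2))

def train_onn_pso(n_phases, n_particles=40, iterations=200,
                  w=0.7, c1=1.5, c2=1.5, v_max=0.5, seed=None):
    """Particle-swarm optimization of an ONN phase-shifter vector, following Eqs. (2)-(3)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 2 * np.pi, size=(n_particles, n_phases))   # positions X_i
    V = np.zeros_like(X)                                            # velocities V_i

    pbest = X.copy()                                                 # personal bests pb_i
    pbest_f = np.array([evaluate_onn(x) for x in X])
    gbest = pbest[np.argmin(pbest_f)].copy()                         # global best gb

    for _ in range(iterations):
        r1 = rng.random((n_particles, n_phases))
        r2 = rng.random((n_particles, n_phases))
        # Eq. (2): velocity update with inertia weight w and accelerations c1, c2.
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        V = np.clip(V, -v_max, v_max)          # bounded velocity range (cf. Fig. 6(c))
        # Eq. (3): position update.
        X = X + V

        f = np.array([evaluate_onn(x) for x in X])
        improved = f < pbest_f
        pbest[improved] = X[improved]
        pbest_f[improved] = f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()

    return gbest, float(pbest_f.min())

Like the GA loop, this update needs only forward evaluations of the network, which is what makes neuroevolution attractive when gradients of the physical system are not directly available.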
