Abstract

Photonic neural network implementations have been gaining considerable attention as a potentially disruptive future technology. Demonstrating learning in large-scale neural networks is essential to establish photonic machine learning substrates as viable information processing systems. However, photonic neural networks combining numerous nonlinear nodes with fully parallel and efficient learning hardware have so far been lacking. We demonstrate a network of up to 2025 diffractively coupled photonic nodes, forming a large-scale recurrent neural network. Using a digital micromirror device, we realize reinforcement learning. Our scheme is fully parallel, and the passive weights maximize energy efficiency and bandwidth. The computational output efficiently converges, and we achieve very good performance.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. INTRODUCTION

Multiple concepts of neural networks (NNs) have initiated a revolution in the way we process information. Deep NNs outperform humans in challenges previously deemed unsolvable by computers [1]. Among others, these systems are now capable of solving non-trivial computational problems in optics [2]. At the same time, reservoir computing (RC) emerged as a recurrent NN (RNN) concept [3]. Initially, RC received substantial attention due to the excellent prediction performance achieved with minimal optimization effort. However, it was quickly realized that RC is highly attractive for analog hardware implementations [4,5].

As employed by the machine learning community, NNs consist of a large number of nonlinear nodes interacting with each other. Evolving the NNs' state requires performing vector-matrix products with possibly millions of entries. NN concepts therefore fundamentally benefit from parallelism. Consequently, photonics was identified as an attractive alternative to electronic implementations [6,7]. Early implementations were bulky and suffered from a lack of adequate technology and NN concepts. This has recently started to change, first because RC enabled a tremendous complexity reduction of analog electronic and photonic RNNs [5,8–11]. In addition, integrated photonic platforms have matured, and integrated photonic NNs are now feasible [12]. Various demonstrations of how a particular network of neurons can be implemented have been realized in hardware. Yet, NNs consisting of numerous photonic nonlinear nodes combined with photonically implemented learning have so far been demonstrated only in delay systems controlled by a field-programmable gate array [13]. Due to the time multiplexing, delay-system NNs fundamentally require such auxiliary infrastructure, and computational speed suffers due to their serial nature.

While networks with multiple nodes are more challenging to implement, they offer key advantages in terms of parallelism and speed, and for realizing the essential vector-matrix products. Here, we demonstrate a network of up to 2025 nonlinear network nodes, where each node is a pixel of a spatial light modulator (SLM). Recurrent and complex network connections are implemented using a diffractive optical element (DOE), an intrinsically parallel and passive device [14]. Simulations based on the angular spectrum of plane waves show that the concept is scalable to well over 90,000 nodes. In a photonic RNN with N = 900 nodes, we implement learning using a digital micro-mirror device (DMD). The DMD is intrinsically parallel as well and, once the weights have been trained, passive and energy efficient. The coupling and learning concepts' bandwidth and power consumption are in practice not impacted by the system's size, offering attractive scaling properties. Here, we apply such a passive and parallel readout layer to an analog hardware RNN and introduce learning strategies that improve the performance of such systems. Using reinforcement learning, we implement time series prediction with excellent performance. Our findings open the door to novel and versatile photonic NN concepts.

2. NONLINEAR NODES AND DIFFRACTIVE NETWORK

Figure 1(a) conceptually illustrates our RNN. Information enters the system via a single input node, from where it is injected into a recurrently connected network of nonlinear nodes. The computational result is provided at the single output node after summing the network's state according to the weight matrix W^DMD. Following the RC concept, one can choose the input and recurrent internal weights randomly [3]. Here, we create a complex and recurrently connected network using imaging that is spatially structured via a DOE, resulting in the internal connectivity matrix W^DOE [14].

Fig. 1. (a) Schematic illustration of a recurrent neural network. (b) Experimental setup. The NN state encoded on the SLM is imaged onto the camera (CAM) through a polarizing beam splitter (PBS) and a diffractive optical element (DOE). In this way, we realize a recurrently coupled network of Ikeda maps. A digital micromirror device (DMD) creates a spatially modulated image of the SLM’s state. The integrated state, obtained via superimposing the modulated intensities, is detected, creating the photonic RNN’s output. (c) RNN’s coupling matrix established by the DOE, with the inset showing a zoom into a smaller region.


In Fig. 1(b), we schematically illustrate our experimental setup. A laser illumination field (Thorlabs LP660-SF20, λ = 661.2 nm, I_bias = 89.69 mA, T = 23°C) is adjusted to s polarization. A consecutive 50/50 beam splitter (BS) reflects the illumination beam towards the polarizing beam splitter cube (PBS), from where it reflects further towards the SLM (Hamamatsu X13267-01). The BS's transmission creates the output port for our photonic RNN, and for the 50/50 splitting ratio the output power is maximized. By focusing the illumination laser onto the back focal plane of the first microscope objective (MO1, Nikon CFI Plan Achro 10×), SLM pixel i is illuminated by a plane wave of amplitude E_i^0. The λ/2-plate between the PBS and MO1 is adjusted such that the SLM operates in intensity modulation mode. Consequently, the p-polarized optical field transmitted through the PBS for pixel i = 1…N at integer time n is given by

$$E_i(n) = E_i^0 \cos\left(\frac{2\pi}{\kappa^{\mathrm{SLM}}}\left(x_i^{\mathrm{SLM}}(n) + \theta_i^0\right)\right), \tag{1}$$
where x_i^SLM(n) ∈ {0, 1, …, 255} is the SLM pixel's gray scale value and κ^SLM = 244.6 ± 1.6 is the conversion between pixel gray scale and polarization angle. Finally, the gray scale offset θ_i^0 = 11.1 ± 1.1 is a device-related constant. The uncertainties given for κ^SLM and θ_i^0 correspond to the standard deviations measured across the 900 pixels of our network.

Ignoring for now the DOE's effect for explanatory purposes, the transmitted field is imaged (MO2, same as MO1) onto a mirror. A double pass through the λ/4-plate results in an s-polarized field, which is fully reflected by the PBS and consecutively imaged (MO3, Nikon CFI Plan Fluor 4×) onto the camera (CAM, Thorlabs DCC1545M), creating the camera state x_i^C(n) = α|E_i|^2, with α = (GS/I_sat)·ND and x_i^C(n) ∈ {0, 1, …, 255}. Here, GS = 255 is the 8-bit camera gray scale, I_sat its saturation intensity, and ND the transmission of a neutral density filter chosen such that the camera's dynamic range is best exploited while over-exposure is avoided. x_i^C(n) is linearly rescaled in size to match the number of active SLM pixels, which is necessary due to (i) an optical imaging magnification of 2.5, and (ii) the different pixel sizes of the SLM (12.5 μm) and the camera (5.2 μm). After multiplying the rescaled state x̃_i^C(n) with the scalar feedback gain β, we add the phase offset θ̃_i and send x_i^SLM(n+1) = β x̃_i^C(n) + θ̃_i back to the SLM. Defining the network's new state x(n+1) as the intensity transmitted through the PBS, our system's dynamical evolution is therefore governed by uncoupled Ikeda maps:

$$x_i(n+1) = \alpha |E_i^0|^2 \cos^2\left(\frac{2\pi}{\kappa^{\mathrm{SLM}}}\left(\beta \tilde{x}_i^C(n) + \tilde{\theta}_i\right)\right). \tag{2}$$
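As a simple numerical sketch, the uncoupled Ikeda-map dynamics of Eq. (2) can be iterated as follows. The gray-scale constants (κ^SLM = 244.6, offset 42, β = 0.8) are taken from the text; the normalization α|E_i^0|² = 255 (full camera range) is an assumption for illustration only.

```python
import numpy as np

N = 900                       # number of SLM-pixel nodes
KAPPA_SLM = 244.6             # gray-scale-to-angle conversion (from the text)
GS = 255.0                    # 8-bit camera gray scale
BETA = 0.8                    # feedback gain used in Section 4
theta = np.full(N, 42.0)      # per-node gray-scale offset (one of the two
                              # values used later for symmetry breaking)

def ikeda_step(x_cam):
    """Map the rescaled camera state to the next network state, Eq. (2)."""
    arg = 2.0 * np.pi / KAPPA_SLM * (BETA * x_cam + theta)
    return GS * np.cos(arg) ** 2

rng = np.random.default_rng(0)
x = rng.uniform(0.0, GS, N)   # random initial camera state
for _ in range(100):
    x = ikeda_step(x)
```

Because each node's new state depends only on its own previous state, this sketch captures the nodes' local nonlinearity; the network coupling via the DOE is added next.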

Illumination wavelength, DOE (HOLOOR MS-443-650-Y-X), and MO1 were chosen such that the spacing between diffractive orders matches the pixel spacing of the SLM [14]. Therefore, upon adding the DOE to the beam path, the optical field on the camera becomes E_i^C = Σ_j^N W_{i,j}^DOE E_j, where W^DOE is the network's coupling matrix created by the DOE. As the DOE with 3×3 diffractive orders is operated in double pass, the final diffraction pattern is a convolution of the single-pass pattern with itself, on average resulting in a 5×5 diffraction pattern. Figure 1(c) shows the experimentally obtained W^DOE for a network of 900 nodes, clearly revealing the 5×5 structure. Upon inspection of the inset, one can see that the local connectivity strengths vary significantly. This is because each pixel illuminates a DOE area comparable to the period of the DOE's lowest spatial frequency; as this area shifts slightly from pixel to pixel, the intensity distribution between diffractive orders varies. This intended effect inherently creates the heterogeneous photonic network topology needed for computation [3]. Besides the reservoir-internal coupling between photonic neurons, we also couple the system to external information via the injection matrix W^inj, whose entries are uniformly drawn from [0,1]. The resulting photonic RNN's state x(n+1) is given by

$$x_i(n+1) = \alpha |E_i^0|^2 \cos^2\left[\beta\,\alpha\Big|\sum_j^N W_{i,j}^{\mathrm{DOE}} E_j(n)\Big|^2 + \gamma W_i^{\mathrm{inj}} u(n+1) + \theta_i\right]. \tag{3}$$
Information to be injected into the RNN corresponds to u(n+1), and γ is the injection gain. MATLAB is used to control all instruments and to update the network state by combining the network's internal state with the external input information. The overall update rate of the entire system is 5 Hz, limited by MATLAB controlling the SLM. Currently, the maximum network size we can realize is 2500 nodes, limited by the imaging setup's field of view and not by the concept itself. Numerically, we have investigated the scalability of photonic networks based on diffractive coupling. Optical fields were propagated using the angular spectrum of plane waves, importantly without invoking the paraxial approximation [15]. The microscope objectives were modeled based on the vectorial Debye integral representation [16]. For networks covering an area in excess of 3 mm², all optical fields relevant for coupling had a high degree of spatial overlap in the image plane. Assuming an emitter spacing of 10 μm, a network of this size would consist of 90,000 photonic nodes coupled in parallel, which demonstrates the excellent scalability of our concept.
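The coupled update of Eq. (3) can be sketched numerically. Here W_doe is a random 5×5-neighborhood coupling on a 30×30 node grid, a stand-in for the measured matrix of Fig. 1(c); the row normalization, the normalized magnitudes (α = |E_i^0|² = 1), the radian-valued offsets, and the toy input sequence are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
SIDE = 30
N = SIDE * SIDE               # 900-node network on a 30x30 grid

# Each node couples to its 5x5 spatial neighborhood with random strength,
# mimicking the double-pass DOE diffraction pattern.
W_doe = np.zeros((N, N))
for i in range(N):
    r, c = divmod(i, SIDE)
    for dr in range(-2, 3):
        for dc in range(-2, 3):
            rr, cc = r + dr, c + dc
            if 0 <= rr < SIDE and 0 <= cc < SIDE:
                W_doe[i, rr * SIDE + cc] = rng.uniform(0.0, 1.0)
W_doe /= W_doe.sum(axis=1, keepdims=True)   # keep the coupled field bounded

BETA, GAMMA = 0.8, 0.4                      # gains reported in Section 4
W_inj = rng.uniform(0.0, 1.0, N)            # injection matrix, uniform in [0,1]
theta = rng.uniform(0.0, 2.0 * np.pi, N)    # node offsets (assumed, in radians)

def rnn_step(E, u_next):
    """One update of Eq. (3): fields E(n) and input u(n+1) -> state x(n+1)."""
    coupled = np.abs(W_doe @ E) ** 2        # intensity of DOE-coupled field
    return np.cos(BETA * coupled + GAMMA * W_inj * u_next + theta) ** 2

E = rng.uniform(0.0, 1.0, N)                # initial field amplitudes
for n in range(50):
    x = rnn_step(E, np.sin(0.1 * n))        # toy scalar input sequence
    E = np.sqrt(x)                          # next field amplitude (assumed)
```

The matrix-vector product `W_doe @ E` is precisely the operation the DOE performs passively and in parallel in hardware.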

3. NETWORK READOUT WEIGHTS

The final step to information processing is to adjust the system such that it performs the desired computation, typically achieved by modifying connection weights according to some learning routine. Inspired by the RC concept [3], we constrain learning-induced weight adjustments to the readout layer. Our 900 RNN nodes are spatially distributed, and we can therefore use a simple lens (Thorlabs AC254-400-B) to image a version of the RNN's state onto an array of micro mirrors (DLi4120 XGA, pitch 13.68 μm). The micro mirrors can be flipped between ±12°, and only in one of the two orientations (+12°) does a mirror's optical signal contribute to the RNN's output at the detector (DET, Thorlabs PM100A, S150C). Our physically implemented readout weights are therefore strictly Boolean. Using the orthogonality of polarization between the field imaged on the camera and the field sent to the DMD, the RNN output becomes

$$y^{\mathrm{out}}(n+1) \propto \Big|\sum_i^N W_{i,k}^{\mathrm{DMD}} \left(E_i^0 E_i(n+1)\right)\Big|^2. \tag{4}$$
Here, k is the current learning iteration or learning epoch. In the experiment, the weight vector W_{i=1…N,k}^DMD corresponds to a square matrix, as can be seen in Fig. 1(b). The weights are not temporal modulations as in delay-system implementations of RC [13] and can therefore be implemented by passive attenuations in reflection or transmission. Such passive weights are ultimately energy efficient and typically do not introduce a bandwidth limitation. In this specific implementation, once trained, the mirrors could simply remain in their position and, if mechanically clamped, would consume no further energy. Finally, the readout of Eq. (4) is performed optically for all elements in parallel.
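The single-detector readout of Eq. (4) reduces to one masked coherent sum. A minimal sketch, where E0 and E are stand-ins for the node fields E_i^0 and E_i(n+1) (their values here are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 900
E0 = np.ones(N)                     # illumination amplitudes (normalized)
E = rng.uniform(0.0, 1.0, N)        # node fields at time n+1 (stand-in)
W_dmd = rng.integers(0, 2, N)       # Boolean micro-mirror configuration

# Only nodes whose mirror is flipped towards the detector contribute;
# the detector then measures the intensity of the summed fields.
y_out = float(np.abs(np.sum(W_dmd * E0 * E)) ** 2)
```

In hardware this sum costs no computation time: the lens and detector perform it optically, regardless of N.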

4. PHOTONIC LEARNING

The task is now to tailor W_{i=1…N,k}^DMD during k = 1, 2, …, K learning iterations such that the output y^out(n+1) produces the desired response to the input u(n+1) at k = K. Two hundred points of the chaotic Mackey–Glass (MG) sequence [3] are used as the injected training signal u(n+1). From the RNN's output we removed the first 30 data points due to their transient nature. The remaining output was inverted, its mean subtracted, and the result normalized by its standard deviation, creating ỹ_k^out(n+1). At each iteration we modify the RNN's output weights and determine the normalized mean square error (NMSE) ε_k between ỹ_k^out(n+1) and u(n+2). A modification at iteration k is rewarded if it results in ε_k < ε_{k−1}. We therefore teach our photonic RNN to perform one-step-ahead prediction via a form of reinforcement learning. The parameters of the MG sequence were identical to [17], using an integration step size of 0.1.
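A training sequence of this kind can be generated as follows. The sketch assumes the standard Mackey–Glass parameters (τ = 17, n = 10, a = 0.2, b = 0.1) and simple Euler integration; the exact parameter values of Ref. [17] are an assumption here, only the step size of 0.1 is taken from the text.

```python
import numpy as np

def mackey_glass(n_samples, dt=0.1, tau=17.0, a=0.2, b=0.1, n=10, x0=1.2):
    """Euler-integrate dx/dt = a*x(t-tau)/(1 + x(t-tau)^n) - b*x(t)."""
    delay = int(round(tau / dt))         # delay expressed in integration steps
    x = np.full(n_samples + delay, x0)   # constant history as initial condition
    for t in range(delay, n_samples + delay - 1):
        x_tau = x[t - delay]
        x[t + 1] = x[t] + dt * (a * x_tau / (1.0 + x_tau ** n) - b * x[t])
    return x[delay:]

u = mackey_glass(200)                    # 200 training points, as in the text
```

In practice, a smaller step with sub-sampling gives a smoother chaotic trajectory; the short Euler sketch only illustrates the delay-differential structure of the task.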

Starting at k = 1, the N readout weights W_{i=1…N,1}^DMD ∈ {0, 1} are randomly initialized, the 170 points of ỹ_1^out are measured, and ε_1 is determined. For the next and all following learning iterations, we select the position l_k of the readout weight to be modified according to

$$W_k^{\mathrm{select}} = \mathrm{rand}(N) \cdot W^{\mathrm{bias}}, \tag{5}$$
$$[l_k, W_k^{\mathrm{select,max}}] = \max\left(W_k^{\mathrm{select}}\right), \tag{6}$$
$$W_{l_k,k}^{\mathrm{DMD}} = \neg\left(W_{l_k,k-1}^{\mathrm{DMD}}\right). \tag{7}$$
Here, rand(N) creates a random vector with N entries, and W^bias ∈ [0,1]^N is randomly initialized at k = 2. The entry W_{l_k,k}^DMD is therefore inverted [Eq. (7)] at the position of the largest entry in W_k^select. W^bias is updated according to
$$W^{\mathrm{bias}} = \frac{1}{N} + W^{\mathrm{bias}}, \qquad W_{l_k}^{\mathrm{bias}} = 0. \tag{8}$$
Its values are therefore increased by 1/N each learning iteration, with the most recently updated entry set to zero. Consequently, W^bias biases learning away from modifying weights that have recently been optimized, which in simulations showed significantly faster learning convergence. Technically, our exploration strategy resembles a stochastic gradient descent.

After the readout weight has been inverted, we record the new error signal ϵk and calculate

$$r(k) = \begin{cases} 1 & \text{if } \epsilon_k < \epsilon_{k-1} \\ 0 & \text{if } \epsilon_k \ge \epsilon_{k-1} \end{cases}, \tag{9}$$
$$W_{l_k,k}^{\mathrm{DMD}} = r(k)\, W_{l_k,k}^{\mathrm{DMD}} + \left(1 - r(k)\right) W_{l_k,k-1}^{\mathrm{DMD}}, \tag{10}$$
where r(k) is the reward for the recent modification. On the basis of this reward, the current DMD configuration is kept if performance is improved; otherwise, we revert back to the previous, better configuration at k1. Equations (9) and (10) therefore reinforce modifications that were found beneficial.
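The full flip-and-reward loop of Eqs. (5)–(10) can be sketched in a few lines. Since the hardware error signal is not available here, a quadratic surrogate error against a hypothetical optimal Boolean weight vector stands in for the measured NMSE (an assumption for illustration only).

```python
import numpy as np

rng = np.random.default_rng(1)
N = 900
target = rng.integers(0, 2, N)      # hypothetical optimal Boolean weights

def error(w):
    """Surrogate for the measured NMSE: fraction of mismatched weights."""
    return float(np.mean((w - target) ** 2))

w = rng.integers(0, 2, N)           # random Boolean initialization, k = 1
w_bias = rng.uniform(0.0, 1.0, N)   # exploration bias, randomly initialized
eps_prev = error(w)
for k in range(2000):
    select = rng.uniform(0.0, 1.0, N) * w_bias   # Eq. (5)
    l_k = int(np.argmax(select))                 # Eq. (6): pick largest entry
    w_old = w[l_k]
    w[l_k] = 1 - w[l_k]                          # Eq. (7): invert that weight
    w_bias += 1.0 / N                            # Eq. (8): raise all biases...
    w_bias[l_k] = 0.0                            # ...and zero the visited one
    eps_k = error(w)
    if eps_k < eps_prev:                         # Eq. (9): reward r(k) = 1
        eps_prev = eps_k                         # keep the modification
    else:
        w[l_k] = w_old                           # Eq. (10): revert to k-1
```

Because a flip is only kept when it lowers the error, the error decreases monotonically, and the bias vector steers exploration towards weights that have not been tested recently.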

At this stage, we would like to highlight a significant difference between NNs emulated on digital electronic computers and our photonic hardware implementation. In our system, all connection weights are positive, and W^DMD is Boolean. This restricts the functional space available for approximating the targeted input–output transformation. As a result, first evaluations of the learning procedure for prediction of the MG series suffered from poor performance. However, we were able to mitigate this restriction by harnessing the non-monotonic slope of the cos² nonlinearity. We randomly divided the offset phases θ_i (i = 1…N) between two values, resulting in nodes with negative and positive slopes of their response function. Locally scanning the offsets for optimal performance, we chose θ_0 = 42 ≙ 0.34π and θ_0 + Δθ = 106 ≙ 0.86π, respectively, with a probability of 1−μ for θ_i = θ_0. The values for θ_0 and θ_0 + Δθ are given in units of SLM gray scale and are connected to the angular argument via Eq. (1). As the RNN states and the W^DOE entries are exclusively positive, the nonlinear transformation of nodes with θ_i = θ_0 is predominantly along a positive slope, and for θ_i = θ_0 + Δθ, along a negative slope. This enables the reinforcement learning procedure to select from nonlinear transformations with positive and negative slopes. We used feedback gain β = 0.8 and injection gain γ = 0.4; learning curves for various ratios (μ = 0.25, 0.35, 0.45, 0.5) are shown in Fig. 2(a). They reveal a strong impact of this symmetry breaking. The optimum performance for each μ is shown in Fig. 2(b). Best performance is found for an RNN operating around almost equally distributed operating points at μ = 0.45. This demonstrates that the absence of negative values in W^DMD, W^DOE, and x can be partially compensated for by incorporating nonlinear transformations with positive as well as negative slopes.
This result is of high significance for optical NNs that, e.g., motivated by robustness considerations, forgo using the optical phase to implement negative weights.

Fig. 2. (a) Learning curves for a photonic RNN with two phase offsets θ_i across the network. A fraction of (1−μ) nodes have a response function with a negative slope (Θ_0 = 0.17π), and the remaining nodes experience a response function with a positive slope (Θ_0 = 0.43π). (b) An almost symmetric distribution of phase offsets, resulting in positive and negative node responses, benefits computational performance.


We further optimized our system's performance by scanning the remaining parameters β and γ. In Fig. 3(a), we show the error convergence under optimized global conditions for a training sample size of 500 steps (blue stars). The error reduces efficiently and finally stabilizes at ε ≈ 0.013. Considering that learning is limited to Boolean readout weights, this is an excellent result. After training, the prediction performance was evaluated further on a sequence of 4500 consecutive data points that were not part of the training dataset. As indicated by the red line in the same panel, the testing error matches the training error. We can therefore conclude that our photonic RNN successfully generalized the underlying target system's properties. The excellent prediction performance can be appreciated in Fig. 3(b). The data belonging to the left y axis (blue line) show the recorded output power, while on the right y axis (red dots) we show the normalized prediction target signal. A difference between the two is hardly visible, and the prediction error ε (yellow dashed line) is small. Down-sampling the injected signal by a factor of 3 creates conditions identical to [17,18]. Under these conditions, our error (ε = 0.042) is larger by a factor of 2.2 relative to a delay RC based on a semiconductor laser [17] and by a factor of 6.5 relative to a Mach–Zehnder-modulator-based setup [18]. These comparisons have to be evaluated in light of the significantly increased level of hardware implementation in our current setup. In [17,18], readout weights were applied digitally in an off-line procedure using double-precision weights. In [18], a strong impact of the digitization resolution on the computational performance was identified, suggesting that ε can be significantly reduced by increasing the resolution of W^DMD.

Fig. 3. (a) Learning performance at optimal parameters (β=0.8, γ=0.4, μ=0.45). (b) Photonic RNN’s predicted output in μW (blue line) can hardly be differentiated from the prediction target (red dots). Prediction error ϵ is given by the yellow dashed data.


5. CONCLUSION

We demonstrated a photonic RNN consisting of hundreds of photonic nonlinear nodes and the implementation of photonic reinforcement learning. Using a simple Boolean-valued readout implemented with a DMD, we trained our system to predict the chaotic MG sequence. The resulting prediction error is very low despite the Boolean readout weights. Recently, a random weight update for photonic reinforcement learning has been demonstrated based on ultra-fast optical processes [19]. Importantly, we have realized a fully parallel set of photonic readout weights based on a DMD, an off-the-shelf technology with a wide range of commercial and scientific applications [20].

In our work, we demonstrate how symmetry breaking inside the RNN can partially compensate for the exclusively positive intensities in our analog NN system. These results resolve a complication of general importance to NNs implemented in analog hardware. Hardware-implemented networks and readout weights based on physical devices open the door to a new class of experiments, i.e., evaluating the robustness and efficiency of learning strategies in fully implemented analog NNs. The final step, a photonic realization of the input, should be straightforward, as it only requires a complex spatial distribution of the input information. An additional development crucial for the relevance of photonic NNs is the realization of high-dimensional outputs. In our spatio-temporal RNN, one could employ, individually or even simultaneously, spatial and spectral multiplexing of the output. Moreover, our concept is not limited to the reported slow opto-electronic system: extremely fast all-optical systems can be realized employing the same architecture, since we intentionally implemented a 4f setup to allow for self-coupling [14]. Finally, after our proof of principle, other and more advanced learning strategies should be investigated.

Funding

NeuroQNet project, Volkswagen Foundation; Agence Nationale de la Recherche (ANR) (ANR-11-LABX-0001-0); Centre National de la Recherche Scientifique (CNRS) (PICS07300).

Acknowledgment

The authors would like to thank Christian Markus Dietrich for valuable contributions to earlier versions of the setup. The authors acknowledge the support of the Region Bourgogne Franche-Comté.

REFERENCES

1. Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521, 436–444 (2015).

2. A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” Optica 4, 1117–1125 (2017).

3. H. Jaeger and H. Haas, “Harnessing nonlinearity: predicting chaotic systems and saving energy in wireless communication,” Science 304, 78–80 (2004).

4. K. Vandoorne, W. Dierckx, B. Schrauwen, D. Verstraeten, R. Baets, P. Bienstman, and J. Van Campenhout, “Toward optical signal processing using photonic reservoir computing,” Opt. Express 16, 11182–11192 (2008).

5. L. Appeltant, M. C. Soriano, G. Van der Sande, J. Danckaert, S. Massar, J. Dambre, B. Schrauwen, C. R. Mirasso, and I. Fischer, “Information processing using a single dynamical node as complex system,” Nat. Commun. 2, 468 (2011).

6. K. Wagner and D. Psaltis, “Multilayer optical learning networks,” Appl. Opt. 26, 5061–5076 (1987).

7. C. Denz, Optical Neural Networks (Springer Vieweg, 1998).

8. F. Duport, B. Schneider, A. Smerieri, M. Haelterman, and S. Massar, “All-optical reservoir computing,” Opt. Express 20, 22783–22795 (2012).

9. Y. Paquot, F. Duport, A. Smerieri, J. Dambre, B. Schrauwen, M. Haelterman, and S. Massar, “Optoelectronic reservoir computing,” Sci. Rep. 2, 287 (2012).

10. L. Larger, M. C. Soriano, D. Brunner, L. Appeltant, J. M. Gutierrez, L. Pesquera, C. R. Mirasso, and I. Fischer, “Photonic information processing beyond Turing: an optoelectronic implementation of reservoir computing,” Opt. Express 20, 3241–3249 (2012).

11. D. Brunner, M. C. Soriano, C. R. Mirasso, and I. Fischer, “Parallel photonic information processing at gigabyte per second data rates using transient states,” Nat. Commun. 4, 1364 (2013).

12. Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljacic, “Deep learning with coherent nanophotonic circuits,” Nat. Photonics 11, 441–446 (2017).

13. P. Antonik, M. Haelterman, and S. Massar, “Online training for high-performance analogue readout layers in photonic reservoir computers,” Cognit. Comput. 9, 297–306 (2017).

14. D. Brunner and I. Fischer, “Reconfigurable semiconductor laser networks based on diffractive coupling,” Opt. Lett. 40, 3854–3857 (2015).

15. J. Goodman, Introduction to Fourier Optics (W. H. Freeman, 2017).

16. M. Leutenegger, R. Rao, R. A. Leitgeb, and T. Lasser, “Fast focus field calculations,” Opt. Express 14, 11277–11291 (2006).

17. J. Bueno, D. Brunner, M. Soriano, and I. Fischer, “Conditions for reservoir computing performance using semiconductor lasers with delayed optical feedback,” Opt. Express 25, 2401–2412 (2017).

18. S. Ortín, M. C. Soriano, L. Pesquera, D. Brunner, D. San-Martín, I. Fischer, C. R. Mirasso, and J. M. Gutiérrez, “A unified framework for reservoir computing and extreme learning machines based on a single time-delayed neuron,” Sci. Rep. 5, 14945 (2015).

19. M. Naruse, Y. Terashima, A. Uchida, and S.-J. Kim, “Ultrafast photonic reinforcement learning based on laser chaos,” Sci. Rep. 7, 8772 (2017).

20. D. B. Phillips, M.-J. Sun, J. M. Taylor, M. P. Edgar, S. M. Barnett, G. M. Gibson, and M. J. Padgett, “Adaptive foveated single-pixel imaging with dynamic supersampling,” Sci. Adv. 3, e1601782 (2017).

[Crossref]

Y. Paquot, F. Duport, A. Smerieri, J. Dambre, B. Schrauwen, M. Haelterman, and S. Massar, “Optoelectronic reservoir computing,” Sci. Rep. 2, 287 (2012).
[Crossref]

Edgar, M. P.

D. B. Phillips, M.-J. Sun, J. M. Taylor, M. P. Edgar, S. M. Barnett, G. M. Gibson, and M. J. Padgett, “Adaptive foveated single-pixel imaging with dynamic supersampling,” Sci. Adv. 3, e1601782 (2017).
[Crossref]

Englund, D.

Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljacic, “Deep learning with coherent nanophotonic circuits,” Nat. Photonics 11, 441–446 (2017).
[Crossref]

Fischer, I.

J. Bueno, D. Brunner, M. Soriano, and I. Fischer, “Conditions for reservoir computing performance using semiconductor lasers with delayed optical feedback,” Opt. Express 25, 2401–2412 (2017).
[Crossref]

M. C. Ortín, S. Soriano, L. Pesquera, D. Brunner, D. San-Martín, I. Fischer, C. R. Mirasso, and J. M. Gutiérrez, “A unified framework for reservoir computing and extreme learning machines based on a single time-delayed neuron,” Sci. Rep. 5, 14945 (2015).
[Crossref]

D. Brunner and I. Fischer, “Reconfigurable semiconductor laser networks based on diffractive coupling,” Opt. Lett. 40, 3854–3857 (2015).
[Crossref]

D. Brunner, M. C. Soriano, C. R. Mirasso, and I. Fischer, “Parallel photonic information processing at gigabyte per second data rates using transient states,” Nat. Commun. 4, 1364 (2013).
[Crossref]

L. Larger, M. C. Soriano, D. Brunner, L. Appeltant, J. M. Gutierrez, L. Pesquera, C. R. Mirasso, and I. Fischer, “Photonic information processing beyond Turing: an optoelectronic implementation of reservoir computing,” Opt. Express 20, 3241–3249 (2012).
[Crossref]

L. Appeltant, M. C. Soriano, G. Van der Sande, J. Danckaert, S. Massar, J. Dambre, B. Schrauwen, C. R. Mirasso, and I. Fischer, “Information processing using a single dynamical node as complex system,” Nat. Commun. 2, 468 (2011).
[Crossref]

Gibson, G. M.

D. B. Phillips, M.-J. Sun, J. M. Taylor, M. P. Edgar, S. M. Barnett, G. M. Gibson, and M. J. Padgett, “Adaptive foveated single-pixel imaging with dynamic supersampling,” Sci. Adv. 3, e1601782 (2017).
[Crossref]

Goodman, J.

J. Goodman, Introduction to Fourier Optics (W. H. Freeman, 2017).

Gutierrez, J. M.

Gutiérrez, J. M.

M. C. Ortín, S. Soriano, L. Pesquera, D. Brunner, D. San-Martín, I. Fischer, C. R. Mirasso, and J. M. Gutiérrez, “A unified framework for reservoir computing and extreme learning machines based on a single time-delayed neuron,” Sci. Rep. 5, 14945 (2015).
[Crossref]

Haas, H.

H. Jaeger and H. Haas, “Harnessing nonlinearity: predicting chaotic systems and saving energy in wireless communication,” Science 304, 78–80 (2004).
[Crossref]

Haelterman, M.

P. Antonik, M. Haelterman, and S. Massar, “Online training for high-performance analogue readout layers in photonic reservoir computers,” Cognit. Comput. 9, 297–306 (2017).
[Crossref]

Y. Paquot, F. Duport, A. Smerieri, J. Dambre, B. Schrauwen, M. Haelterman, and S. Massar, “Optoelectronic reservoir computing,” Sci. Rep. 2, 287 (2012).
[Crossref]

F. Duport, B. Schneider, A. Smerieri, M. Haelterman, and S. Massar, “All-optical reservoir computing,” Opt. Express 20, 22783–22795 (2012).
[Crossref]

Harris, N. C.

Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljacic, “Deep learning with coherent nanophotonic circuits,” Nat. Photonics 11, 441–446 (2017).
[Crossref]

Hinton, G.

Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521, 436–444 (2015).
[Crossref]

Hochberg, M.

Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljacic, “Deep learning with coherent nanophotonic circuits,” Nat. Photonics 11, 441–446 (2017).
[Crossref]

Jaeger, H.

H. Jaeger and H. Haas, “Harnessing nonlinearity: predicting chaotic systems and saving energy in wireless communication,” Science 304, 78–80 (2004).
[Crossref]

Kim, S.-J.

M. Naruse, Y. Terashima, A. Uchida, and S.-J. Kim, “Ultrafast photonic reinforcement learning based on laser chaos,” Sci. Rep. 7, 8772 (2017).
[Crossref]

Larger, L.

Larochelle, H.

Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljacic, “Deep learning with coherent nanophotonic circuits,” Nat. Photonics 11, 441–446 (2017).
[Crossref]

Lasser, T.

LeCun, Y.

Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521, 436–444 (2015).
[Crossref]

Lee, J.

Leitgeb, R. A.

Leutenegger, M.

Li, S.

Massar, S.

P. Antonik, M. Haelterman, and S. Massar, “Online training for high-performance analogue readout layers in photonic reservoir computers,” Cognit. Comput. 9, 297–306 (2017).
[Crossref]

Y. Paquot, F. Duport, A. Smerieri, J. Dambre, B. Schrauwen, M. Haelterman, and S. Massar, “Optoelectronic reservoir computing,” Sci. Rep. 2, 287 (2012).
[Crossref]

F. Duport, B. Schneider, A. Smerieri, M. Haelterman, and S. Massar, “All-optical reservoir computing,” Opt. Express 20, 22783–22795 (2012).
[Crossref]

L. Appeltant, M. C. Soriano, G. Van der Sande, J. Danckaert, S. Massar, J. Dambre, B. Schrauwen, C. R. Mirasso, and I. Fischer, “Information processing using a single dynamical node as complex system,” Nat. Commun. 2, 468 (2011).
[Crossref]

Mirasso, C. R.

M. C. Ortín, S. Soriano, L. Pesquera, D. Brunner, D. San-Martín, I. Fischer, C. R. Mirasso, and J. M. Gutiérrez, “A unified framework for reservoir computing and extreme learning machines based on a single time-delayed neuron,” Sci. Rep. 5, 14945 (2015).
[Crossref]

D. Brunner, M. C. Soriano, C. R. Mirasso, and I. Fischer, “Parallel photonic information processing at gigabyte per second data rates using transient states,” Nat. Commun. 4, 1364 (2013).
[Crossref]

L. Larger, M. C. Soriano, D. Brunner, L. Appeltant, J. M. Gutierrez, L. Pesquera, C. R. Mirasso, and I. Fischer, “Photonic information processing beyond Turing: an optoelectronic implementation of reservoir computing,” Opt. Express 20, 3241–3249 (2012).
[Crossref]

L. Appeltant, M. C. Soriano, G. Van der Sande, J. Danckaert, S. Massar, J. Dambre, B. Schrauwen, C. R. Mirasso, and I. Fischer, “Information processing using a single dynamical node as complex system,” Nat. Commun. 2, 468 (2011).
[Crossref]

Naruse, M.

M. Naruse, Y. Terashima, A. Uchida, and S.-J. Kim, “Ultrafast photonic reinforcement learning based on laser chaos,” Sci. Rep. 7, 8772 (2017).
[Crossref]

Ortín, M. C.

M. C. Ortín, S. Soriano, L. Pesquera, D. Brunner, D. San-Martín, I. Fischer, C. R. Mirasso, and J. M. Gutiérrez, “A unified framework for reservoir computing and extreme learning machines based on a single time-delayed neuron,” Sci. Rep. 5, 14945 (2015).
[Crossref]

Padgett, M. J.

D. B. Phillips, M.-J. Sun, J. M. Taylor, M. P. Edgar, S. M. Barnett, G. M. Gibson, and M. J. Padgett, “Adaptive foveated single-pixel imaging with dynamic supersampling,” Sci. Adv. 3, e1601782 (2017).
[Crossref]

Paquot, Y.

Y. Paquot, F. Duport, A. Smerieri, J. Dambre, B. Schrauwen, M. Haelterman, and S. Massar, “Optoelectronic reservoir computing,” Sci. Rep. 2, 287 (2012).
[Crossref]

Pesquera, L.

M. C. Ortín, S. Soriano, L. Pesquera, D. Brunner, D. San-Martín, I. Fischer, C. R. Mirasso, and J. M. Gutiérrez, “A unified framework for reservoir computing and extreme learning machines based on a single time-delayed neuron,” Sci. Rep. 5, 14945 (2015).
[Crossref]

L. Larger, M. C. Soriano, D. Brunner, L. Appeltant, J. M. Gutierrez, L. Pesquera, C. R. Mirasso, and I. Fischer, “Photonic information processing beyond Turing: an optoelectronic implementation of reservoir computing,” Opt. Express 20, 3241–3249 (2012).
[Crossref]

Phillips, D. B.

D. B. Phillips, M.-J. Sun, J. M. Taylor, M. P. Edgar, S. M. Barnett, G. M. Gibson, and M. J. Padgett, “Adaptive foveated single-pixel imaging with dynamic supersampling,” Sci. Adv. 3, e1601782 (2017).
[Crossref]

Prabhu, M.

Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljacic, “Deep learning with coherent nanophotonic circuits,” Nat. Photonics 11, 441–446 (2017).
[Crossref]

Psaltis, D.

Rao, R.

San-Martín, D.

M. C. Ortín, S. Soriano, L. Pesquera, D. Brunner, D. San-Martín, I. Fischer, C. R. Mirasso, and J. M. Gutiérrez, “A unified framework for reservoir computing and extreme learning machines based on a single time-delayed neuron,” Sci. Rep. 5, 14945 (2015).
[Crossref]

Schneider, B.

Schrauwen, B.

Y. Paquot, F. Duport, A. Smerieri, J. Dambre, B. Schrauwen, M. Haelterman, and S. Massar, “Optoelectronic reservoir computing,” Sci. Rep. 2, 287 (2012).
[Crossref]

L. Appeltant, M. C. Soriano, G. Van der Sande, J. Danckaert, S. Massar, J. Dambre, B. Schrauwen, C. R. Mirasso, and I. Fischer, “Information processing using a single dynamical node as complex system,” Nat. Commun. 2, 468 (2011).
[Crossref]

K. Vandoorne, W. Dierckx, B. Schrauwen, D. Verstraeten, R. Baets, P. Bienstman, and J. Van Campenhout, “Toward optical signal processing using photonic reservoir computing,” Opt. Express 16, 11182–11192 (2008).
[Crossref]

Shen, Y.

Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljacic, “Deep learning with coherent nanophotonic circuits,” Nat. Photonics 11, 441–446 (2017).
[Crossref]

Sinha, A.

Skirlo, S.

Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljacic, “Deep learning with coherent nanophotonic circuits,” Nat. Photonics 11, 441–446 (2017).
[Crossref]

Smerieri, A.

F. Duport, B. Schneider, A. Smerieri, M. Haelterman, and S. Massar, “All-optical reservoir computing,” Opt. Express 20, 22783–22795 (2012).
[Crossref]

Y. Paquot, F. Duport, A. Smerieri, J. Dambre, B. Schrauwen, M. Haelterman, and S. Massar, “Optoelectronic reservoir computing,” Sci. Rep. 2, 287 (2012).
[Crossref]

Soljacic, M.

Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljacic, “Deep learning with coherent nanophotonic circuits,” Nat. Photonics 11, 441–446 (2017).
[Crossref]

Soriano, M.

Soriano, M. C.

D. Brunner, M. C. Soriano, C. R. Mirasso, and I. Fischer, “Parallel photonic information processing at gigabyte per second data rates using transient states,” Nat. Commun. 4, 1364 (2013).
[Crossref]

L. Larger, M. C. Soriano, D. Brunner, L. Appeltant, J. M. Gutierrez, L. Pesquera, C. R. Mirasso, and I. Fischer, “Photonic information processing beyond Turing: an optoelectronic implementation of reservoir computing,” Opt. Express 20, 3241–3249 (2012).
[Crossref]

L. Appeltant, M. C. Soriano, G. Van der Sande, J. Danckaert, S. Massar, J. Dambre, B. Schrauwen, C. R. Mirasso, and I. Fischer, “Information processing using a single dynamical node as complex system,” Nat. Commun. 2, 468 (2011).
[Crossref]

Soriano, S.

M. C. Ortín, S. Soriano, L. Pesquera, D. Brunner, D. San-Martín, I. Fischer, C. R. Mirasso, and J. M. Gutiérrez, “A unified framework for reservoir computing and extreme learning machines based on a single time-delayed neuron,” Sci. Rep. 5, 14945 (2015).
[Crossref]

Sun, M.-J.

D. B. Phillips, M.-J. Sun, J. M. Taylor, M. P. Edgar, S. M. Barnett, G. M. Gibson, and M. J. Padgett, “Adaptive foveated single-pixel imaging with dynamic supersampling,” Sci. Adv. 3, e1601782 (2017).
[Crossref]

Sun, X.

Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljacic, “Deep learning with coherent nanophotonic circuits,” Nat. Photonics 11, 441–446 (2017).
[Crossref]

Taylor, J. M.

D. B. Phillips, M.-J. Sun, J. M. Taylor, M. P. Edgar, S. M. Barnett, G. M. Gibson, and M. J. Padgett, “Adaptive foveated single-pixel imaging with dynamic supersampling,” Sci. Adv. 3, e1601782 (2017).
[Crossref]

Terashima, Y.

M. Naruse, Y. Terashima, A. Uchida, and S.-J. Kim, “Ultrafast photonic reinforcement learning based on laser chaos,” Sci. Rep. 7, 8772 (2017).
[Crossref]

Uchida, A.

M. Naruse, Y. Terashima, A. Uchida, and S.-J. Kim, “Ultrafast photonic reinforcement learning based on laser chaos,” Sci. Rep. 7, 8772 (2017).
[Crossref]

Van Campenhout, J.

Van der Sande, G.

L. Appeltant, M. C. Soriano, G. Van der Sande, J. Danckaert, S. Massar, J. Dambre, B. Schrauwen, C. R. Mirasso, and I. Fischer, “Information processing using a single dynamical node as complex system,” Nat. Commun. 2, 468 (2011).
[Crossref]

Vandoorne, K.

Verstraeten, D.

Wagner, K.

Zhao, S.

Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljacic, “Deep learning with coherent nanophotonic circuits,” Nat. Photonics 11, 441–446 (2017).
[Crossref]

Appl. Opt. (1)

Cognit. Comput. (1)

P. Antonik, M. Haelterman, and S. Massar, “Online training for high-performance analogue readout layers in photonic reservoir computers,” Cognit. Comput. 9, 297–306 (2017).
[Crossref]

Nat. Commun. (2)

D. Brunner, M. C. Soriano, C. R. Mirasso, and I. Fischer, “Parallel photonic information processing at gigabyte per second data rates using transient states,” Nat. Commun. 4, 1364 (2013).
[Crossref]

L. Appeltant, M. C. Soriano, G. Van der Sande, J. Danckaert, S. Massar, J. Dambre, B. Schrauwen, C. R. Mirasso, and I. Fischer, “Information processing using a single dynamical node as complex system,” Nat. Commun. 2, 468 (2011).
[Crossref]

Nat. Photonics (1)

Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljacic, “Deep learning with coherent nanophotonic circuits,” Nat. Photonics 11, 441–446 (2017).
[Crossref]

Nature (1)

Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521, 436–444 (2015).
[Crossref]

Opt. Express (5)

Opt. Lett. (1)

Optica (1)

Sci. Adv. (1)

D. B. Phillips, M.-J. Sun, J. M. Taylor, M. P. Edgar, S. M. Barnett, G. M. Gibson, and M. J. Padgett, “Adaptive foveated single-pixel imaging with dynamic supersampling,” Sci. Adv. 3, e1601782 (2017).
[Crossref]

Sci. Rep. (3)

Y. Paquot, F. Duport, A. Smerieri, J. Dambre, B. Schrauwen, M. Haelterman, and S. Massar, “Optoelectronic reservoir computing,” Sci. Rep. 2, 287 (2012).
[Crossref]

M. C. Ortín, S. Soriano, L. Pesquera, D. Brunner, D. San-Martín, I. Fischer, C. R. Mirasso, and J. M. Gutiérrez, “A unified framework for reservoir computing and extreme learning machines based on a single time-delayed neuron,” Sci. Rep. 5, 14945 (2015).
[Crossref]

M. Naruse, Y. Terashima, A. Uchida, and S.-J. Kim, “Ultrafast photonic reinforcement learning based on laser chaos,” Sci. Rep. 7, 8772 (2017).
[Crossref]

Science (1)

H. Jaeger and H. Haas, “Harnessing nonlinearity: predicting chaotic systems and saving energy in wireless communication,” Science 304, 78–80 (2004).
[Crossref]

Other (2)

C. Denz, Optical Neural Networks (Springer Vieweg, 1998).

J. Goodman, Introduction to Fourier Optics (W. H. Freeman, 2017).



Figures (3)

Fig. 1. (a) Schematic illustration of a recurrent neural network. (b) Experimental setup. The NN state encoded on the SLM is imaged onto the camera (CAM) through a polarizing beam splitter (PBS) and a diffractive optical element (DOE). In this way, we realize a recurrently coupled network of Ikeda maps. A digital micromirror device (DMD) creates a spatially modulated image of the SLM's state. The integrated state, obtained by superimposing the modulated intensities, is detected, creating the photonic RNN's output. (c) RNN's coupling matrix established by the DOE, with the inset showing a zoom into a smaller region.

Fig. 2. (a) Learning curves for a photonic RNN with two phase offsets ϕ_i across the network. A fraction of (1 − μ) nodes have a response function with a negative slope (Θ_0 = 0.17π), while the remaining nodes experience a response function with a positive slope (Θ_0 = 0.43π). (b) An almost symmetric distribution of phase offsets, resulting in positive and negative node responses, benefits computational performance.

Fig. 3. (a) Learning performance at optimal parameters (β = 0.8, γ = 0.4, μ = 0.45). (b) The photonic RNN's predicted output in μW (blue line) can hardly be distinguished from the prediction target (red dots). The prediction error ϵ is shown as the yellow dashed line.
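The local coupling matrix of Fig. 1(c) can be illustrated numerically. Below is a minimal sketch, not a model of the actual DOE: each node on the 45 × 45 grid is coupled to its neighbors within one diffractive order, and the uniform coupling `strength`, the function name `doe_coupling`, and its parameters are all illustrative assumptions (in the experiment the diffractive orders carry differing amplitudes and phases).

```python
import numpy as np

def doe_coupling(grid, orders=1, strength=0.25):
    """Toy banded coupling matrix: each node on a grid x grid array couples
    to the nodes within +/- `orders` diffractive orders, as sketched in
    Fig. 1(c). Uniform `strength` is a simplification of the real DOE."""
    n = grid * grid
    W = np.zeros((n, n))
    for i in range(grid):
        for j in range(grid):
            for di in range(-orders, orders + 1):
                for dj in range(-orders, orders + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < grid and 0 <= jj < grid:
                        W[i * grid + j, ii * grid + jj] = strength
    return W

W = doe_coupling(45)  # 45 x 45 = 2025 nodes, as in the experiment
```

With one diffractive order, each interior node couples to its 3 × 3 neighborhood (9 nodes, itself included), while edge and corner nodes couple to fewer, reproducing the banded structure visible in the measured matrix.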

Equations (10)


$$E_i(n) = E_i^0 \cos\!\big(2\pi\kappa_{\mathrm{SLM}}\,(x_i^{\mathrm{SLM}}(n) + \theta_i^0)\big),$$

$$x_i(n+1) = \alpha\,|E_i^0|^2 \cos^2\!\big(2\pi\kappa_{\mathrm{SLM}}\,(\beta\,\tilde{x}_i^{C}(n) + \tilde{\theta}_i)\big),$$

$$x_i(n+1) = \alpha\,|E_i^0|^2 \cos^2\!\Big[\beta\cdot\alpha\,\Big|\sum_j^N W_{i,j}^{\mathrm{DOE}} E_j(n)\Big|^2 + \gamma\,W_i^{\mathrm{inj}} u(n+1) + \theta_i\Big],$$

$$y^{\mathrm{out}}(n+1) \propto \Big|\sum_i^N W_{i,k}^{\mathrm{DMD}}\big(E_i^0\,E_i(n+1)\big)\Big|^2,$$

$$W_k^{\mathrm{select}} = \mathrm{rand}(N)\cdot W^{\mathrm{bias}},$$

$$[\,l_k,\,W_k^{\mathrm{select,max}}\,] = \max\!\big(W_k^{\mathrm{select}}\big),$$

$$W_{l_k,k}^{\mathrm{DMD}} = \neg\big(W_{l_k,k-1}^{\mathrm{DMD}}\big),$$

$$W^{\mathrm{bias}} = \frac{1}{N} + W^{\mathrm{bias}}, \qquad W_{l_k}^{\mathrm{bias}} = 0,$$

$$r(k) = \begin{cases} 1 & \text{if } \epsilon_k < \epsilon_{k-1} \\ 0 & \text{if } \epsilon_k \ge \epsilon_{k-1}, \end{cases}$$

$$W_{l_k,k}^{\mathrm{DMD}} = r(k)\,W_{l_k,k}^{\mathrm{DMD}} + \big(1 - r(k)\big)\,W_{l_k,k-1}^{\mathrm{DMD}}.$$
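The selection and reward equations above describe a greedy reinforcement rule: a bias-weighted random draw picks one Boolean DMD mirror, its state is inverted, and the inversion is kept only if the output error improved. The sketch below simulates this loop on a toy network; it is a minimal numerical illustration under stated assumptions, not the experimental system. The random matrix `W_doe` stands in for the diffractive coupling, the intensity map approximates the cos² node response, and the global output rescaling inside `nmse` is our own simplification of the detector calibration.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64                                # nodes (2025 in the experiment; small here for speed)
beta, gamma, alpha = 0.8, 0.4, 1.0    # feedback, injection, scaling (beta, gamma as in Fig. 3)
theta = rng.uniform(0, np.pi, N)      # per-node phase offsets theta_i

W_doe = rng.normal(0, 1.0 / np.sqrt(N), (N, N))  # stand-in for the DOE coupling matrix
W_inj = rng.uniform(-1, 1, N)                    # input injection weights W_i^inj

def step(x, u):
    # x_i(n+1) = alpha * cos^2( beta * (W_doe x)_i + gamma * W_inj_i * u + theta_i )
    return alpha * np.cos(beta * (W_doe @ x) + gamma * W_inj * u + theta) ** 2

# drive the network with a scalar input sequence and record its states
T = 400
u = np.sin(0.1 * np.arange(T + 1))
x = np.zeros(N)
states = np.empty((T, N))
for n in range(T):
    x = step(x, u[n])
    states[n] = x

target = u[1:T + 1]                   # toy one-step prediction target

def nmse(w_dmd):
    y = states @ w_dmd                # Boolean readout: sum of selected node intensities
    y = y * (target.std() / (y.std() + 1e-12))   # simplified global output scaling
    return np.mean((y - target) ** 2) / target.var()

# greedy reinforcement learning: flip one Boolean readout weight per epoch k,
# keep the flip only if the error epsilon_k improved (reward r(k) = 1)
w = rng.integers(0, 2, N).astype(float)
bias = np.ones(N)                     # exploration bias W^bias
err = nmse(w)
errors = [err]
for k in range(300):
    select = rng.random(N) * bias     # W_k^select = rand(N) * W^bias
    l = int(np.argmax(select))        # l_k: index of the mirror to flip
    w_new = w.copy()
    w_new[l] = 1.0 - w_new[l]         # Boolean NOT of the selected DMD mirror
    bias += 1.0 / N                   # raise exploration bias everywhere...
    bias[l] = 0.0                     # ...and reset it for the just-tested node
    e_new = nmse(w_new)
    if e_new < err:                   # r(k) = 1: keep the flip
        w, err = w_new, e_new
    errors.append(err)                # r(k) = 0: revert (w unchanged)
```

Because a rejected flip is always reverted, the recorded error sequence is non-increasing by construction, mirroring the monotone learning curves of Figs. 2 and 3.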
