Abstract

Photonic brain-inspired platforms are emerging as novel analog computing devices, enabling fast and energy-efficient operations for machine learning. These artificial neural networks generally require tailored optical elements, such as integrated photonic circuits, engineered diffractive layers, nanophotonic materials, or time-delay schemes, which are challenging to train or stabilize. Here, we present a neuromorphic photonic scheme, i.e., the photonic extreme learning machine, which can be implemented simply by using an optical encoder and coherent wave propagation in free space. We realize the concept through spatial light modulation of a laser beam, with the far field acting as a feature mapping space. We experimentally demonstrate learning from data on various classification and regression tasks, achieving accuracies comparable with digital kernel machines and deep photonic networks. Our findings point to an optical machine-learning device that is easy to train, energy efficient, scalable, and free of fabrication constraints. The scheme can be generalized to a plethora of photonic systems, opening the route to real-time neuromorphic processing of optical data.

© 2021 Chinese Laser Press

1. INTRODUCTION

Artificial neural networks excel in learning from data to perform classification, regression, and prediction tasks of vital importance [1]. As data information content increases, simulating these models becomes computationally expensive on conventional computers, making specialized signal processors crucial for intelligent systems. Training large networks is also costly in terms of energy consumption and carbon footprint [2]. Photonics provides a valuable alternative toward sustainable computing technologies. For this reason, machine learning through photonic components is gaining enormous interest [3]. In fact, many mathematical functions, which enable complex feature extraction from data, find a native implementation on optical platforms. Pioneering attempts using photosensitive masks [4] and volume holograms [5,6] have recently been developed into coherent optical neural networks that promise fast and energy-efficient optical computing [7–16]. These schemes exploit optical units such as nanophotonic circuits [7], on-chip frequency combs [14–16], and spatial light modulators to perform matrix multiplications in parallel [17] or to carry out convolutional layers [18–20]. Training consists in adjusting the optical response of each physical node [21], also by adopting external optical signals [22], which is demanding [23]. Moreover, photonic neural networks based on nanofabrication still have a considerable energy impact. A general and promising idea to overcome the issue is to adapt machine-learning paradigms and make them well suited to optical platforms. In this paper, we pursue this approach by constructing an easy-to-train optical architecture that requires only free-space optical propagation.

Photonic architectures that do not need control of the entire network are particularly attractive. A remarkable method for their design and training is reservoir computing [24–26], in which the nonlinear dynamics of a recurrent system processes data, and training occurs only on the readout layer. Optical reservoir computing has demonstrated striking performance on dynamical series prediction by using delay-based setups [27–30], laser networks [31], multimode waveguides [32], and random media [33]. Since several interesting complex systems can be exploited as physical reservoirs, the inverse approach is also appealing, i.e., the scheme can be trained for learning dynamical properties of the reservoir itself [34].

Extreme learning machines (ELMs) [35,36], or closely related schemes based on random neural networks [37,38], support vector machines [39] and kernel methods [40], are a powerful paradigm in which only a subset of connections is trained. ELMs perform comparably with basic deep and recurrent neural networks on the majority of learning tasks [36]. The basic mechanism is to use the network to establish a nonlinear mapping between the data set and a high-dimensional feature space, where a properly trained classifier performs the separation. Training occurs at the readout layer; however, at variance with reservoir computing, ELMs have no recurrences in the connection topology and do not possess dynamical memory. In optics, an interesting case of this general approach has been implemented by using speckle patterns emerging from multiple scattering [41] and multimode fibers [42] as a feature mapping. Although in principle many optical phenomena can form the building block of the architecture [43,44], the general potential of the ELM framework for photonic computing remains largely unexplored.

Here, we propose a photonic extreme learning machine (PELM) for performing classification and regression tasks using coherent light. We find that a minimal optical setting composed of an input element, which encodes embedded input data into the optical field, a wave layer, corresponding to propagation in free space, and a nonlinear detector, enables simple and efficient learning. The encoding method plays a crucial role in determining the learning ability of the machine. The architecture is experimentally implemented via phase modulation of a visible laser beam by a spatial light modulator (SLM). We benchmark the realized device on problems of different classes, achieving performance comparable with digital ELMs. These include a classification accuracy exceeding 92% on the well-known MNIST database, which surpasses fabricated diffractive neural networks [8]; further, it is comparable with convolutional artificial networks that employ photonic accelerators [15,16,19]. Given the massive parallelism provided by spatial optics and the ease of training, our approach is ideal for big data, i.e., extensive data sets with large dimension samples. Our PELM is intrinsically stable and adaptable over time, as it does not require engineered or sensitive components. It can potentially operate in dynamic environments as an intelligent device performing on-the-fly optical signal processing.

2. PELM ARCHITECTURE

An ELM is a feed-forward neural network in which a set of input signals is connected to at least one output node by a large structure of generalized artificial neurons [35]. An outline of the architecture is illustrated in Fig. 1(a). The hidden neurons form the computing reservoir. Unlike neurons in a deep neural network, they do not require any training, and their individual responses can be unknown to the user [36]. Given an input data set with N samples, X = [X_1, …, X_N], the reservoir furnishes a hidden-layer output matrix H = [g(X_1), …, g(X_N)], where g(x) is a nonlinear feature mapping. H is linearly combined with a set of weights β to give the output Y = Y(H; β) = Hβ, which performs the classification. To train an ELM classifier, the optimal output weights β_i are the sole parameters that need to be determined, a problem that can be solved via ridge regression. Details on this technique are reported in Appendix A.


Fig. 1. Schematic architecture of the photonic extreme learning machine (PELM). (a) General ELM scheme with the input data set X, which is fed into a reservoir and gives out the hidden-layer output matrix H. The trainable readout weights βi determine the network output Y=Y(H;β). (b) In the optical case, the input (a mushroom in the example) is encoded on the optical field, and hidden neurons have been replaced by modes that interact during propagation. Training of the photonic classifier is enabled by M detection channels.


We transfer the ELM principle into the optical domain by considering the three-layer structure illustrated in Fig. 1(b). In the encoding layer, the input vectors X_i are embedded into the phase and/or amplitude of the field by an optical modulator. The reservoir consists of linear optical propagation and nonlinear detection of the final state. The output is recovered in the readout layer, where weights β_i are applied to M measured channels. The β set is trained by solving the regression problem via digital hardware. For an extensive training set, with size larger than the number of channels (N > M), an effective solution reads as [36]

β = (HᵀH + cI)⁻¹HᵀT,  (1)
where T indicates the training targets, I the identity matrix, and c is a regularizing constant. H contains projections of input samples in the feature space, where data are separated according to their features. This mapping is the key ingredient for learning and occurs entirely by means of optical propagation. Mapping functions can be constructed starting from the elementary components of the optical setup. For a general PELM based on the scheme in Fig. 1(b), we construct the feature mapping
H_ji = g_i(X_j) = G[M·p(X_j)·q(W)]_i,  (2)
which describes linear propagation of input optical data and their nonlinear detection. In Eq. (2), G is a detection function, M is the complex transfer matrix modelling field transmission, p and q are two encoding functions, and W is a fixed character of the encoder that we term the embedding matrix. W has no direct equivalent in the basic ELM model and serves to take into account that any laser beam has spatial amplitude modulations and phase inhomogeneities. The elements of this matrix can be understood as biases or as entries of a fan-in matrix connecting the input signal to the reservoir. We remark that, at variance with the ELM model, in the optical architecture, input and output data, network connections, and processing functions are all complex quantities. This fact can add an important degree of freedom to the network operation [45].

In the PELM architecture, each operation is defined by the corresponding optical elements. Therefore, Eq. (2) can represent the feature space of different optical settings. For instance, in the optical classifier demonstrated by Saade et al. [41], in which amplitude-modulated data propagate through a scattering medium, p(X_j) = X_j (amplitude encoding) and M is a random complex Gaussian matrix. We validate and experimentally realize a three-component system composed of a phase-only SLM, free space, and a camera. In our case, Eq. (2) can thus be specified as follows. Phase encoding of the input data by spatial light modulation corresponds to p(x) = exp(ix), and, since a single encoder is employed, p = q, and H = G[M·exp(i(X + W))]. The nonlinear function G models the detection of the transmitted field. Using the saturation effect of the camera pixels, we have G(I) ∝ I/(I + I_s), with I = |A|², optical amplitude A, and saturation intensity I_s. For free-space optical propagation, i.e., the light distribution in the far field or in the focal plane of a lens, M corresponds to the discrete Fourier transform [46].
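As a concrete illustration, the feature mapping of Eq. (2) and the ridge-regression readout of Eq. (1) can be simulated in a few lines. The following sketch is a minimal numerical model, not the experimental code; the toy two-class data, all sizes, and the saturation constant are illustrative choices. It phase-encodes the input with a random (noise) embedding, propagates it to the far field with an FFT, applies the saturating detection, and trains the linear readout:

```python
import numpy as np

def pelm_features(X, W, M, Is=1.0):
    """Feature mapping of Eq. (2) for the free-space PELM:
    phase-encode the input, propagate to the far field (discrete
    Fourier transform), and detect with a saturating camera
    response G(I) = I / (I + Is)."""
    field = np.exp(1j * (X + W))            # phase encoding, p = q = exp(i x)
    # Zero-padded FFT gives M far-field samples; normalized by sqrt(L)
    far_field = np.fft.fft(field, n=M, axis=1) / np.sqrt(X.shape[1])
    I = np.abs(far_field) ** 2              # detected intensity
    return I / (I + Is)                     # camera saturation

def train_readout(H, T, c=1e-3):
    """Ridge-regression readout of Eq. (1): beta = (H^T H + c I)^(-1) H^T T."""
    return np.linalg.solve(H.T @ H + c * np.eye(H.shape[1]), H.T @ T)

# Toy usage on a synthetic two-class problem (all sizes illustrative)
rng = np.random.default_rng(0)
L, M, N = 32, 64, 200
W = rng.uniform(0, np.pi, L)                # fixed noise embedding
base = np.concatenate([np.full(L // 2, 2.0), np.zeros(L // 2)])
X = np.vstack([base + rng.normal(0, 0.2, (N // 2, L)),          # class 0
               base[::-1] + rng.normal(0, 0.2, (N // 2, L))])   # class 1
T = np.repeat(np.eye(2), N // 2, axis=0)    # one-hot targets
H = pelm_features(X, W, M)
beta = train_readout(H, T)
accuracy = np.mean(np.argmax(H @ beta, axis=1) == np.repeat([0, 1], N // 2))
```

With a fixed random embedding and well-structured classes, the linear readout separates the saturated far-field intensities with high training accuracy, mirroring the learning behavior discussed above.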

We first validate the free-space PELM architecture and assess the condition for learning via numerical simulations. We consider digit recognition and train our classifier on the MNIST handwritten digit database. Figure 2 reports the learning properties for two representative phase-encoding methods: (i) noise embedding and (ii) Fourier embedding. In (i), W is a disordered matrix modelling a distortion on the encoder (see Appendix A for details). It remains unchanged during both training and testing. Each input is encoded by phase modulation in [0, π], and the embedding signal is encoded in the same phase interval. The effect of the noise embedding is illustrated in Fig. 2(a), where a typical digit is shown as a phase mask. Figure 2(b) shows the classification performance of the PELM with M=1600 channels when varying the mean noise amplitude. The machine always reaches accuracies close to that allowed by training. The classification error shows a sharp decrease as noise increases from zero, and it converges to a plateau already for small perturbations. Results varying the noise correlation length at fixed noise level are shown in Fig. 2(c). This behavior indicates that optimal learning occurs for small-scale noise, i.e., when W has its maximum information content (rank L). The relevance of W for learning extends to embedding matrices without randomness. In (ii), W is a modulated carrier signal [Fig. 2(d)]. We are superimposing the input data on a carrier pulse; feature mapping corresponds to analyzing the intensity of the entire spectrum after a nonlinear amplifier. In this case, the learning process can be interpretable: the learned features point out resonances between the input signal and the carrier pulse. As shown in Fig. 2(e), a few frequencies in the carrier are sufficient for the learning transition. The PELM becomes markedly more accurate as the frequency content of the embedding signal grows.
These results reveal that the embedding matrix plays a key role in enabling data-set learning. Although, in our experiments, we control this matrix via the encoder, we note that it can be intrinsic in any optical setup, e.g., a distortion of the optical wavefront.


Fig. 2. Learning ability of the PELM architecture. The optical computing scheme is evaluated on the MNIST data set by varying the encoding properties and feature space. (a) Input digit and 2D phase mask showing its encoding by noise embedding: the input signal overlaps with a disordered matrix. PELM training and testing error for noise embedding when varying the (b) noise amplitude and (c) its correlation length, for M=1600. (d) Input vector encoded over a carrier signal (Fourier embedding). (e) Classification error versus the number of frequencies of the embedding signal. (f) Minimum testing error with the increasing number of features M. The indicated accuracies are the best ones reported with random ELM (rELM) [47], random projections (RP) [41], and kernel ELM (kELM) [47] on the same task.


Another key hyperparameter of the scheme is the number of features (channels) M, which sets the dimensionality of the feature (measurement) space. Figure 2(f) reports the testing error with varying M. The performance rapidly increases with the number of features, and only hundreds of channels are necessary for a proficient PELM. For M ≈ N/20, the error approaches 2%, a value very close to the optimal accuracy achievable with learning machines based on the same paradigm [47]. In these digital machines, the main computational cost, both for training and for performing the classification, is imputable to large matrix multiplications and their processing with nonlinear functions. In our optical device, these operations occur simply by free-space light propagation and detection. The only calculation kept on digital hardware is the offline training via simple regression.

3. EXPERIMENTAL DEVICE

The experimental setup for the free-space photonic extreme learning machine is illustrated in Fig. 3(a) and detailed in Appendix B. A phase-only SLM encodes both the input data set and the embedding matrix on the phase spatial profile of the laser beam. In practice, distinct attributes of a data set are arranged in separate blocks, a data sample corresponds to a set of phase values on these blocks, and a preselected phase mask is superimposed [inset in Fig. 3(a)]. Phase information self-mixes during light propagation, a linear process that corresponds to the matrix multiplication in Eq. (2). The far field is detected on a camera that performs the nonlinear activation function. As shown in Fig. 3(b), the resulting acquired image is a saturated function of the optical amplitude in the lens focal plane. From this data matrix, we extract the M channels that construct the PELM feature space. An example of an input sample measured in the feature space is reported in Fig. 3(c). In analogy with the numerical implementation, the device is trained by finding the weights β_i that, when applied to the optical output, allow performing the classification.


Fig. 3. Experimental implementation. (a) Sketch of the optical setup. A phase-only spatial light modulator (SLM) encodes data on the wavefront of a 532 nm continuous-wave laser. The far field in the lens focal plane is imaged on a camera. Insets show a false-color embedding matrix and training data encoded as phase blocks, respectively. (b) Detected spatial intensity distribution for a given input sample. White-colored areas reveal camera saturation in high-intensity regions, which provides the network nonlinear function. Pink boxes show some of the M spatial modes (blocks of pixels) that are used as readout channels. (c) Example of an input data in a feature space of dimension M=256, as projected by the optical device. Each bar represents an output channel, and training consists in finding the vector that properly tunes all the bar heights.


We test the optical device on various computing tasks and data sets. The aim is to show that, when the PELM is fully effective on a given task, it can also be easily and rapidly retrained for a different application. The main results are summarized in Fig. 4 and demonstrate the learning capability of the optical device. We first use the MNIST data set to prove classification on a large-scale multiclass problem. When trained using M=4096 output channels, the PELM reaches a mean classification accuracy of 92% (±0.005) over Nt=10,000 test images [Figs. 4(a) and 4(b)]. This accuracy exceeds that of recent optical convolutional processors [15] and is comparable with optical deep neural networks [8], but without the fabrication of specific optical layers and the heavy training they require. The best classification performance is not sensitive to the embedding matrix employed on the encoder, which experimentally confirms our numerical findings (Fig. 2). Figure 4(b) shows a confusion matrix measured using the random embedding method, which gives 92.18% accuracy, compared with the 92.06% [Fig. 4(a)] obtained when using Fourier embedding. Our free-space PELM hence surpasses diffractive neural networks [8] and competes with cutting-edge photonic hardware in terms of accuracy. In particular, convolutional opto-electronic setups also achieve a comparable accuracy of 92% [19]. Ultrafast photonic processors reach 95% accuracy with the help of electronic layers [15], whereas similar photonic accelerators operate with 88% accuracy [16].


Fig. 4. Experimental performance of the PELM on classification and regression tasks. Confusion matrices on the MNIST data set for a free-space PELM, which makes use of (a) Fourier and (b) random embedding (92.18% and 92.06% accuracy, M=4096). (c) Performance on the mushroom binary classification problem (95.4%). (d) Optical predictions and true values for the abalone data set. (e) Classification and (f) regression error as a function of the number of features. Rapid convergence to optimal performance is found. Experimental results are compared with numerical simulations and training errors.


By only updating the input training set and the output weights, the PELM can be quickly reconfigured for classifying different objects. We consider a binary class problem, i.e., the mushroom task, where the goal is to separate poisonous from edible mushrooms starting from a series of features (see Appendix C for details). Figure 4(c) shows that, on this data set, we achieve 95.4% accuracy, using Nt=4124 test samples and N=4000 training points, which is close to the precision of digital ELMs [36]. The excellent properties of the experimental device as a classifier generalize to regression problems. We test the device on the abalone data set (see Appendix C), a task where the input data encode a sea snail and the output is expected to furnish its age. It is a solid benchmark since it has low-dimensional inputs (L=8) but contains extreme events that are difficult to predict. Figure 4(d) shows experimental abalone predictions obtained via a feature space of dimension M=2000. The testing error, measured as normalized root mean square displacement (NRMSD), is 0.12, i.e., regression is performed with remarkable accuracy by using only linear optical propagation [44].

A key point of the PELM architecture is a testing error that rapidly converges to its optimal value as the feature space dimension increases. Experimental results demonstrating this property are shown in Figs. 4(e) and 4(f), respectively, for the MNIST and abalone data sets. Good classification/regression performance is maintained even for a number of trained channels that is very small compared with the data set size. For instance, useful abalone predictions can be obtained with only M=128 channels, i.e., with a training process that consists only in inverting a modest matrix (128×128 elements). Heuristically, we find that the accuracy reaches a limiting value at M ≈ N/20, in agreement with numerical simulations for the same machine hyperparameters. The same behavior is found for training errors, which indicates that input data are already completely separated by the optical setup.

It is worth noting that the device does not need training at each use. Once we learn a given data set, we store the corresponding output weights and program a different problem; then, we are able to accomplish any of the posed tasks on demand. The diverse classification and regression instances in Fig. 4 can be tested without any sequential order, simply by specifying the type of task they belong to.

4. DISCUSSION

A. Improving the Photonic Hardware

The limit set by the training errors in Figs. 4(e) and 4(f) allows us to identify the factors that bound the computational resolution of the optical machine. We ascribe residual errors to practical nonidealities of the device. The finite precision of both encoding and readout, including noise in optical intensities, introduces errors that are absent in the digital implementation [48]. Considering a camera precision of 8 bit in the numerical model, we find a noticeable decrease of the maximum accuracy. This indicates that improved performance is attainable by fine-tuning the optical components. Flickering of the phase modulator, which gives a tiny but variable inaccuracy each time the machine is interrogated, is identified as another relevant source of error. Moreover, optical and mechanical fluctuations of the tabletop optical line during training can result in further inconsistencies. These fluctuations can be compensated in future experiments using incremental learning [36], a refined technique that allows the weights to be adjusted adaptively while training is ongoing. Training by use of sparse regression methods may also be useful when dealing with larger data sets.

B. Comparison with Digital Kernel Methods

The learning principle of the PELM lies at the basis of various kernel methods [49]. In kernel classification, the mapping of input data to the high-dimensional feature space is not explicit, and correlations between samples are evaluated through a kernel function over the input space. Such a kernel K contains the inner products of all pairs of data points in the feature space, i.e., all the information sufficient to perform data separation. When trained via ridge regression, the output Y of a kernel classifier can be generally expressed as

Y = K(K + cI)⁻¹T,  (3)
where K_ij = k(X_i, X_j) is an N×N kernel matrix, with k being a given kernel function. Comparing Eqs. (3) and (1) indicates that, in our PELM scheme, the free-space optical setup acts as an effective photonic kernel [41]. To provide a comparison with standard digital kernels, we have evaluated their performance on the MNIST data set. We exploit the modularity of the approach, i.e., we use the same training algorithm [Eq. (3)] with different kernel functions. We focus on the relevant case of the Gaussian kernel k(X_i, X_j) = exp(−γ‖X_i − X_j‖²), with tunable parameter γ. Using this kernel, we obtain an average classification accuracy of 96%, to be compared with the 85% given by a linear kernel (K = XXᵀ). Smaller classification errors can be obtained with more elaborate kernel functions [41]. Although the evaluation of the kernel matrix elements can be time-efficient, training requires storing and inverting this large matrix (N×N), which becomes unfeasible on large-scale data sets. This enormous memory consumption represents a major drawback, which makes digital kernels energetically inefficient with respect to the photonic implementation. In fact, in the PELM scheme, we make use of explicit feature mapping, and the output matrix used for training has a much smaller size (M×M). This implies that the computational cost for training the device does not depend on the data set size N but only on the selected number of channels M. The advantage of the proposed approach extends also to kernel methods based on different algorithms. For instance, a support-vector machine on MNIST takes about 10⁷ multiply–accumulate (MAC) operations per recognition [50], whereas in our PELM the optical processor enables adequate classification of a single handwritten digit using only 10⁴ digital MAC operations.
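For reference, the kernel classifier of Eq. (3) with a Gaussian kernel can be sketched as follows. This is a toy digital implementation on synthetic data; the sizes and the values of γ and c are illustrative. Note how training stores and inverts the full N×N kernel matrix, which is the scalability bottleneck discussed above:

```python
import numpy as np

def gaussian_kernel(A, B, gamma=0.05):
    """Pairwise Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def kernel_classify(X_train, T, X_test, c=1e-3, gamma=0.05):
    """Kernel ridge classifier of Eq. (3): Y = K(K + c I)^(-1) T.
    Training manipulates the full N x N kernel matrix."""
    K = gaussian_kernel(X_train, X_train, gamma)
    alpha = np.linalg.solve(K + c * np.eye(len(K)), T)
    return gaussian_kernel(X_test, X_train, gamma) @ alpha

# Toy usage: two well-separated 8-dimensional classes
rng = np.random.default_rng(1)
X_train = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(3, 1, (50, 8))])
T = np.repeat(np.eye(2), 50, axis=0)        # one-hot targets
X_test = np.vstack([rng.normal(0, 1, (20, 8)), rng.normal(3, 1, (20, 8))])
pred = np.argmax(kernel_classify(X_train, T, X_test), axis=1)
test_accuracy = np.mean(pred == np.repeat([0, 1], 20))
```

Swapping `gaussian_kernel` for `X @ Y.T` reproduces the linear-kernel baseline mentioned in the text.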

C. Application Potentials

Besides computing effectiveness comparable with its digital counterpart, the PELM hardware offers several advantages that are promising for fast processing of big data, especially for inputs that naturally present themselves as optical signals. In fact, unlike deep optical neural networks [51,52], training is simple and scales favorably with the data set size. It can also be performed online with affordable computational resources. Once trained, forward information propagates optically in a fully passive way and in free space. This can provide a key advantage with respect to ELM algorithms and kernel methods both in terms of scalability and latency. In fact, the matrix multiplication in Eq. (2), which maps input data to the feature space, requires on a digital computer a time and memory consumption that grows quadratically with the data dimension L. The optical hardware performs this operation totally in parallel, independently of the input data size, and without power dissipation. Since only M scalar multiplications are necessary from the optical detection to the final classification, the PELM has a low latency, which is independent of the data-set dimension. This is a crucial property in applications that require fast responses, such as real-time computer vision. The main speed limitation of our free-space PELM is related to the frame rate of the input optical modulator. Liquid-crystal SLMs currently have typical frame rates of the order of 100 Hz, but phase modulators based on micro-electro-mechanical, electro-optical, or acousto-optical technologies are approaching GHz frequencies [53,54]. With similar components available, PELM hardware could perform R = L × M × 10⁹ operations per second (OPS), a value that, for large L, can overcome the current limits of electronic computing (peta OPS).
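The throughput estimate above is simple arithmetic. As an illustration, with an MNIST-sized input, the channel count used in our MNIST experiment, and a hypothetical GHz-class modulator (all values illustrative):

```python
# Back-of-envelope throughput R = L x M x 10^9 OPS for a GHz-rate modulator
L = 784            # e.g., a 28 x 28 input image (MNIST-sized)
M = 4096           # readout channels, as in the MNIST experiment
frame_rate = 1e9   # hypothetical GHz-class phase modulator
R = L * M * frame_rate
print(f"R = {R:.2e} operations per second")  # ~3.2e15 OPS, i.e., peta-OPS scale
```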

The absence of any optical element or medium along the network path, which differentiates our scheme from all other optical neuromorphic devices previously realized, is a valuable aspect for various reasons. The tiny optical absorption and scattering of light in air imply that our scheme needs only low-power optical sources (μW or lower), i.e., it requires minimal energy consumption. More importantly, our device is extremely stable to mechanical and optical perturbations, because its operation does not depend on components that must be kept completely static after training. This suggests that the proposed optical processor is promising for use in dynamic environments. In fact, tolerance to external disturbances requires only maintaining the relative alignment between the SLM plane and the camera plane, which can be achieved with mechanical or optical stabilization. We thus expect that the free-space PELM can operate even when subject to shocks. For small perturbations, tolerance is already guaranteed by the finite spatial extent of the input and output modes. Specifically, the output channels are made by averaging over several camera pixels (see Appendix B), which reduces the effect of vibrations. Therefore, we expect that retraining the device would be necessary only when the perturbation is large enough that a change of the channel positions cannot be retrieved via some feedback mechanism.

In view of edge-computing applications, the PELM can be further adapted toward ease of use and energy efficiency. In fact, using optical signals from the environment as input data, the encoding operation performed by the SLM reduces to the embedding alone. This operation can be integrated directly into optical propagation, removing the need for a phase modulator. For example, we can split the coherent input field and recombine it after a digital micromirror device (DMD) has performed the embedding. The intensity extracted from a selected plane along the optical path gives the output matrix, which can be enriched with phase and spectral information. Similar schemes promise a miniaturization of our free-space device and its operation as a fast and stable optical processor.

5. CONCLUSION

Nowadays, artificial intelligence is permeating the field of photonics [55,56] and vice versa [57]. Optical platforms are enabling computing functionalities with unique features, ranging from photonic Ising machines for complex optimization problems [58–65] to optical devices for cryptography [66]. We have realized a novel photonic neuromorphic computing device that is able to perform classification and feature extraction only by exploiting optical propagation in free space. Our results demonstrate that fabricated optical networks, or complex physical processes and materials, are not mandatory ingredients for performing effective machine learning on an optical setup. All the essential factors for learning can be included in optical propagation via encoding and decoding methods. On this principle, we demonstrated a photonic extreme learning machine that, given its unique adaptability and intrinsic stability, is attractive for future processing of optical data in real-time and dynamic conditions. More generally, our approach envisions the exceptional possibility of harnessing any wave system as an intelligent device that learns from data, in photonics and far beyond.

APPENDIX A: NETWORK DETAILS

1. ELM Framework

The basic ELM structure is a single-layer feedforward neural network in which hidden nodes are not tuned [35]. For real-valued random internal weights, the scheme matches single-layer random neural networks [37,38]. Considering an N×L input data set X and one output mode, the output function is

Y = Hβ = Σ_{j=1}^M H_j(X)β_j,
where β is the weight vector determined by training, and H(X) = H is the N×M hidden-layer matrix outcome. H(X) = [g(X_1), …, g(X_N)], where g(x) maps the input sample x from the L-dimensional input space to the M-dimensional feature space. Under appropriate conditions, the machine is able to interpolate any continuous function (universal interpolator) and acts as a universal classifier [36]. Given the target labels T, training corresponds to solving the ridge regression problem argmin_β (‖Hβ − T‖² + c‖β‖²), where c is a parameter controlling the trade-off between the training error and the regularization. This constrained optimization can be recast as a dual optimization problem [36]. A solution that is computationally affordable for large data sets is given by Eq. (1): β = (HᵀH + cI)⁻¹HᵀT. In this case, matrix inversion involves the M×M matrix HᵀH, which makes the method scalable and effective for large-scale applications. The output function of the classifier is thus
Y = H(HᵀH + cI)⁻¹HᵀT.

In the case of a single output node performing binary classification [as for the problem in Fig. 4(c)], the decision function is f(X) = sign{H(X)[Hᵀ(X)H(X) + cI]⁻¹Hᵀ(X)T}. It generalizes to a multiclass classifier with multiple output nodes as f(X) = argmax_k Y_k(X), where Y_k(X) denotes the output Y of the kth node.
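The two decision rules can be sketched as follows; this is a minimal digital example, and the tiny hand-made feature matrix H below is purely illustrative:

```python
import numpy as np

def elm_train(H, T, c=1e-3):
    """Ridge-regression solution beta = (H^T H + c I)^(-1) H^T T of Eq. (1)."""
    return np.linalg.solve(H.T @ H + c * np.eye(H.shape[1]), H.T @ T)

def decide_binary(H, beta):
    """Single-output binary decision f(X) = sign(H beta), labels in {-1, +1}."""
    return np.sign(H @ beta)

def decide_multiclass(H, beta):
    """Multiclass decision f(X) = argmax_k Y_k(X) over the output nodes."""
    return np.argmax(H @ beta, axis=1)

# Tiny worked example with hand-made features (illustrative values)
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.1], [0.1, 1.0]])
t_binary = np.array([1.0, -1.0, 1.0, -1.0])
labels = decide_binary(H, elm_train(H, t_binary, c=1e-6))
T_onehot = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
classes = decide_multiclass(H, elm_train(H, T_onehot, c=1e-6))
```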

2. Encoding Methods

We consider phase encoding of the input data in all the presented results (phase-only light modulation). The embedding matrix $W$ is a fixed signal, independent of the specific input data, which characterizes the phase encoder and modifies the PELM feature space. In the noise embedding method, $W$ is chosen as a uniformly distributed random matrix with maximum amplitude $\rho$ (noise level) made of blocks of size $l$ (noise correlation length). In the Fourier embedding method, we construct the embedding matrix $W=[W_{1},\ldots,W_{n}]$ from $n$ frequencies, $W_{k}=\sum_{\omega=1}^{n}(a_{\omega}/n)\exp(i\omega k/n)$, with coefficients $a_{\omega}$ of equal amplitude and arbitrary phase. Both the embedding signal and the input data are encoded within the phase interval $[0,\pi]$. In experiments, the preselected embedding matrix is discretized in gray levels and superimposed on each input sample.
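As a numerical sketch, the two embedding signals might be generated as follows; the block size, number of frequencies, seeds, and the rescaling of the Fourier phase into $[0,\pi]$ are illustrative assumptions, not the experimental parameters:

```python
import numpy as np

def noise_embedding(L, rho=np.pi, block=4, seed=0):
    """Uniform random phase mask with maximum amplitude rho (noise level),
    piecewise constant over blocks of size `block` (correlation length)."""
    rng = np.random.default_rng(seed)
    n_blocks = -(-L // block)                      # ceiling division
    levels = rng.uniform(0.0, rho, size=n_blocks)
    return np.repeat(levels, block)[:L]

def fourier_embedding(L, n=8, seed=0):
    """W_k = sum_w (a_w/n) exp(i w k/n) with unit-amplitude, random-phase
    coefficients a_w; the phase of W is rescaled into [0, pi]."""
    rng = np.random.default_rng(seed)
    a = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=n))
    k = np.arange(L)
    omega = np.arange(1, n + 1)
    W = (a[:, None] / n * np.exp(1j * np.outer(omega, k) / n)).sum(axis=0)
    return (np.angle(W) + np.pi) / 2.0             # map (-pi, pi] to [0, pi]
```

In the experiment the analogous mask is quantized to the SLM gray levels before being added to the input phase pattern.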

APPENDIX B: EXPERIMENTAL SETUP AND DEVICE TRAINING

A continuous-wave laser beam with wavelength λ = 532 nm is expanded, polarized, and directed onto a reflective phase-only SLM (Hamamatsu X13138, 1280 × 1024 pixels, 12.5 μm pixel pitch, 60 Hz maximum frame rate), which encodes input data within an embedding phase mask by pure phase modulation. By grouping several SLM pixels, the modulator active area is divided into L input nodes, with each node having 210 phase levels equally distributed in the 0–2π interval. Phase-modulated light propagates in free space through a focusing plano-convex lens (f = 150 mm). The optical field in the lens focal plane is collected by an imaging objective (NA = 0.4) and detected by a CCD camera with 8 bit (256 gray levels) intensity sensitivity. To obtain a nonlinear saturated function of the transmitted field, the camera exposure is manually increased until the images are overexposed. We note that similar performance is found when using the camera in the unsaturated regime, which corresponds to a square-law nonlinear response. M output channels are preselected within the camera region of interest; the signal in each channel is obtained by binning over a few camera pixels (10 × 10 for MNIST classification) to reduce detection noise.

Training is performed by loading the N input samples one by one on the SLM while keeping the embedding matrix fixed. At each training step, values from the M output channels are acquired [Fig. 3(c)] and stored on a conventional computer controlling the setup. The readout weights are obtained by applying Eq. (1) to the entire set of measurements. They are readily used in the testing phase, in which each of the Nt testing samples is sent through the photonic machine and passively classified by weighting the detected output. Different tasks can also be performed on demand without retraining the device. As expected, practical effects limit the time for which the device maintains its ability to perform well on a given task without retraining. We found that good performance is maintained for more than 1 h. Over longer times, laser fluctuations and local variations of the SLM response due to thermal effects and liquid-crystal relaxation become relevant.

APPENDIX C: DATA SETS FOR CLASSIFICATION AND REGRESSION

Recognition of handwritten digits is tested on the MNIST data set, a standard benchmark for multiclass classification. The data set, which includes 10 classes, consists of N = 60,000 training samples and Nt = 10,000 digits for testing, each of size L = 28 × 28. Although state-of-the-art convolutional neural networks reach accuracies exceeding 99.8% on MNIST (https://github.com/Matuzas77/MNIST-0.17), the task remains a basic test for any novel machine learning device. In fact, superior algorithms are application-specific and require massive data processing.

The mushroom data set (https://archive.ics.uci.edu/ml/datasets/Mushroom) is a binary-class data set with relatively large size and low dimension. It includes 8124 samples with L = 22 features in random order. The goal is to separate edible from poisonous mushrooms. A typical ELM accuracy is 88.9% for a split ratio N/Nt ≈ 0.23 [36].

The abalone data set (https://archive.ics.uci.edu/ml/datasets/Abalone) is one of the most widely used benchmarks for machine learning and concerns predicting the age of sea snails from physical measurements. Each training point has L = 8 attributes, and the entire data set has 4177 samples. Digital ELMs report testing errors around 0.07 for N/Nt ≈ 2. Errors are evaluated using the root mean square displacement $\mathrm{RMSD}=\sqrt{\sum_{i=1}^{N_{t}}(Y_{i}-T_{i})^{2}/N_{t}}$.
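The error metric is a one-liner; a minimal sketch (with `Y` the predicted outputs and `T` the targets over the Nt test samples):

```python
import numpy as np

def rmsd(Y, T):
    """Root mean square displacement:
    RMSD = sqrt(sum_i (Y_i - T_i)^2 / N_t)."""
    Y = np.asarray(Y, dtype=float)
    T = np.asarray(T, dtype=float)
    return float(np.sqrt(np.mean((Y - T) ** 2)))

# e.g., rmsd([1.0, 2.0], [1.0, 0.0]) -> sqrt((0 + 4)/2) = sqrt(2)
```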

Funding

Ministero dell’Istruzione, dell’Università e della Ricerca (PRIN PELM 20177PSCKT).

Acknowledgment

We thank MD Deen Islam and V. H. Santos for technical support in the laboratory.

Disclosures

The authors declare no conflicts of interest.

Data Availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

REFERENCES

1. S. Haykin, Neural Networks and Learning Machines (Pearson Prentice Hall, 2008).

2. E. Strubell, A. Ganesh, and A. McCallum, “Energy and policy considerations for deep learning in NLP,” arXiv:1906.02243 (2019).

3. G. Wetzstein, A. Ozcan, S. Gigan, S. Fan, D. Englund, M. Soljačić, C. Denz, D. A. B. Miller, and D. Psaltis, “Inference in artificial intelligence with deep optics and photonics,” Nature 588, 39–47 (2020). [CrossRef]  

4. N. H. Farhat, D. Psaltis, A. Prata, and E. Paek, “Optical implementation of the Hopfield model,” Appl. Opt. 24, 1469–1475 (1985). [CrossRef]  

5. D. Psaltis, D. Brady, and K. Wagner, “Adaptive optical networks using photorefractive crystals,” Appl. Opt. 27, 1752–1759 (1988). [CrossRef]  

6. C. Denz, Optical Neural Networks (Springer, 1998).

7. Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljacic, “Deep learning with coherent nanophotonic circuits,” Nat. Photonics 11, 441–446 (2017). [CrossRef]  

8. X. Lin, Y. Rivenson, N. T. Yardimci, M. Veli, Y. Luo, M. Jarrahi, and A. Ozcan, “All-optical machine learning using diffractive deep neural networks,” Science 361, 1004–1008 (2018). [CrossRef]  

9. N. M. Estakhri, B. Edwards, and N. Engheta, “Inverse-designed metastructures that solve equations,” Science 363, 1333–1338 (2019). [CrossRef]  

10. J. Bueno, S. Maktoobi, L. Froehly, I. Fischer, M. Jacquot, L. Larger, and D. Brunner, “Reinforcement learning in a large-scale photonic recurrent neural network,” Optica 5, 756–760 (2018). [CrossRef]  

11. Y. Zuo, B. Li, Y. Zhao, Y. Jiang, Y. C. Chen, P. Chen, G. B. Jo, J. Liu, and S. Du, “All-optical neural network with nonlinear activation functions,” Optica 6, 1132–1137 (2019). [CrossRef]  

12. A. N. Tait, T. F. de Lima, E. Zhou, A. X. Wu, M.-A. Nahmias, B. J. Shastri, and P. R. Prucnal, “Neuromorphic photonic networks using silicon photonic weight banks,” Sci. Rep. 7, 7430 (2017). [CrossRef]  

13. J. Feldmann, N. Youngblood, C. D. Wright, H. Bhaskaran, and W. H. P. Pernice, “All-optical spiking neurosynaptic networks with self-learning capabilities,” Nature 569, 208–214 (2019). [CrossRef]  

14. J. Feldmann, N. Youngblood, M. Karpov, H. Gehring, X. Li, M. Stappers, M. Le Gallo, X. Fu, A. Lukashchuk, A. S. Raja, J. Liu, C. D. Wright, A. Sebastian, T. J. Kippenberg, W. H. P. Pernice, and H. Bhaskaran, “Parallel convolutional processing using an integrated photonic tensor core,” Nature 589, 52–58 (2021). [CrossRef]  

15. X. Xu, M. Tan, B. Corcoran, J. Wu, A. Boes, T. G. Nguyen, S. T. Chu, B. E. Little, D. G. Hicks, R. Morandotti, A. Mitchell, and D. J. Moss, “11 TOPS photonic convolutional accelerator for optical neural networks,” Nature 589, 44–51 (2021). [CrossRef]  

16. X. Xu, M. Tan, B. Corcoran, J. Wu, T. G. Nguyen, A. Boes, S. T. Chu, B. E. Little, R. Morandotti, A. Mitchell, D. G. Hicks, and D. J. Moss, “Photonic perceptron based on a Kerr microcomb for high-speed, scalable, optical neural networks,” Laser Photon. Rev. 14, 2000070 (2020). [CrossRef]  

17. J. Spall, X. Guo, T. D. Barrett, and A. I. Lvovsky, “Fully reconfigurable coherent optical vector-matrix multiplication,” Opt. Lett. 45, 5752–5755 (2020). [CrossRef]  

18. J. Chang, V. Sitzmann, X. Dun, W. Heidrich, and G. Wetzstein, “Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification,” Sci. Rep. 8, 12324 (2018). [CrossRef]  

19. M. Miscuglio, Z. Hu, S. Li, J. George, R. Capanna, P. M. Bardet, P. Gupta, and V. J. Sorger, “Massively parallel amplitude-only Fourier neural network,” Optica 7, 1812–1819 (2020). [CrossRef]  

20. C. Wu, H. Yu, S. Lee, R. Peng, I. Takeuchi, and M. Li, “Programmable phase-change metasurfaces on waveguides for multimode photonic convolutional neural network,” Nat. Commun. 12, 96 (2021). [CrossRef]  

21. T. W. Hughes, I. A. Williamson, M. Minkov, and S. Fan, “Wave physics as an analog recurrent neural network,” Sci. Adv. 5, eaay6946 (2019). [CrossRef]  

22. R. Hamerly, L. Bernstein, A. Sludds, M. Soljačić, and D. Englund, “Large-scale optical neural networks based on photoelectric multiplication,” Phys. Rev. X 9, 021032 (2019). [CrossRef]  

23. T. W. Hughes, M. Minkov, Y. Shi, and S. Fan, “Training of photonic neural networks through in situ backpropagation and gradient measurement,” Optica 5, 864–871 (2018). [CrossRef]  

24. M. Lukosevicius and H. Jaeger, “Reservoir computing approaches to recurrent neural network training,” Comput. Sci. Rev. 3, 127–149 (2009). [CrossRef]  

25. C. Gallicchio, A. Micheli, and L. Pedrelli, “Deep reservoir computing: a critical experimental analysis,” Neurocomputing 268, 87–99 (2017). [CrossRef]  

26. G. Neofotistos, M. Mattheakis, G. D. Barmparis, J. Hizanidis, G. P. Tsironis, and E. Kaxiras, “Machine learning with observers predicts complex spatiotemporal behavior,” Front. Phys. 7, 24 (2019). [CrossRef]  

27. D. Brunner, M. C. Soriano, C. Mirasso, and I. Fischer, “Parallel photonic information processing at gigabyte per second data rates using transient states,” Nat. Commun. 4, 1364 (2013). [CrossRef]  

28. Q. Vinckier, F. Duport, A. Smerieri, K. Vandoorne, P. Bienstman, M. Haelterman, and S. Massar, “High-performance photonic reservoir computer based on a coherently driven passive cavity,” Optica 2, 438–446 (2015). [CrossRef]  

29. L. Larger, A. Baylón-Fuentes, R. Martinenghi, V. S. Udaltsov, Y. K. Chembo, and M. Jacquot, “High-speed photonic reservoir computing using a time-delay based architecture: million words per second classification,” Phys. Rev. X 7, 011015 (2017). [CrossRef]  

30. P. Antonik, N. Marsal, D. Brunner, and D. Rontani, “Human action recognition with a large-scale brain-inspired photonic computer,” Nat. Mach. Intell. 1, 530–537 (2019). [CrossRef]  

31. A. Röhm, L. Jaurigue, and K. Lüdge, “Reservoir computing using laser networks,” IEEE J. Sel. Top. Quantum Electron. 26, 7700108 (2020). [CrossRef]  

32. U. Paudel, M. Luengo-Kovac, J. Pilawa, T. J. Shaw, and G. C. Valley, “Classification of time-domain waveforms using a speckle-based optical reservoir computer,” Opt. Express 28, 1225–1237 (2020). [CrossRef]  

33. M. Rafayelyan, J. Dong, Y. Tan, F. Krzakala, and S. Gigan, “Large-scale optical reservoir computing for spatiotemporal chaotic systems prediction,” Phys. Rev. X 10, 041037 (2020). [CrossRef]  

34. D. Pierangeli, V. Palmieri, G. Marcucci, C. Moriconi, G. Perini, M. De Spirito, M. Papi, and C. Conti, “Living optical random neural network with three dimensional tumor spheroids for cancer morphodynamics,” Commun. Phys. 3, 160 (2020). [CrossRef]  

35. G. B. Huang, Q. Y. Zhu, and C. K. Siew, “Extreme learning machine: theory and applications,” Neurocomputing 70, 489–501 (2006). [CrossRef]  

36. G. B. Huang, H. Zhou, X. Ding, and R. Zhang, “Extreme learning machine for regression and multiclass classification,” IEEE Trans. Syst., Man, Cybern. B, Cybern. 42, 513–529 (2012). [CrossRef]  

37. W. F. Schmidt, M. A. Kraaijveld, and R. P. Duin, “Feed forward neural networks with random weights,” in International Conference on Pattern Recognition (IEEE Computer Society, 1992).

38. Y. H. Pao, G. H. Park, and D. J. Sobajic, “Learning and generalization characteristics of the random vector functional-link net,” Neurocomputing 6, 163–180 (1994). [CrossRef]  

39. J. A. K. Suykens and J. Vandewalle, “Least squares support vector machine classifiers,” Neural Process. Lett. 9, 293–300 (1999). [CrossRef]  

40. S. An, W. Liu, and S. Venkatesh, “Face recognition using kernel ridge regression,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2007), pp. 1–7.

41. A. Saade, F. Caltagirone, I. Carron, L. Daudet, A. Dremeau, S. Gigan, and F. Krzakala, “Random projections through multiple optical scattering: approximating kernels at the speed of light,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2016), pp. 6215–6219.

42. S. Sunada, K. Kanno, and A. Uchida, “Using multidimensional speckle dynamics for high-speed, large-scale, parallel photonic computing,” Opt. Express 28, 30349–30361 (2020). [CrossRef]  

43. G. Marcucci, D. Pierangeli, and C. Conti, “Theory of neuromorphic computing by waves: machine learning by rogue waves, dispersive shocks, and solitons,” Phys. Rev. Lett. 125, 093901 (2020). [CrossRef]  

44. U. Tegin, M. Yildirim, I. Oguz, C. Moser, and D. Psaltis, “Scalable optical learning operator,” arXiv:2012.12404 (2020).

45. H. Zhang, M. Gu, X. D. Jiang, J. Thompson, H. Cai, S. Paesani, R. Santagati, A. Laing, Y. Zhang, M. H. Yung, Y. Z. Shi, F. K. Muhammad, G. Q. Lo, X. S. Luo, B. Dong, D. L. Kwong, L. C. Kwek, and A. Q. Liu, “An optical neural chip for implementing complex-valued neural network,” Nat. Commun. 12, 457 (2021). [CrossRef]  

46. J. W. Goodman, Introduction to Fourier Optics (Roberts & Company, 2005).

47. L. C. C. Kasun, H. Zhou, G. B. Huang, and C. M. Vong, “Representational learning with extreme learning machine for big data,” IEEE Intell. Syst. 28, 31–34 (2013). [CrossRef]  

48. D. Pierangeli, M. Rafayelyan, C. Conti, and S. Gigan, “Scalable spin-glass optical simulator,” Phys. Rev. Appl. 15, 034087 (2021). [CrossRef]  

49. J. Shawe-Taylor and N. Cristianini, Kernel Methods for Pattern Analysis (Cambridge University, 2004).

50. Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proc. IEEE 86, 2278–2324 (1998). [CrossRef]  

51. T. Yan, J. Wu, T. Zhou, H. Xie, F. Xu, J. Fan, L. Fang, X. Lin, and Q. Dai, “Fourier-space diffractive deep neural network,” Phys. Rev. Lett. 123, 023901 (2019). [CrossRef]  

52. Y. Luo, D. Mengu, N. T. Yardimci, Y. Rivenson, M. Veli, M. Jarrahi, and A. Ozcan, “Design of task-specific optical systems using broadband diffractive neural networks,” Light Sci. Appl. 8, 112 (2019). [CrossRef]  

53. O. Tzang, E. Niv, S. Singh, S. Labouesse, G. Myatt, and R. Piestun, “Wavefront shaping in complex media with a 350 kHz modulator via a 1D-to-2D transform,” Nat. Photonics 13, 788–793 (2019). [CrossRef]  

54. B. Braverman, A. Skerjanc, N. Sullivan, and R. W. Boyd, “Fast generation and detection of spatial modes of light using an acousto-optic modulator,” Opt. Express 28, 29112–29121 (2020). [CrossRef]  

55. D. Piccinotti, K. F. MacDonald, S. Gregory, I. Youngs, and N. I. Zheludev, “Artificial intelligence for photonics and photonic materials,” Rep. Prog. Phys. 84, 012401 (2021). [CrossRef]  

56. G. Genty, L. Salmela, J. M. Dudley, D. Brunner, A. Kokhanovskiy, S. Kobtsev, and S. K. Turitsyn, “Machine learning and applications in ultrafast photonics,” Nat. Photonics 15, 91–101 (2020). [CrossRef]  

57. S. H. Rudy, S. L. Brunton, J. L. Proctor, and J. N. Kutz, “Data-driven discovery of partial differential equations,” Sci. Adv. 3, e1602614 (2017). [CrossRef]  

58. P. L. McMahon, A. Marandi, Y. Haribara, R. Hamerly, C. Langrock, S. Tamate, T. Inagaki, H. Takesue, S. Utsunomiya, K. Aihara, R. L. Byer, M. M. Fejer, H. Mabuchi, and Y. Yamamoto, “A fully-programmable 100-spin coherent Ising machine with all-to-all connections,” Science 354, 614–617 (2016). [CrossRef]  

59. T. Inagaki, Y. Haribara, K. Igarashi, T. Sonobe, S. Tamate, T. Honjo, A. Marandi, P. L. McMahon, T. Umeki, K. Enbutsu, O. Tadanaga, H. Takenouchi, K. Aihara, K.-I. Kawarabayashi, K. Inoue, S. Utsunomiya, and H. Takesue, “A coherent Ising machine for 2000-node optimization problems,” Science 354, 603–606 (2016). [CrossRef]  

60. F. Böhm, G. Verschaffelt, and G. Van der Sande, “A poor man’s coherent Ising machine based on opto-electronic feedback systems for solving optimization problems,” Nat. Commun. 10, 3538 (2019). [CrossRef]  

61. D. Pierangeli, G. Marcucci, and C. Conti, “Large-scale photonic Ising machine by spatial light modulation,” Phys. Rev. Lett. 122, 213902 (2019). [CrossRef]  

62. D. Pierangeli, G. Marcucci, and C. Conti, “Adiabatic evolution on a spatial-photonic Ising machine,” Optica 7, 1535–1543 (2020). [CrossRef]  

63. M. Prabhu, C. Roques-Carmes, Y. Shen, N. Harris, L. Jing, J. Carolan, R. Hamerly, T. Baehr-Jones, M. Hochberg, V. Čeperić, J. D. Joannopoulos, D. R. Englund, and M. Soljačić, “Accelerating recurrent Ising machines in photonic integrated circuits,” Optica 7, 551–558 (2020). [CrossRef]  

64. K. P. Kalinin, A. Amo, J. Bloch, and N. G. Berloff, “Polaritonic XY-Ising machine,” Nanophotonics 9, 4127–4138 (2020). [CrossRef]  

65. Y. Okawachi, M. Yu, J. K. Jang, X. Ji, Y. Zhao, B. Y. Kim, M. Lipson, and A. L. Gaeta, “Demonstration of chip-based coupled degenerate optical parametric oscillators for realizing a nanophotonic spin-glass,” Nat. Commun. 11, 4119 (2020). [CrossRef]  

66. A. Di Falco, V. Mazzone, A. Cruz, and A. Fratalocchi, “Perfect secrecy cryptography via mixing of chaotic waves in irreversible time-varying silicon chips,” Nat. Commun. 10, 5827 (2019). [CrossRef]  

[Crossref]

J. Spall, X. Guo, T. D. Barrett, and A. I. Lvovsky, “Fully reconfigurable coherent optical vector-matrix multiplication,” Opt. Lett. 45, 5752–5755 (2020).
[Crossref]

G. Wetzstein, A. Ozcan, S. Gigan, S. Fan, D. Englund, M. Soljačić, C. Denz, D. A. B. Miller, and D. Psaltis, “Inference in artificial intelligence with deep optics and photonics,” Nature 588, 39–47 (2020).
[Crossref]

M. Miscuglio, Z. Hu, S. Li, J. George, R. Capanna, P. M. Bardet, P. Gupta, and V. J. Sorger, “Massively parallel amplitude-only Fourier neural network,” Optica 7, 1812–1819 (2020).
[Crossref]

A. Röhm, L. Jaurigue, and K. Lüdge, “Reservoir computing using laser networks,” IEEE J. Sel. Top. Quantum Electron. 26, 7700108 (2020).
[Crossref]

U. Paudel, M. Luengo-Kovac, J. Pilawa, T. J. Shaw, and G. C. Valley, “Classification of time-domain waveforms using a speckle-based optical reservoir computer,” Opt. Express 28, 1225–1237 (2020).
[Crossref]

M. Rafayelyan, J. Dong, Y. Tan, F. Krzakala, and S. Gigan, “Large-scale optical reservoir computing for spatiotemporal chaotic systems prediction,” Phys. Rev. X 10, 041037 (2020).
[Crossref]

D. Pierangeli, V. Palmieri, G. Marcucci, C. Moriconi, G. Perini, M. De Spirito, M. Papi, and C. Conti, “Living optical random neural network with three dimensional tumor spheroids for cancer morphodynamics,” Commun. Phys. 3, 160 (2020).
[Crossref]

2019 (13)

P. Antonik, N. Marsal, D. Brunner, and D. Rontani, “Human action recognition with a large-scale brain-inspired photonic computer,” Nat. Mach. Intell. 1, 530–537 (2019).
[Crossref]

J. Feldmann, N. Youngblood, C. D. Wright, H. Bhaskaran, and W. H. P. Pernice, “All-optical spiking neurosynaptic networks with self-learning capabilities,” Nature 569, 208–214 (2019).
[Crossref]

G. Neofotistos, M. Mattheakis, G. D. Barmparis, J. Hizanidis, G. P. Tsironis, and E. Kaxiras, “Machine learning with observers predicts complex spatiotemporal behavior,” Front. Phys. 7, 24 (2019).
[Crossref]

N. M. Estakhri, B. Edwards, and N. Engheta, “Inverse-designed metastructures that solve equations,” Science 363, 1333–1338 (2019).
[Crossref]

Y. Zuo, B. Li, Y. Zhao, Y. Jiang, Y. C. Chen, P. Chen, G. B. Jo, J. Liu, and S. Du, “All-optical neural network with nonlinear activation functions,” Optica 6, 1132–1137 (2019).
[Crossref]

T. W. Hughes, I. A. Williamson, M. Minkov, and S. Fan, “Wave physics as an analog recurrent neural network,” Sci. Adv. 5, eaay6946 (2019).
[Crossref]

R. Hamerly, L. Bernstein, A. Sludds, M. Soljačić, and D. Englund, “Large-scale optical neural networks based on photoelectric multiplication,” Phys. Rev. X 9, 021032 (2019).
[Crossref]

F. Böhm, G. Verschaffelt, and G. Van der Sande, “A poor—coherent Ising machine based on opto-electronic feedback systems for solving optimization problems,” Nat. Commun. 10, 3538 (2019).
[Crossref]

D. Pierangeli, G. Marcucci, and C. Conti, “Large-scale photonic Ising machine by spatial light modulation,” Phys. Rev. Lett. 122, 213902 (2019).
[Crossref]

T. Yan, J. Wu, T. Zhou, H. Xie, F. Xu, J. Fan, L. Fang, X. Lin, and Q. Dai, “Fourier-space diffractive deep neural network,” Phys. Rev. Lett. 123, 023901 (2019).
[Crossref]

Y. Luo, D. Mengu, N. T. Yardimci, Y. Rivenson, M. Veli, M. Jarrahi, and A. Ozcan, “Design of task-specific optical systems using broadband diffractive neural networks,” Light Sci. Appl. 8, 112 (2019).
[Crossref]

O. Tzang, E. Niv, S. Singh, S. Labouesse, G. Myatt, and R. Piestun, “Wavefront shaping in complex media with a 350  kHz modulator via a 1D-to-2D transform,” Nat. Photonics 13, 788–793 (2019).
[Crossref]

A. Di Falco, V. Mazzone, A. Cruz, and A. Fratalocchi, “Perfect secrecy cryptography via mixing of chaotic waves in irreversible time-varying silicon chips,” Nat. Commun. 10, 5827 (2019).
[Crossref]

2018 (4)

T. W. Hughes, M. Minkov, Y. Shi, and S. Fan, “Training of photonic neural networks through in situ backpropagation and gradient measurement,” Optica 5, 864–871 (2018).
[Crossref]

J. Chang, V. Sitzmann, X. Dun, W. Heidrich, and G. Wetzstein, “Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification,” Sci. Rep. 8, 12324 (2018).
[Crossref]

J. Bueno, S. Maktoobi, L. Froehly, I. Fischer, M. Jacquot, L. Larger, and D. Brunner, “Reinforcement learning in a large-scale photonic recurrent neural network,” Optica 5, 756–760 (2018).
[Crossref]

X. Lin, Y. Rivenson, N. T. Yardimci, M. Veli, Y. Luo, M. Jarrahi, and A. Ozcan, “All-optical machine learning using diffractive deep neural networks,” Science 361, 1004–1008 (2018).
[Crossref]

2017 (5)

C. Gallicchio, A. Micheli, and L. Pedrelli, “Deep reservoir computing: a critical experimental analysis,” Neurocomputing 268, 87–99 (2017).
[Crossref]

L. Larger, A. Baylón-Fuentes, R. Martinenghi, V. S. Udaltsov, Y. K. Chembo, and M. Jacquot, “High-speed photonic reservoir computing using a time-delay based architecture: million words per second classification,” Phys. Rev. X 7, 011015 (2017).
[Crossref]

A. N. Tait, T. F. de Lima, E. Zhou, A. X. Wu, M.-A. Nahmias, B. J. Shastri, and P. R. Prucnal, “Neuromorphic photonic networks using silicon photonic weight banks,” Sci. Rep. 7, 7430 (2017).
[Crossref]

Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljacic, “Deep learning with coherent nanophotonic circuits,” Nat. Photonics 11, 441–446 (2017).
[Crossref]

S. H. Rudy, S. L. Brunton, J. L. Proctor, and J. N. Kutz, “Data-driven discovery of partial differential equations,” Sci. Adv. 3, e1602614 (2017).
[Crossref]

2016 (2)

P. L. McMahon, A. Marandi, Y. Haribara, R. Hamerly, C. Langrock, S. Tamate, T. Inagaki, H. Takesue, S. Utsunomiya, K. Aihara, R. L. Byer, M. M. Fejer, H. Mabuchi, and Y. Yamamoto, “A fully-programmable 100-spin coherent Ising machine with all-to-all connections,” Science 354, 614–617 (2016).
[Crossref]

T. Inagaki, Y. Haribara, K. Igarashi, T. Sonobe, S. Tamate, T. Honjo, A. Marandi, P. L. McMahon, T. Umeki, K. Enbutsu, O. Tadanaga, H. Takenouchi, K. Aihara, K.-I. Kawarabayashi, K. Inoue, S. Utsunomiya, and H. Takesue, “A coherent Ising machine for 2000-node optimization problems,” Science 354, 603–606 (2016).
[Crossref]

2015 (1)

2013 (2)

D. Brunner, M. C. Soriano, C. Mirasso, and I. Fischer, “Parallel photonic information processing at gigabyte per second data rates using transient states,” Nat. Commun. 4, 1364 (2013).
[Crossref]

L. C. C. Kasun, H. Zhou, G. B. Huang, and C. M. Vong, “Representational learning with extreme learning machine for big data,” IEEE Intell. Syst. 28, 31–34 (2013).
[Crossref]

2012 (1)

G. B. Huang, H. Zhou, X. Ding, and R. Zhang, “Extreme learning machine for regression and multiclass classification,” IEEE Trans. Syst., Man, Cybern. B, Cybern. 42, 513–529 (2012).
[Crossref]

2009 (1)

M. Lukosevicius and H. Jaeger, “Reservoir computing approaches to recurrent neural network training,” Comput. Sci. Rev. 3, 127–149 (2009).
[Crossref]

2006 (1)

G. B. Huang, Q. Y. Zhu, and C. K. Siew, “Extreme learning machine: theory and applications,” Neurocomputing 70, 489–501 (2006).
[Crossref]

1999 (1)

J. A. K. Suykens and J. Vandewalle, “Least squares support vector machine classifiers,” Neural Process. Lett. 9, 293–300 (1999).
[Crossref]

1998 (1)

Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proc. IEEE 86, 2278–2324 (1998).
[Crossref]

1994 (1)

Y. H. Pao, G. H. Park, and D. J. Sobajic, “Learning and generalization characteristics of the random vector functional-link net,” Neurocomputing 6, 163–180 (1994).
[Crossref]

1988 (1)

1985 (1)

Aihara, K.

P. L. McMahon, A. Marandi, Y. Haribara, R. Hamerly, C. Langrock, S. Tamate, T. Inagaki, H. Takesue, S. Utsunomiya, K. Aihara, R. L. Byer, M. M. Fejer, H. Mabuchi, and Y. Yamamoto, “A fully-programmable 100-spin coherent Ising machine with all-to-all connections,” Science 354, 614–617 (2016).
[Crossref]

T. Inagaki, Y. Haribara, K. Igarashi, T. Sonobe, S. Tamate, T. Honjo, A. Marandi, P. L. McMahon, T. Umeki, K. Enbutsu, O. Tadanaga, H. Takenouchi, K. Aihara, K.-I. Kawarabayashi, K. Inoue, S. Utsunomiya, and H. Takesue, “A coherent Ising machine for 2000-node optimization problems,” Science 354, 603–606 (2016).
[Crossref]

Amo, A.

K. P. Kalinin, A. Amo, J. Bloch, and N. G. Berloff, “Polaritonic XY-Ising machine,” Nanophotonics 9, 4127–4138 (2020).
[Crossref]

An, S.

S. An, W. Liu, and S. Venkatesh, “Face recognition using kernel ridge regression,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2007), pp. 1–7.

Antonik, P.

P. Antonik, N. Marsal, D. Brunner, and D. Rontani, “Human action recognition with a large-scale brain-inspired photonic computer,” Nat. Mach. Intell. 1, 530–537 (2019).
[Crossref]

Baehr-Jones, T.

M. Prabhu, C. Roques-Carmes, Y. Shen, N. Harris, L. Jing, J. Carolan, R. Hamerly, T. Baehr-Jones, M. Hochberg, V. Čeperić, J. D. Joannopoulos, D. R. Englund, and M. Soljačić, “Accelerating recurrent Ising machines in photonic integrated circuits,” Optica 7, 551–558 (2020).
[Crossref]

Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljacic, “Deep learning with coherent nanophotonic circuits,” Nat. Photonics 11, 441–446 (2017).
[Crossref]

Bardet, P. M.

Barmparis, G. D.

G. Neofotistos, M. Mattheakis, G. D. Barmparis, J. Hizanidis, G. P. Tsironis, and E. Kaxiras, “Machine learning with observers predicts complex spatiotemporal behavior,” Front. Phys. 7, 24 (2019).
[Crossref]

Barrett, T. D.

Baylón-Fuentes, A.

L. Larger, A. Baylón-Fuentes, R. Martinenghi, V. S. Udaltsov, Y. K. Chembo, and M. Jacquot, “High-speed photonic reservoir computing using a time-delay based architecture: million words per second classification,” Phys. Rev. X 7, 011015 (2017).
[Crossref]

Bengio, Y.

Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proc. IEEE 86, 2278–2324 (1998).
[Crossref]

Berloff, N. G.

K. P. Kalinin, A. Amo, J. Bloch, and N. G. Berloff, “Polaritonic XY-Ising machine,” Nanophotonics 9, 4127–4138 (2020).
[Crossref]

Bernstein, L.

R. Hamerly, L. Bernstein, A. Sludds, M. Soljačić, and D. Englund, “Large-scale optical neural networks based on photoelectric multiplication,” Phys. Rev. X 9, 021032 (2019).
[Crossref]

Bhaskaran, H.

J. Feldmann, N. Youngblood, M. Karpov, H. Gehring, X. Li, M. Stappers, M. Le Gallo, X. Fu, A. Lukashchuk, A. S. Raja, J. Liu, C. D. Wright, A. Sebastian, T. J. Kippenberg, W. H. P. Pernice, and H. Bhaskaran, “Parallel convolutional processing using an integrated photonic tensor core,” Nature 589, 52–58 (2021).
[Crossref]

J. Feldmann, N. Youngblood, C. D. Wright, H. Bhaskaran, and W. H. P. Pernice, “All-optical spiking neurosynaptic networks with self-learning capabilities,” Nature 569, 208–214 (2019).
[Crossref]

Bienstman, P.

Bloch, J.

K. P. Kalinin, A. Amo, J. Bloch, and N. G. Berloff, “Polaritonic XY-Ising machine,” Nanophotonics 9, 4127–4138 (2020).
[Crossref]

Boes, A.

X. Xu, M. Tan, B. Corcoran, J. Wu, A. Boes, T. G. Nguyen, S. T. Chu, B. E. Little, D. G. Hicks, R. Morandotti, A. Mitchell, and D. J. Moss, “11 TOPS photonic convolutional accelerator for optical neural networks,” Nature 589, 44–51 (2021).
[Crossref]

X. Xu, M. Tan, B. Corcoran, J. Wu, T. G. Nguyen, A. Boes, S. T. Chu, B. E. Little, R. Morandotti, A. Mitchell, D. G. Hicks, and D. J. Moss, “Photonic perceptron based on a Kerr microcomb for high-speed, scalable, optical neural networks,” Laser Photon. Rev. 14, 2000070 (2020).
[Crossref]

Böhm, F.

F. Böhm, G. Verschaffelt, and G. Van der Sande, “A poor—coherent Ising machine based on opto-electronic feedback systems for solving optimization problems,” Nat. Commun. 10, 3538 (2019).
[Crossref]

Bottou, L.

Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proc. IEEE 86, 2278–2324 (1998).
[Crossref]

Boyd, R. W.

Brady, D.

Braverman, B.

Brunner, D.

G. Genty, L. Salmela, J. M. Dudley, D. Brunner, A. Kokhanovskiy, S. Kobtsev, and S. K. Turitsyn, “Machine learning and applications in ultrafast photonics,” Nat. Photonics 15, 91–101 (2020).
[Crossref]

P. Antonik, N. Marsal, D. Brunner, and D. Rontani, “Human action recognition with a large-scale brain-inspired photonic computer,” Nat. Mach. Intell. 1, 530–537 (2019).
[Crossref]

J. Bueno, S. Maktoobi, L. Froehly, I. Fischer, M. Jacquot, L. Larger, and D. Brunner, “Reinforcement learning in a large-scale photonic recurrent neural network,” Optica 5, 756–760 (2018).
[Crossref]

D. Brunner, M. C. Soriano, C. Mirasso, and I. Fischer, “Parallel photonic information processing at gigabyte per second data rates using transient states,” Nat. Commun. 4, 1364 (2013).
[Crossref]

Brunton, S. L.

S. H. Rudy, S. L. Brunton, J. L. Proctor, and J. N. Kutz, “Data-driven discovery of partial differential equations,” Sci. Adv. 3, e1602614 (2017).
[Crossref]

Bueno, J.

Byer, R. L.

P. L. McMahon, A. Marandi, Y. Haribara, R. Hamerly, C. Langrock, S. Tamate, T. Inagaki, H. Takesue, S. Utsunomiya, K. Aihara, R. L. Byer, M. M. Fejer, H. Mabuchi, and Y. Yamamoto, “A fully-programmable 100-spin coherent Ising machine with all-to-all connections,” Science 354, 614–617 (2016).
[Crossref]

Cai, H.

H. Zhang, M. Gu, X. D. Jiang, J. Thompson, H. Cai, S. Paesani, R. Santagati, A. Laing, Y. Zhang, M. H. Yung, Y. Z. Shi, F. K. Muhammad, G. Q. Lo, X. S. Luo, B. Dong, D. L. Kwong, L. C. Kwek, and A. Q. Liu, “An optical neural chip for implementing complex-valued neural network,” Nat. Commun. 12, 457 (2021).
[Crossref]

Caltagirone, F.

A. Saade, F. Caltagirone, I. Carron, L. Daudet, A. Dremeau, S. Gigan, and F. Krzakala, “Random projections through multiple optical scattering: approximating kernels at the speed of light,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2016), pp. 6215–6219.

Capanna, R.

Carolan, J.

Carron, I.

A. Saade, F. Caltagirone, I. Carron, L. Daudet, A. Dremeau, S. Gigan, and F. Krzakala, “Random projections through multiple optical scattering: approximating kernels at the speed of light,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2016), pp. 6215–6219.

Ceperic, V.

Chang, J.

J. Chang, V. Sitzmann, X. Dun, W. Heidrich, and G. Wetzstein, “Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification,” Sci. Rep. 8, 12324 (2018).
[Crossref]

Chembo, Y. K.

L. Larger, A. Baylón-Fuentes, R. Martinenghi, V. S. Udaltsov, Y. K. Chembo, and M. Jacquot, “High-speed photonic reservoir computing using a time-delay based architecture: million words per second classification,” Phys. Rev. X 7, 011015 (2017).
[Crossref]

Chen, P.

Chen, Y. C.

Chu, S. T.

X. Xu, M. Tan, B. Corcoran, J. Wu, A. Boes, T. G. Nguyen, S. T. Chu, B. E. Little, D. G. Hicks, R. Morandotti, A. Mitchell, and D. J. Moss, “11 TOPS photonic convolutional accelerator for optical neural networks,” Nature 589, 44–51 (2021).
[Crossref]

X. Xu, M. Tan, B. Corcoran, J. Wu, T. G. Nguyen, A. Boes, S. T. Chu, B. E. Little, R. Morandotti, A. Mitchell, D. G. Hicks, and D. J. Moss, “Photonic perceptron based on a Kerr microcomb for high-speed, scalable, optical neural networks,” Laser Photon. Rev. 14, 2000070 (2020).
[Crossref]

Conti, C.

D. Pierangeli, M. Rafayelyan, C. Conti, and S. Gigan, “Scalable spin-glass optical simulator,” Phys. Rev. Appl. 15, 034087 (2021).
[Crossref]

G. Marcucci, D. Pierangeli, and C. Conti, “Theory of neuromorphic computing by waves: machine learning by rogue waves, dispersive shocks, and solitons,” Phys. Rev. Lett. 125, 093901 (2020).
[Crossref]

D. Pierangeli, V. Palmieri, G. Marcucci, C. Moriconi, G. Perini, M. De Spirito, M. Papi, and C. Conti, “Living optical random neural network with three dimensional tumor spheroids for cancer morphodynamics,” Commun. Phys. 3, 160 (2020).
[Crossref]

D. Pierangeli, G. Marcucci, and C. Conti, “Adiabatic evolution on a spatial-photonic Ising machine,” Optica 7, 1535–1543 (2020).
[Crossref]

D. Pierangeli, G. Marcucci, and C. Conti, “Large-scale photonic Ising machine by spatial light modulation,” Phys. Rev. Lett. 122, 213902 (2019).
[Crossref]

Corcoran, B.

X. Xu, M. Tan, B. Corcoran, J. Wu, A. Boes, T. G. Nguyen, S. T. Chu, B. E. Little, D. G. Hicks, R. Morandotti, A. Mitchell, and D. J. Moss, “11 TOPS photonic convolutional accelerator for optical neural networks,” Nature 589, 44–51 (2021).
[Crossref]

X. Xu, M. Tan, B. Corcoran, J. Wu, T. G. Nguyen, A. Boes, S. T. Chu, B. E. Little, R. Morandotti, A. Mitchell, D. G. Hicks, and D. J. Moss, “Photonic perceptron based on a Kerr microcomb for high-speed, scalable, optical neural networks,” Laser Photon. Rev. 14, 2000070 (2020).
[Crossref]

Cristianini, N.

J. Shawe-Taylor and N. Cristianini, Kernel Methods for Pattern Analysis (Cambridge University, 2004).

Cruz, A.

A. Di Falco, V. Mazzone, A. Cruz, and A. Fratalocchi, “Perfect secrecy cryptography via mixing of chaotic waves in irreversible time-varying silicon chips,” Nat. Commun. 10, 5827 (2019).
[Crossref]

Dai, Q.

T. Yan, J. Wu, T. Zhou, H. Xie, F. Xu, J. Fan, L. Fang, X. Lin, and Q. Dai, “Fourier-space diffractive deep neural network,” Phys. Rev. Lett. 123, 023901 (2019).
[Crossref]

Daudet, L.

A. Saade, F. Caltagirone, I. Carron, L. Daudet, A. Dremeau, S. Gigan, and F. Krzakala, “Random projections through multiple optical scattering: approximating kernels at the speed of light,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2016), pp. 6215–6219.

de Lima, T. F.

A. N. Tait, T. F. de Lima, E. Zhou, A. X. Wu, M.-A. Nahmias, B. J. Shastri, and P. R. Prucnal, “Neuromorphic photonic networks using silicon photonic weight banks,” Sci. Rep. 7, 7430 (2017).
[Crossref]

De Spirito, M.

D. Pierangeli, V. Palmieri, G. Marcucci, C. Moriconi, G. Perini, M. De Spirito, M. Papi, and C. Conti, “Living optical random neural network with three dimensional tumor spheroids for cancer morphodynamics,” Commun. Phys. 3, 160 (2020).
[Crossref]

Denz, C.

G. Wetzstein, A. Ozcan, S. Gigan, S. Fan, D. Englund, M. Soljačić, C. Denz, D. A. B. Miller, and D. Psaltis, “Inference in artificial intelligence with deep optics and photonics,” Nature 588, 39–47 (2020).
[Crossref]

C. Denz, Optical Neural Networks (Springer, 1998).

Di Falco, A.

A. Di Falco, V. Mazzone, A. Cruz, and A. Fratalocchi, “Perfect secrecy cryptography via mixing of chaotic waves in irreversible time-varying silicon chips,” Nat. Commun. 10, 5827 (2019).
[Crossref]

Ding, X.

G. B. Huang, H. Zhou, X. Ding, and R. Zhang, “Extreme learning machine for regression and multiclass classification,” IEEE Trans. Syst., Man, Cybern. B, Cybern. 42, 513–529 (2012).
[Crossref]

Dong, B.

H. Zhang, M. Gu, X. D. Jiang, J. Thompson, H. Cai, S. Paesani, R. Santagati, A. Laing, Y. Zhang, M. H. Yung, Y. Z. Shi, F. K. Muhammad, G. Q. Lo, X. S. Luo, B. Dong, D. L. Kwong, L. C. Kwek, and A. Q. Liu, “An optical neural chip for implementing complex-valued neural network,” Nat. Commun. 12, 457 (2021).
[Crossref]

Dong, J.

M. Rafayelyan, J. Dong, Y. Tan, F. Krzakala, and S. Gigan, “Large-scale optical reservoir computing for spatiotemporal chaotic systems prediction,” Phys. Rev. X 10, 041037 (2020).
[Crossref]

Dremeau, A.

A. Saade, F. Caltagirone, I. Carron, L. Daudet, A. Dremeau, S. Gigan, and F. Krzakala, “Random projections through multiple optical scattering: approximating kernels at the speed of light,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2016), pp. 6215–6219.

Du, S.

Dudley, J. M.

G. Genty, L. Salmela, J. M. Dudley, D. Brunner, A. Kokhanovskiy, S. Kobtsev, and S. K. Turitsyn, “Machine learning and applications in ultrafast photonics,” Nat. Photonics 15, 91–101 (2020).
[Crossref]

Duin, R. P.

W. F. Schmidt, M. A. Kraaijveld, and R. P. Duin, “Feed forward neural networks with random weights,” in International Conference on Pattern Recognition (IEEE Computer Society, 1992).

Dun, X.

J. Chang, V. Sitzmann, X. Dun, W. Heidrich, and G. Wetzstein, “Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification,” Sci. Rep. 8, 12324 (2018).
[Crossref]

Duport, F.

Edwards, B.

N. M. Estakhri, B. Edwards, and N. Engheta, “Inverse-designed metastructures that solve equations,” Science 363, 1333–1338 (2019).
[Crossref]

Enbutsu, K.

T. Inagaki, Y. Haribara, K. Igarashi, T. Sonobe, S. Tamate, T. Honjo, A. Marandi, P. L. McMahon, T. Umeki, K. Enbutsu, O. Tadanaga, H. Takenouchi, K. Aihara, K.-I. Kawarabayashi, K. Inoue, S. Utsunomiya, and H. Takesue, “A coherent Ising machine for 2000-node optimization problems,” Science 354, 603–606 (2016).
[Crossref]

Engheta, N.

N. M. Estakhri, B. Edwards, and N. Engheta, “Inverse-designed metastructures that solve equations,” Science 363, 1333–1338 (2019).
[Crossref]

Englund, D.

G. Wetzstein, A. Ozcan, S. Gigan, S. Fan, D. Englund, M. Soljačić, C. Denz, D. A. B. Miller, and D. Psaltis, “Inference in artificial intelligence with deep optics and photonics,” Nature 588, 39–47 (2020).
[Crossref]

R. Hamerly, L. Bernstein, A. Sludds, M. Soljačić, and D. Englund, “Large-scale optical neural networks based on photoelectric multiplication,” Phys. Rev. X 9, 021032 (2019).
[Crossref]

Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljacic, “Deep learning with coherent nanophotonic circuits,” Nat. Photonics 11, 441–446 (2017).
[Crossref]

Englund, D. R.

Estakhri, N. M.

N. M. Estakhri, B. Edwards, and N. Engheta, “Inverse-designed metastructures that solve equations,” Science 363, 1333–1338 (2019).
[Crossref]

Fan, J.

T. Yan, J. Wu, T. Zhou, H. Xie, F. Xu, J. Fan, L. Fang, X. Lin, and Q. Dai, “Fourier-space diffractive deep neural network,” Phys. Rev. Lett. 123, 023901 (2019).
[Crossref]

Fan, S.

G. Wetzstein, A. Ozcan, S. Gigan, S. Fan, D. Englund, M. Soljačić, C. Denz, D. A. B. Miller, and D. Psaltis, “Inference in artificial intelligence with deep optics and photonics,” Nature 588, 39–47 (2020).
[Crossref]

T. W. Hughes, I. A. Williamson, M. Minkov, and S. Fan, “Wave physics as an analog recurrent neural network,” Sci. Adv. 5, eaay6946 (2019).
[Crossref]

T. W. Hughes, M. Minkov, Y. Shi, and S. Fan, “Training of photonic neural networks through in situ backpropagation and gradient measurement,” Optica 5, 864–871 (2018).
[Crossref]

Fang, L.

T. Yan, J. Wu, T. Zhou, H. Xie, F. Xu, J. Fan, L. Fang, X. Lin, and Q. Dai, “Fourier-space diffractive deep neural network,” Phys. Rev. Lett. 123, 023901 (2019).
[Crossref]

Farhat, N. H.

Fejer, M. M.

P. L. McMahon, A. Marandi, Y. Haribara, R. Hamerly, C. Langrock, S. Tamate, T. Inagaki, H. Takesue, S. Utsunomiya, K. Aihara, R. L. Byer, M. M. Fejer, H. Mabuchi, and Y. Yamamoto, “A fully-programmable 100-spin coherent Ising machine with all-to-all connections,” Science 354, 614–617 (2016).
[Crossref]

Feldmann, J.

J. Feldmann, N. Youngblood, M. Karpov, H. Gehring, X. Li, M. Stappers, M. Le Gallo, X. Fu, A. Lukashchuk, A. S. Raja, J. Liu, C. D. Wright, A. Sebastian, T. J. Kippenberg, W. H. P. Pernice, and H. Bhaskaran, “Parallel convolutional processing using an integrated photonic tensor core,” Nature 589, 52–58 (2021).
[Crossref]

J. Feldmann, N. Youngblood, C. D. Wright, H. Bhaskaran, and W. H. P. Pernice, “All-optical spiking neurosynaptic networks with self-learning capabilities,” Nature 569, 208–214 (2019).
[Crossref]

Fischer, I.

J. Bueno, S. Maktoobi, L. Froehly, I. Fischer, M. Jacquot, L. Larger, and D. Brunner, “Reinforcement learning in a large-scale photonic recurrent neural network,” Optica 5, 756–760 (2018).
[Crossref]

D. Brunner, M. C. Soriano, C. Mirasso, and I. Fischer, “Parallel photonic information processing at gigabyte per second data rates using transient states,” Nat. Commun. 4, 1364 (2013).
[Crossref]

Fratalocchi, A.

A. Di Falco, V. Mazzone, A. Cruz, and A. Fratalocchi, “Perfect secrecy cryptography via mixing of chaotic waves in irreversible time-varying silicon chips,” Nat. Commun. 10, 5827 (2019).
[Crossref]

Froehly, L.

Fu, X.

J. Feldmann, N. Youngblood, M. Karpov, H. Gehring, X. Li, M. Stappers, M. Le Gallo, X. Fu, A. Lukashchuk, A. S. Raja, J. Liu, C. D. Wright, A. Sebastian, T. J. Kippenberg, W. H. P. Pernice, and H. Bhaskaran, “Parallel convolutional processing using an integrated photonic tensor core,” Nature 589, 52–58 (2021).
[Crossref]

Gaeta, A. L.

Y. Okawachi, M. Yu, J. K. Jang, X. Ji, Y. Zhao, B. Y. Kim, M. Lipson, and A. L. Gaeta, “Demonstration of chip-based coupled degenerate optical parametric oscillators for realizing a nanophotonic spin-glass,” Nat. Commun. 11, 4119 (2020).
[Crossref]

Gallicchio, C.

C. Gallicchio, A. Micheli, and L. Pedrelli, “Deep reservoir computing: a critical experimental analysis,” Neurocomputing 268, 87–99 (2017).
[Crossref]

Ganesh, A.

E. Strubell, A. Ganesh, and A. McCallum, “Energy and policy considerations for deep learning in NLP,” arXiv:1906.02243 (2019).

Gehring, H.

J. Feldmann, N. Youngblood, M. Karpov, H. Gehring, X. Li, M. Stappers, M. Le Gallo, X. Fu, A. Lukashchuk, A. S. Raja, J. Liu, C. D. Wright, A. Sebastian, T. J. Kippenberg, W. H. P. Pernice, and H. Bhaskaran, “Parallel convolutional processing using an integrated photonic tensor core,” Nature 589, 52–58 (2021).
[Crossref]

Genty, G.

G. Genty, L. Salmela, J. M. Dudley, D. Brunner, A. Kokhanovskiy, S. Kobtsev, and S. K. Turitsyn, “Machine learning and applications in ultrafast photonics,” Nat. Photonics 15, 91–101 (2020).
[Crossref]

George, J.

Gigan, S.

D. Pierangeli, M. Rafayelyan, C. Conti, and S. Gigan, “Scalable spin-glass optical simulator,” Phys. Rev. Appl. 15, 034087 (2021).
[Crossref]

M. Rafayelyan, J. Dong, Y. Tan, F. Krzakala, and S. Gigan, “Large-scale optical reservoir computing for spatiotemporal chaotic systems prediction,” Phys. Rev. X 10, 041037 (2020).
[Crossref]

G. Wetzstein, A. Ozcan, S. Gigan, S. Fan, D. Englund, M. Soljačić, C. Denz, D. A. B. Miller, and D. Psaltis, “Inference in artificial intelligence with deep optics and photonics,” Nature 588, 39–47 (2020).
[Crossref]

A. Saade, F. Caltagirone, I. Carron, L. Daudet, A. Dremeau, S. Gigan, and F. Krzakala, “Random projections through multiple optical scattering: approximating kernels at the speed of light,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2016), pp. 6215–6219.

Goodman, J. W.

J. W. Goodman, Introduction to Fourier Optics (Roberts & Company, 2005).

Gregory, S.

D. Piccinotti, K. F. MacDonald, S. Gregory, I. Youngs, and N. I. Zheludev, “Artificial intelligence for photonics and photonic materials,” Rep. Prog. Phys. 84, 012401 (2021).
[Crossref]

Gu, M.

H. Zhang, M. Gu, X. D. Jiang, J. Thompson, H. Cai, S. Paesani, R. Santagati, A. Laing, Y. Zhang, M. H. Yung, Y. Z. Shi, F. K. Muhammad, G. Q. Lo, X. S. Luo, B. Dong, D. L. Kwong, L. C. Kwek, and A. Q. Liu, “An optical neural chip for implementing complex-valued neural network,” Nat. Commun. 12, 457 (2021).
[Crossref]

Guo, X.

Gupta, P.

Haelterman, M.

Haffner, P.

Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proc. IEEE 86, 2278–2324 (1998).
[Crossref]

Hamerly, R.

M. Prabhu, C. Roques-Carmes, Y. Shen, N. Harris, L. Jing, J. Carolan, R. Hamerly, T. Baehr-Jones, M. Hochberg, V. Čeperić, J. D. Joannopoulos, D. R. Englund, and M. Soljačić, “Accelerating recurrent Ising machines in photonic integrated circuits,” Optica 7, 551–558 (2020).
[Crossref]

R. Hamerly, L. Bernstein, A. Sludds, M. Soljačić, and D. Englund, “Large-scale optical neural networks based on photoelectric multiplication,” Phys. Rev. X 9, 021032 (2019).
[Crossref]

P. L. McMahon, A. Marandi, Y. Haribara, R. Hamerly, C. Langrock, S. Tamate, T. Inagaki, H. Takesue, S. Utsunomiya, K. Aihara, R. L. Byer, M. M. Fejer, H. Mabuchi, and Y. Yamamoto, “A fully-programmable 100-spin coherent Ising machine with all-to-all connections,” Science 354, 614–617 (2016).
[Crossref]

Haribara, Y.

P. L. McMahon, A. Marandi, Y. Haribara, R. Hamerly, C. Langrock, S. Tamate, T. Inagaki, H. Takesue, S. Utsunomiya, K. Aihara, R. L. Byer, M. M. Fejer, H. Mabuchi, and Y. Yamamoto, “A fully-programmable 100-spin coherent Ising machine with all-to-all connections,” Science 354, 614–617 (2016).
[Crossref]

T. Inagaki, Y. Haribara, K. Igarashi, T. Sonobe, S. Tamate, T. Honjo, A. Marandi, P. L. McMahon, T. Umeki, K. Enbutsu, O. Tadanaga, H. Takenouchi, K. Aihara, K.-I. Kawarabayashi, K. Inoue, S. Utsunomiya, and H. Takesue, “A coherent Ising machine for 2000-node optimization problems,” Science 354, 603–606 (2016).
[Crossref]

Harris, N.

Harris, N. C.

Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljacic, “Deep learning with coherent nanophotonic circuits,” Nat. Photonics 11, 441–446 (2017).
[Crossref]

Haykin, S.

S. Haykin, Neural Networks and Learning Machines (Pearson Prentice Hall, 2008).

Heidrich, W.

J. Chang, V. Sitzmann, X. Dun, W. Heidrich, and G. Wetzstein, “Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification,” Sci. Rep. 8, 12324 (2018).
[Crossref]

Hicks, D. G.

X. Xu, M. Tan, B. Corcoran, J. Wu, A. Boes, T. G. Nguyen, S. T. Chu, B. E. Little, D. G. Hicks, R. Morandotti, A. Mitchell, and D. J. Moss, “11 TOPS photonic convolutional accelerator for optical neural networks,” Nature 589, 44–51 (2021).
[Crossref]

X. Xu, M. Tan, B. Corcoran, J. Wu, T. G. Nguyen, A. Boes, S. T. Chu, B. E. Little, R. Morandotti, A. Mitchell, D. G. Hicks, and D. J. Moss, “Photonic perceptron based on a Kerr microcomb for high-speed, scalable, optical neural networks,” Laser Photon. Rev. 14, 2000070 (2020).
[Crossref]

Hizanidis, J.

G. Neofotistos, M. Mattheakis, G. D. Barmparis, J. Hizanidis, G. P. Tsironis, and E. Kaxiras, “Machine learning with observers predicts complex spatiotemporal behavior,” Front. Phys. 7, 24 (2019).
[Crossref]

Hochberg, M.

M. Prabhu, C. Roques-Carmes, Y. Shen, N. Harris, L. Jing, J. Carolan, R. Hamerly, T. Baehr-Jones, M. Hochberg, V. Čeperić, J. D. Joannopoulos, D. R. Englund, and M. Soljačić, “Accelerating recurrent Ising machines in photonic integrated circuits,” Optica 7, 551–558 (2020).
[Crossref]

Huang, G. B.

L. C. C. Kasun, H. Zhou, G. B. Huang, and C. M. Vong, “Representational learning with extreme learning machine for big data,” IEEE Intell. Syst. 28, 31–34 (2013).
[Crossref]

G. B. Huang, H. Zhou, X. Ding, and R. Zhang, “Extreme learning machine for regression and multiclass classification,” IEEE Trans. Syst., Man, Cybern. B, Cybern. 42, 513–529 (2012).
[Crossref]

G. B. Huang, Q. Y. Zhu, and C. K. Siew, “Extreme learning machine: theory and applications,” Neurocomputing 70, 489–501 (2006).
[Crossref]

Hughes, T. W.

T. W. Hughes, I. A. Williamson, M. Minkov, and S. Fan, “Wave physics as an analog recurrent neural network,” Sci. Adv. 5, eaay6946 (2019).
[Crossref]

T. W. Hughes, M. Minkov, Y. Shi, and S. Fan, “Training of photonic neural networks through in situ backpropagation and gradient measurement,” Optica 5, 864–871 (2018).
[Crossref]

Jacquot, M.

J. Bueno, S. Maktoobi, L. Froehly, I. Fischer, M. Jacquot, L. Larger, and D. Brunner, “Reinforcement learning in a large-scale photonic recurrent neural network,” Optica 5, 756–760 (2018).
[Crossref]

L. Larger, A. Baylón-Fuentes, R. Martinenghi, V. S. Udaltsov, Y. K. Chembo, and M. Jacquot, “High-speed photonic reservoir computing using a time-delay based architecture: million words per second classification,” Phys. Rev. X 7, 011015 (2017).
[Crossref]

Jaeger, H.

M. Lukosevicius and H. Jaeger, “Reservoir computing approaches to recurrent neural network training,” Comput. Sci. Rev. 3, 127–149 (2009).
[Crossref]

Jang, J. K.

Y. Okawachi, M. Yu, J. K. Jang, X. Ji, Y. Zhao, B. Y. Kim, M. Lipson, and A. L. Gaeta, “Demonstration of chip-based coupled degenerate optical parametric oscillators for realizing a nanophotonic spin-glass,” Nat. Commun. 11, 4119 (2020).
[Crossref]

Jarrahi, M.

Y. Luo, D. Mengu, N. T. Yardimci, Y. Rivenson, M. Veli, M. Jarrahi, and A. Ozcan, “Design of task-specific optical systems using broadband diffractive neural networks,” Light Sci. Appl. 8, 112 (2019).
[Crossref]

X. Lin, Y. Rivenson, N. T. Yardimci, M. Veli, Y. Luo, M. Jarrahi, and A. Ozcan, “All-optical machine learning using diffractive deep neural networks,” Science 361, 1004–1008 (2018).
[Crossref]

Jaurigue, L.

A. Röhm, L. Jaurigue, and K. Lüdge, “Reservoir computing using laser networks,” IEEE J. Sel. Top. Quantum Electron. 26, 7700108 (2020).
[Crossref]

Jiang, X. D.

H. Zhang, M. Gu, X. D. Jiang, J. Thompson, H. Cai, S. Paesani, R. Santagati, A. Laing, Y. Zhang, M. H. Yung, Y. Z. Shi, F. K. Muhammad, G. Q. Lo, X. S. Luo, B. Dong, D. L. Kwong, L. C. Kwek, and A. Q. Liu, “An optical neural chip for implementing complex-valued neural network,” Nat. Commun. 12, 457 (2021).
[Crossref]

Kalinin, K. P.

K. P. Kalinin, A. Amo, J. Bloch, and N. G. Berloff, “Polaritonic XY-Ising machine,” Nanophotonics 9, 4127–4138 (2020).
[Crossref]

Karpov, M.

J. Feldmann, N. Youngblood, M. Karpov, H. Gehring, X. Li, M. Stappers, M. Le Gallo, X. Fu, A. Lukashchuk, A. S. Raja, J. Liu, C. D. Wright, A. Sebastian, T. J. Kippenberg, W. H. P. Pernice, and H. Bhaskaran, “Parallel convolutional processing using an integrated photonic tensor core,” Nature 589, 52–58 (2021).
[Crossref]

Kobtsev, S.

G. Genty, L. Salmela, J. M. Dudley, D. Brunner, A. Kokhanovskiy, S. Kobtsev, and S. K. Turitsyn, “Machine learning and applications in ultrafast photonics,” Nat. Photonics 15, 91–101 (2020).
[Crossref]

Kraaijveld, M. A.

W. F. Schmidt, M. A. Kraaijveld, and R. P. Duin, “Feed forward neural networks with random weights,” in International Conference on Pattern Recognition (IEEE Computer Society, 1992).

Krzakala, F.

M. Rafayelyan, J. Dong, Y. Tan, F. Krzakala, and S. Gigan, “Large-scale optical reservoir computing for spatiotemporal chaotic systems prediction,” Phys. Rev. X 10, 041037 (2020).
[Crossref]

A. Saade, F. Caltagirone, I. Carron, L. Daudet, A. Dremeau, S. Gigan, and F. Krzakala, “Random projections through multiple optical scattering: approximating kernels at the speed of light,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2016), pp. 6215–6219.

Kutz, J. N.

S. H. Rudy, S. L. Brunton, J. L. Proctor, and J. N. Kutz, “Data-driven discovery of partial differential equations,” Sci. Adv. 3, e1602614 (2017).
[Crossref]

Labouesse, S.

O. Tzang, E. Niv, S. Singh, S. Labouesse, G. Myatt, and R. Piestun, “Wavefront shaping in complex media with a 350 kHz modulator via a 1D-to-2D transform,” Nat. Photonics 13, 788–793 (2019).
[Crossref]

Lecun, Y.

Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proc. IEEE 86, 2278–2324 (1998).
[Crossref]

Lee, S.

C. Wu, H. Yu, S. Lee, R. Peng, I. Takeuchi, and M. Li, “Programmable phase-change metasurfaces on waveguides for multimode photonic convolutional neural network,” Nat. Commun. 12, 96 (2021).
[Crossref]

Lin, X.

T. Yan, J. Wu, T. Zhou, H. Xie, F. Xu, J. Fan, L. Fang, X. Lin, and Q. Dai, “Fourier-space diffractive deep neural network,” Phys. Rev. Lett. 123, 023901 (2019).
[Crossref]

Liu, J.

Y. Zuo, B. Li, Y. Zhao, Y. Jiang, Y. C. Chen, P. Chen, G. B. Jo, J. Liu, and S. Du, “All-optical neural network with nonlinear activation functions,” Optica 6, 1132–1137 (2019).
[Crossref]

Liu, W.

S. An, W. Liu, and S. Venkatesh, “Face recognition using kernel ridge regression,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2007), pp. 1–7.

MacDonald, K. F.

D. Piccinotti, K. F. MacDonald, S. Gregory, I. Youngs, and N. I. Zheludev, “Artificial intelligence for photonics and photonic materials,” Rep. Prog. Phys. 84, 012401 (2021).
[Crossref]

Marcucci, G.

D. Pierangeli, G. Marcucci, and C. Conti, “Adiabatic evolution on a spatial-photonic Ising machine,” Optica 7, 1535–1543 (2020).
[Crossref]

G. Marcucci, D. Pierangeli, and C. Conti, “Theory of neuromorphic computing by waves: machine learning by rogue waves, dispersive shocks, and solitons,” Phys. Rev. Lett. 125, 093901 (2020).
[Crossref]

D. Pierangeli, V. Palmieri, G. Marcucci, C. Moriconi, G. Perini, M. De Spirito, M. Papi, and C. Conti, “Living optical random neural network with three dimensional tumor spheroids for cancer morphodynamics,” Commun. Phys. 3, 160 (2020).
[Crossref]

D. Pierangeli, G. Marcucci, and C. Conti, “Large-scale photonic Ising machine by spatial light modulation,” Phys. Rev. Lett. 122, 213902 (2019).
[Crossref]

Marsal, N.

P. Antonik, N. Marsal, D. Brunner, and D. Rontani, “Human action recognition with a large-scale brain-inspired photonic computer,” Nat. Mach. Intell. 1, 530–537 (2019).
[Crossref]

Mazzone, V.

A. Di Falco, V. Mazzone, A. Cruz, and A. Fratalocchi, “Perfect secrecy cryptography via mixing of chaotic waves in irreversible time-varying silicon chips,” Nat. Commun. 10, 5827 (2019).
[Crossref]

McCallum, A.

E. Strubell, A. Ganesh, and A. McCallum, “Energy and policy considerations for deep learning in NLP,” arXiv:1906.02243 (2019).

Micheli, A.

C. Gallicchio, A. Micheli, and L. Pedrelli, “Deep reservoir computing: a critical experimental analysis,” Neurocomputing 268, 87–99 (2017).
[Crossref]

Miller, D. A. B.

G. Wetzstein, A. Ozcan, S. Gigan, S. Fan, D. Englund, M. Soljačić, C. Denz, D. A. B. Miller, and D. Psaltis, “Inference in artificial intelligence with deep optics and photonics,” Nature 588, 39–47 (2020).
[Crossref]

Mirasso, C.

D. Brunner, M. C. Soriano, C. Mirasso, and I. Fischer, “Parallel photonic information processing at gigabyte per second data rates using transient states,” Nat. Commun. 4, 1364 (2013).
[Crossref]

Moser, C.

U. Tegin, M. Yildirim, I. Oguz, C. Moser, and D. Psaltis, “Scalable optical learning operator,” arXiv:2012.12404 (2020).

Nahmias, M.-A.

A. N. Tait, T. F. de Lima, E. Zhou, A. X. Wu, M.-A. Nahmias, B. J. Shastri, and P. R. Prucnal, “Neuromorphic photonic networks using silicon photonic weight banks,” Sci. Rep. 7, 7430 (2017).
[Crossref]

Pao, Y. H.

Y. H. Pao, G. H. Park, and D. J. Sobajic, “Learning and generalization characteristics of the random vector functional-link net,” Neurocomputing 6, 163–180 (1994).
[Crossref]

Pernice, W. H. P.

J. Feldmann, N. Youngblood, C. D. Wright, H. Bhaskaran, and W. H. P. Pernice, “All-optical spiking neurosynaptic networks with self-learning capabilities,” Nature 569, 208–214 (2019).
[Crossref]

Pierangeli, D.

D. Pierangeli, M. Rafayelyan, C. Conti, and S. Gigan, “Scalable spin-glass optical simulator,” Phys. Rev. Appl. 15, 034087 (2021).
[Crossref]

Psaltis, D.

D. Psaltis, D. Brady, and K. Wagner, “Adaptive optical networks using photorefractive crystals,” Appl. Opt. 27, 1752–1759 (1988).
[Crossref]

N. H. Farhat, D. Psaltis, A. Prata, and E. Paek, “Optical implementation of the Hopfield model,” Appl. Opt. 24, 1469–1475 (1985).
[Crossref]


Sebastian, A.

J. Feldmann, N. Youngblood, M. Karpov, H. Gehring, X. Li, M. Stappers, M. Le Gallo, X. Fu, A. Lukashchuk, A. S. Raja, J. Liu, C. D. Wright, A. Sebastian, T. J. Kippenberg, W. H. P. Pernice, and H. Bhaskaran, “Parallel convolutional processing using an integrated photonic tensor core,” Nature 589, 52–58 (2021).
[Crossref]

Shastri, B. J.

A. N. Tait, T. F. de Lima, E. Zhou, A. X. Wu, M. A. Nahmias, B. J. Shastri, and P. R. Prucnal, “Neuromorphic photonic networks using silicon photonic weight banks,” Sci. Rep. 7, 7430 (2017).
[Crossref]

Shaw, T. J.

Shawe-Taylor, J.

J. Shawe-Taylor and N. Cristianini, Kernel Methods for Pattern Analysis (Cambridge University, 2004).

Shen, Y.

M. Prabhu, C. Roques-Carmes, Y. Shen, N. Harris, L. Jing, J. Carolan, R. Hamerly, T. Baehr-Jones, M. Hochberg, V. Čeperić, J. D. Joannopoulos, D. R. Englund, and M. Soljačić, “Accelerating recurrent Ising machines in photonic integrated circuits,” Optica 7, 551–558 (2020).
[Crossref]

Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljačić, “Deep learning with coherent nanophotonic circuits,” Nat. Photonics 11, 441–446 (2017).
[Crossref]

Shi, Y.

Shi, Y. Z.

H. Zhang, M. Gu, X. D. Jiang, J. Thompson, H. Cai, S. Paesani, R. Santagati, A. Laing, Y. Zhang, M. H. Yung, Y. Z. Shi, F. K. Muhammad, G. Q. Lo, X. S. Luo, B. Dong, D. L. Kwong, L. C. Kwek, and A. Q. Liu, “An optical neural chip for implementing complex-valued neural network,” Nat. Commun. 12, 457 (2021).
[Crossref]

Siew, C. K.

G. B. Huang, Q. Y. Zhu, and C. K. Siew, “Extreme learning machine: theory and applications,” Neurocomputing 70, 489–501 (2006).
[Crossref]

Singh, S.

O. Tzang, E. Niv, S. Singh, S. Labouesse, G. Myatt, and R. Piestun, “Wavefront shaping in complex media with a 350 kHz modulator via a 1D-to-2D transform,” Nat. Photonics 13, 788–793 (2019).
[Crossref]

Sitzmann, V.

J. Chang, V. Sitzmann, X. Dun, W. Heidrich, and G. Wetzstein, “Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification,” Sci. Rep. 8, 12324 (2018).
[Crossref]

Skerjanc, A.

Skirlo, S.

Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljačić, “Deep learning with coherent nanophotonic circuits,” Nat. Photonics 11, 441–446 (2017).
[Crossref]

Sludds, A.

R. Hamerly, L. Bernstein, A. Sludds, M. Soljačić, and D. Englund, “Large-scale optical neural networks based on photoelectric multiplication,” Phys. Rev. X 9, 021032 (2019).
[Crossref]

Smerieri, A.

Sobajic, D. J.

Y. H. Pao, G. H. Park, and D. J. Sobajic, “Learning and generalization characteristics of the random vector functional-link net,” Neurocomputing 6, 163–180 (1994).
[Crossref]

Soljacic, M.

G. Wetzstein, A. Ozcan, S. Gigan, S. Fan, D. Englund, M. Soljačić, C. Denz, D. A. B. Miller, and D. Psaltis, “Inference in artificial intelligence with deep optics and photonics,” Nature 588, 39–47 (2020).
[Crossref]

M. Prabhu, C. Roques-Carmes, Y. Shen, N. Harris, L. Jing, J. Carolan, R. Hamerly, T. Baehr-Jones, M. Hochberg, V. Čeperić, J. D. Joannopoulos, D. R. Englund, and M. Soljačić, “Accelerating recurrent Ising machines in photonic integrated circuits,” Optica 7, 551–558 (2020).
[Crossref]

R. Hamerly, L. Bernstein, A. Sludds, M. Soljačić, and D. Englund, “Large-scale optical neural networks based on photoelectric multiplication,” Phys. Rev. X 9, 021032 (2019).
[Crossref]

Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljačić, “Deep learning with coherent nanophotonic circuits,” Nat. Photonics 11, 441–446 (2017).
[Crossref]

Sonobe, T.

T. Inagaki, Y. Haribara, K. Igarashi, T. Sonobe, S. Tamate, T. Honjo, A. Marandi, P. L. McMahon, T. Umeki, K. Enbutsu, O. Tadanaga, H. Takenouchi, K. Aihara, K.-I. Kawarabayashi, K. Inoue, S. Utsunomiya, and H. Takesue, “A coherent Ising machine for 2000-node optimization problems,” Science 354, 603–606 (2016).
[Crossref]

Sorger, V. J.

Soriano, M. C.

D. Brunner, M. C. Soriano, C. Mirasso, and I. Fischer, “Parallel photonic information processing at gigabyte per second data rates using transient states,” Nat. Commun. 4, 1364 (2013).
[Crossref]

Spall, J.

Stappers, M.

J. Feldmann, N. Youngblood, M. Karpov, H. Gehring, X. Li, M. Stappers, M. Le Gallo, X. Fu, A. Lukashchuk, A. S. Raja, J. Liu, C. D. Wright, A. Sebastian, T. J. Kippenberg, W. H. P. Pernice, and H. Bhaskaran, “Parallel convolutional processing using an integrated photonic tensor core,” Nature 589, 52–58 (2021).
[Crossref]

Strubell, E.

E. Strubell, A. Ganesh, and A. McCallum, “Energy and policy considerations for deep learning in NLP,” arXiv:1906.02243 (2019).

Sullivan, N.

Sun, X.

Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljačić, “Deep learning with coherent nanophotonic circuits,” Nat. Photonics 11, 441–446 (2017).
[Crossref]

Sunada, S.

Suykens, J. A. K.

J. A. K. Suykens and J. Vandewalle, “Least squares support vector machine classifiers,” Neural Process. Lett. 9, 293–300 (1999).
[Crossref]

Tadanaga, O.

T. Inagaki, Y. Haribara, K. Igarashi, T. Sonobe, S. Tamate, T. Honjo, A. Marandi, P. L. McMahon, T. Umeki, K. Enbutsu, O. Tadanaga, H. Takenouchi, K. Aihara, K.-I. Kawarabayashi, K. Inoue, S. Utsunomiya, and H. Takesue, “A coherent Ising machine for 2000-node optimization problems,” Science 354, 603–606 (2016).
[Crossref]

Tait, A. N.

A. N. Tait, T. F. de Lima, E. Zhou, A. X. Wu, M. A. Nahmias, B. J. Shastri, and P. R. Prucnal, “Neuromorphic photonic networks using silicon photonic weight banks,” Sci. Rep. 7, 7430 (2017).
[Crossref]

Takenouchi, H.

T. Inagaki, Y. Haribara, K. Igarashi, T. Sonobe, S. Tamate, T. Honjo, A. Marandi, P. L. McMahon, T. Umeki, K. Enbutsu, O. Tadanaga, H. Takenouchi, K. Aihara, K.-I. Kawarabayashi, K. Inoue, S. Utsunomiya, and H. Takesue, “A coherent Ising machine for 2000-node optimization problems,” Science 354, 603–606 (2016).
[Crossref]

Takesue, H.

P. L. McMahon, A. Marandi, Y. Haribara, R. Hamerly, C. Langrock, S. Tamate, T. Inagaki, H. Takesue, S. Utsunomiya, K. Aihara, R. L. Byer, M. M. Fejer, H. Mabuchi, and Y. Yamamoto, “A fully-programmable 100-spin coherent Ising machine with all-to-all connections,” Science 354, 614–617 (2016).
[Crossref]

T. Inagaki, Y. Haribara, K. Igarashi, T. Sonobe, S. Tamate, T. Honjo, A. Marandi, P. L. McMahon, T. Umeki, K. Enbutsu, O. Tadanaga, H. Takenouchi, K. Aihara, K.-I. Kawarabayashi, K. Inoue, S. Utsunomiya, and H. Takesue, “A coherent Ising machine for 2000-node optimization problems,” Science 354, 603–606 (2016).
[Crossref]

Takeuchi, I.

C. Wu, H. Yu, S. Lee, R. Peng, I. Takeuchi, and M. Li, “Programmable phase-change metasurfaces on waveguides for multimode photonic convolutional neural network,” Nat. Commun. 12, 96 (2021).
[Crossref]

Tamate, S.

T. Inagaki, Y. Haribara, K. Igarashi, T. Sonobe, S. Tamate, T. Honjo, A. Marandi, P. L. McMahon, T. Umeki, K. Enbutsu, O. Tadanaga, H. Takenouchi, K. Aihara, K.-I. Kawarabayashi, K. Inoue, S. Utsunomiya, and H. Takesue, “A coherent Ising machine for 2000-node optimization problems,” Science 354, 603–606 (2016).
[Crossref]

P. L. McMahon, A. Marandi, Y. Haribara, R. Hamerly, C. Langrock, S. Tamate, T. Inagaki, H. Takesue, S. Utsunomiya, K. Aihara, R. L. Byer, M. M. Fejer, H. Mabuchi, and Y. Yamamoto, “A fully-programmable 100-spin coherent Ising machine with all-to-all connections,” Science 354, 614–617 (2016).
[Crossref]

Tan, M.

X. Xu, M. Tan, B. Corcoran, J. Wu, A. Boes, T. G. Nguyen, S. T. Chu, B. E. Little, D. G. Hicks, R. Morandotti, A. Mitchell, and D. J. Moss, “11 TOPS photonic convolutional accelerator for optical neural networks,” Nature 589, 44–51 (2021).
[Crossref]

X. Xu, M. Tan, B. Corcoran, J. Wu, T. G. Nguyen, A. Boes, S. T. Chu, B. E. Little, R. Morandotti, A. Mitchell, D. G. Hicks, and D. J. Moss, “Photonic perceptron based on a Kerr microcomb for high-speed, scalable, optical neural networks,” Laser Photon. Rev. 14, 2000070 (2020).
[Crossref]

Tan, Y.

M. Rafayelyan, J. Dong, Y. Tan, F. Krzakala, and S. Gigan, “Large-scale optical reservoir computing for spatiotemporal chaotic systems prediction,” Phys. Rev. X 10, 041037 (2020).
[Crossref]

Tegin, U.

U. Tegin, M. Yildirim, I. Oguz, C. Moser, and D. Psaltis, “Scalable optical learning operator,” arXiv:2012.12404 (2020).

Thompson, J.

H. Zhang, M. Gu, X. D. Jiang, J. Thompson, H. Cai, S. Paesani, R. Santagati, A. Laing, Y. Zhang, M. H. Yung, Y. Z. Shi, F. K. Muhammad, G. Q. Lo, X. S. Luo, B. Dong, D. L. Kwong, L. C. Kwek, and A. Q. Liu, “An optical neural chip for implementing complex-valued neural network,” Nat. Commun. 12, 457 (2021).
[Crossref]

Tsironis, G. P.

G. Neofotistos, M. Mattheakis, G. D. Barmparis, J. Hizanidis, G. P. Tsironis, and E. Kaxiras, “Machine learning with observers predicts complex spatiotemporal behavior,” Front. Phys. 7, 24 (2019).
[Crossref]

Turitsyn, S. K.

G. Genty, L. Salmela, J. M. Dudley, D. Brunner, A. Kokhanovskiy, S. Kobtsev, and S. K. Turitsyn, “Machine learning and applications in ultrafast photonics,” Nat. Photonics 15, 91–101 (2020).
[Crossref]

Tzang, O.

O. Tzang, E. Niv, S. Singh, S. Labouesse, G. Myatt, and R. Piestun, “Wavefront shaping in complex media with a 350 kHz modulator via a 1D-to-2D transform,” Nat. Photonics 13, 788–793 (2019).
[Crossref]

Uchida, A.

Udaltsov, V. S.

L. Larger, A. Baylón-Fuentes, R. Martinenghi, V. S. Udaltsov, Y. K. Chembo, and M. Jacquot, “High-speed photonic reservoir computing using a time-delay based architecture: million words per second classification,” Phys. Rev. X 7, 011015 (2017).
[Crossref]

Umeki, T.

T. Inagaki, Y. Haribara, K. Igarashi, T. Sonobe, S. Tamate, T. Honjo, A. Marandi, P. L. McMahon, T. Umeki, K. Enbutsu, O. Tadanaga, H. Takenouchi, K. Aihara, K.-I. Kawarabayashi, K. Inoue, S. Utsunomiya, and H. Takesue, “A coherent Ising machine for 2000-node optimization problems,” Science 354, 603–606 (2016).
[Crossref]

Utsunomiya, S.

P. L. McMahon, A. Marandi, Y. Haribara, R. Hamerly, C. Langrock, S. Tamate, T. Inagaki, H. Takesue, S. Utsunomiya, K. Aihara, R. L. Byer, M. M. Fejer, H. Mabuchi, and Y. Yamamoto, “A fully-programmable 100-spin coherent Ising machine with all-to-all connections,” Science 354, 614–617 (2016).
[Crossref]

T. Inagaki, Y. Haribara, K. Igarashi, T. Sonobe, S. Tamate, T. Honjo, A. Marandi, P. L. McMahon, T. Umeki, K. Enbutsu, O. Tadanaga, H. Takenouchi, K. Aihara, K.-I. Kawarabayashi, K. Inoue, S. Utsunomiya, and H. Takesue, “A coherent Ising machine for 2000-node optimization problems,” Science 354, 603–606 (2016).
[Crossref]

Valley, G. C.

Van der Sande, G.

F. Böhm, G. Verschaffelt, and G. Van der Sande, “A poor man’s coherent Ising machine based on opto-electronic feedback systems for solving optimization problems,” Nat. Commun. 10, 3538 (2019).
[Crossref]

Vandewalle, J.

J. A. K. Suykens and J. Vandewalle, “Least squares support vector machine classifiers,” Neural Process. Lett. 9, 293–300 (1999).
[Crossref]

Vandoorne, K.

Veli, M.

Y. Luo, D. Mengu, N. T. Yardimci, Y. Rivenson, M. Veli, M. Jarrahi, and A. Ozcan, “Design of task-specific optical systems using broadband diffractive neural networks,” Light Sci. Appl. 8, 112 (2019).
[Crossref]

X. Lin, Y. Rivenson, N. T. Yardimci, M. Veli, Y. Luo, M. Jarrahi, and A. Ozcan, “All-optical machine learning using diffractive deep neural networks,” Science 361, 1004–1008 (2018).
[Crossref]

Venkatesh, S.

S. An, W. Liu, and S. Venkatesh, “Face recognition using kernel ridge regression,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2007), pp. 1–7.

Verschaffelt, G.

F. Böhm, G. Verschaffelt, and G. Van der Sande, “A poor man’s coherent Ising machine based on opto-electronic feedback systems for solving optimization problems,” Nat. Commun. 10, 3538 (2019).
[Crossref]

Vinckier, Q.

Vong, C. M.

L. C. C. Kasun, H. Zhou, G. B. Huang, and C. M. Vong, “Representational learning with extreme learning machine for big data,” IEEE Intell. Syst. 28, 31–34 (2013).
[Crossref]

Wagner, K.

Wetzstein, G.

G. Wetzstein, A. Ozcan, S. Gigan, S. Fan, D. Englund, M. Soljačić, C. Denz, D. A. B. Miller, and D. Psaltis, “Inference in artificial intelligence with deep optics and photonics,” Nature 588, 39–47 (2020).
[Crossref]

J. Chang, V. Sitzmann, X. Dun, W. Heidrich, and G. Wetzstein, “Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification,” Sci. Rep. 8, 12324 (2018).
[Crossref]

Williamson, I. A.

T. W. Hughes, I. A. Williamson, M. Minkov, and S. Fan, “Wave physics as an analog recurrent neural network,” Sci. Adv. 5, eaay6946 (2019).
[Crossref]

Wright, C. D.

J. Feldmann, N. Youngblood, M. Karpov, H. Gehring, X. Li, M. Stappers, M. Le Gallo, X. Fu, A. Lukashchuk, A. S. Raja, J. Liu, C. D. Wright, A. Sebastian, T. J. Kippenberg, W. H. P. Pernice, and H. Bhaskaran, “Parallel convolutional processing using an integrated photonic tensor core,” Nature 589, 52–58 (2021).
[Crossref]

J. Feldmann, N. Youngblood, C. D. Wright, H. Bhaskaran, and W. H. P. Pernice, “All-optical spiking neurosynaptic networks with self-learning capabilities,” Nature 569, 208–214 (2019).
[Crossref]

Wu, A. X.

A. N. Tait, T. F. de Lima, E. Zhou, A. X. Wu, M. A. Nahmias, B. J. Shastri, and P. R. Prucnal, “Neuromorphic photonic networks using silicon photonic weight banks,” Sci. Rep. 7, 7430 (2017).
[Crossref]

Wu, C.

C. Wu, H. Yu, S. Lee, R. Peng, I. Takeuchi, and M. Li, “Programmable phase-change metasurfaces on waveguides for multimode photonic convolutional neural network,” Nat. Commun. 12, 96 (2021).
[Crossref]

Wu, J.

X. Xu, M. Tan, B. Corcoran, J. Wu, A. Boes, T. G. Nguyen, S. T. Chu, B. E. Little, D. G. Hicks, R. Morandotti, A. Mitchell, and D. J. Moss, “11 TOPS photonic convolutional accelerator for optical neural networks,” Nature 589, 44–51 (2021).
[Crossref]

X. Xu, M. Tan, B. Corcoran, J. Wu, T. G. Nguyen, A. Boes, S. T. Chu, B. E. Little, R. Morandotti, A. Mitchell, D. G. Hicks, and D. J. Moss, “Photonic perceptron based on a Kerr microcomb for high-speed, scalable, optical neural networks,” Laser Photon. Rev. 14, 2000070 (2020).
[Crossref]

T. Yan, J. Wu, T. Zhou, H. Xie, F. Xu, J. Fan, L. Fang, X. Lin, and Q. Dai, “Fourier-space diffractive deep neural network,” Phys. Rev. Lett. 123, 023901 (2019).
[Crossref]

Xie, H.

T. Yan, J. Wu, T. Zhou, H. Xie, F. Xu, J. Fan, L. Fang, X. Lin, and Q. Dai, “Fourier-space diffractive deep neural network,” Phys. Rev. Lett. 123, 023901 (2019).
[Crossref]

Xu, F.

T. Yan, J. Wu, T. Zhou, H. Xie, F. Xu, J. Fan, L. Fang, X. Lin, and Q. Dai, “Fourier-space diffractive deep neural network,” Phys. Rev. Lett. 123, 023901 (2019).
[Crossref]

Xu, X.

X. Xu, M. Tan, B. Corcoran, J. Wu, A. Boes, T. G. Nguyen, S. T. Chu, B. E. Little, D. G. Hicks, R. Morandotti, A. Mitchell, and D. J. Moss, “11 TOPS photonic convolutional accelerator for optical neural networks,” Nature 589, 44–51 (2021).
[Crossref]

X. Xu, M. Tan, B. Corcoran, J. Wu, T. G. Nguyen, A. Boes, S. T. Chu, B. E. Little, R. Morandotti, A. Mitchell, D. G. Hicks, and D. J. Moss, “Photonic perceptron based on a Kerr microcomb for high-speed, scalable, optical neural networks,” Laser Photon. Rev. 14, 2000070 (2020).
[Crossref]

Yamamoto, Y.

P. L. McMahon, A. Marandi, Y. Haribara, R. Hamerly, C. Langrock, S. Tamate, T. Inagaki, H. Takesue, S. Utsunomiya, K. Aihara, R. L. Byer, M. M. Fejer, H. Mabuchi, and Y. Yamamoto, “A fully-programmable 100-spin coherent Ising machine with all-to-all connections,” Science 354, 614–617 (2016).
[Crossref]

Yan, T.

T. Yan, J. Wu, T. Zhou, H. Xie, F. Xu, J. Fan, L. Fang, X. Lin, and Q. Dai, “Fourier-space diffractive deep neural network,” Phys. Rev. Lett. 123, 023901 (2019).
[Crossref]

Yardimci, N. T.

Y. Luo, D. Mengu, N. T. Yardimci, Y. Rivenson, M. Veli, M. Jarrahi, and A. Ozcan, “Design of task-specific optical systems using broadband diffractive neural networks,” Light Sci. Appl. 8, 112 (2019).
[Crossref]

X. Lin, Y. Rivenson, N. T. Yardimci, M. Veli, Y. Luo, M. Jarrahi, and A. Ozcan, “All-optical machine learning using diffractive deep neural networks,” Science 361, 1004–1008 (2018).
[Crossref]

Yildirim, M.

U. Tegin, M. Yildirim, I. Oguz, C. Moser, and D. Psaltis, “Scalable optical learning operator,” arXiv:2012.12404 (2020).

Youngblood, N.

J. Feldmann, N. Youngblood, M. Karpov, H. Gehring, X. Li, M. Stappers, M. Le Gallo, X. Fu, A. Lukashchuk, A. S. Raja, J. Liu, C. D. Wright, A. Sebastian, T. J. Kippenberg, W. H. P. Pernice, and H. Bhaskaran, “Parallel convolutional processing using an integrated photonic tensor core,” Nature 589, 52–58 (2021).
[Crossref]

J. Feldmann, N. Youngblood, C. D. Wright, H. Bhaskaran, and W. H. P. Pernice, “All-optical spiking neurosynaptic networks with self-learning capabilities,” Nature 569, 208–214 (2019).
[Crossref]

Youngs, I.

D. Piccinotti, K. F. MacDonald, S. Gregory, I. Youngs, and N. I. Zheludev, “Artificial intelligence for photonics and photonic materials,” Rep. Prog. Phys. 84, 012401 (2021).
[Crossref]

Yu, H.

C. Wu, H. Yu, S. Lee, R. Peng, I. Takeuchi, and M. Li, “Programmable phase-change metasurfaces on waveguides for multimode photonic convolutional neural network,” Nat. Commun. 12, 96 (2021).
[Crossref]

Yu, M.

Y. Okawachi, M. Yu, J. K. Jang, X. Ji, Y. Zhao, B. Y. Kim, M. Lipson, and A. L. Gaeta, “Demonstration of chip-based coupled degenerate optical parametric oscillators for realizing a nanophotonic spin-glass,” Nat. Commun. 11, 4119 (2020).
[Crossref]

Yung, M. H.

H. Zhang, M. Gu, X. D. Jiang, J. Thompson, H. Cai, S. Paesani, R. Santagati, A. Laing, Y. Zhang, M. H. Yung, Y. Z. Shi, F. K. Muhammad, G. Q. Lo, X. S. Luo, B. Dong, D. L. Kwong, L. C. Kwek, and A. Q. Liu, “An optical neural chip for implementing complex-valued neural network,” Nat. Commun. 12, 457 (2021).
[Crossref]

Zhang, H.

H. Zhang, M. Gu, X. D. Jiang, J. Thompson, H. Cai, S. Paesani, R. Santagati, A. Laing, Y. Zhang, M. H. Yung, Y. Z. Shi, F. K. Muhammad, G. Q. Lo, X. S. Luo, B. Dong, D. L. Kwong, L. C. Kwek, and A. Q. Liu, “An optical neural chip for implementing complex-valued neural network,” Nat. Commun. 12, 457 (2021).
[Crossref]

Zhang, R.

G. B. Huang, H. Zhou, X. Ding, and R. Zhang, “Extreme learning machine for regression and multiclass classification,” IEEE Trans. Syst., Man, Cybern. B, Cybern. 42, 513–529 (2012).
[Crossref]

Zhang, Y.

H. Zhang, M. Gu, X. D. Jiang, J. Thompson, H. Cai, S. Paesani, R. Santagati, A. Laing, Y. Zhang, M. H. Yung, Y. Z. Shi, F. K. Muhammad, G. Q. Lo, X. S. Luo, B. Dong, D. L. Kwong, L. C. Kwek, and A. Q. Liu, “An optical neural chip for implementing complex-valued neural network,” Nat. Commun. 12, 457 (2021).
[Crossref]

Zhao, S.

Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljačić, “Deep learning with coherent nanophotonic circuits,” Nat. Photonics 11, 441–446 (2017).
[Crossref]

Zhao, Y.

Y. Okawachi, M. Yu, J. K. Jang, X. Ji, Y. Zhao, B. Y. Kim, M. Lipson, and A. L. Gaeta, “Demonstration of chip-based coupled degenerate optical parametric oscillators for realizing a nanophotonic spin-glass,” Nat. Commun. 11, 4119 (2020).
[Crossref]

Y. Zuo, B. Li, Y. Zhao, Y. Jiang, Y. C. Chen, P. Chen, G. B. Jo, J. Liu, and S. Du, “All-optical neural network with nonlinear activation functions,” Optica 6, 1132–1137 (2019).
[Crossref]

Zheludev, N. I.

D. Piccinotti, K. F. MacDonald, S. Gregory, I. Youngs, and N. I. Zheludev, “Artificial intelligence for photonics and photonic materials,” Rep. Prog. Phys. 84, 012401 (2021).
[Crossref]

Zhou, E.

A. N. Tait, T. F. de Lima, E. Zhou, A. X. Wu, M.-A. Nahmias, B. J. Shastri, and P. R. Prucnal, “Neuromorphic photonic networks using silicon photonic weight banks,” Sci. Rep. 7, 7430 (2017).
[Crossref]

Zhou, H.

L. C. C. Kasun, H. Zhou, G. B. Huang, and C. M. Vong, “Representational learning with extreme learning machine for big data,” IEEE Intell. Syst. 28, 31–34 (2013).
[Crossref]

G. B. Huang, H. Zhou, X. Ding, and R. Zhang, “Extreme learning machine for regression and multiclass classification,” IEEE Trans. Syst., Man, Cybern. B, Cybern. 42, 513–529 (2012).
[Crossref]

Zhou, T.

T. Yan, J. Wu, T. Zhou, H. Xie, F. Xu, J. Fan, L. Fang, X. Lin, and Q. Dai, “Fourier-space diffractive deep neural network,” Phys. Rev. Lett. 123, 023901 (2019).
[Crossref]

Zhu, Q. Y.

G. B. Huang, Q. Y. Zhu, and C. K. Siew, “Extreme learning machine: theory and applications,” Neurocomputing 70, 489–501 (2006).
[Crossref]

Zuo, Y.

Appl. Opt. (2)

Commun. Phys. (1)

D. Pierangeli, V. Palmieri, G. Marcucci, C. Moriconi, G. Perini, M. De Spirito, M. Papi, and C. Conti, “Living optical random neural network with three dimensional tumor spheroids for cancer morphodynamics,” Commun. Phys. 3, 160 (2020).
[Crossref]

Comput. Sci. Rev. (1)

M. Lukoševičius and H. Jaeger, “Reservoir computing approaches to recurrent neural network training,” Comput. Sci. Rev. 3, 127–149 (2009).
[Crossref]

Front. Phys. (1)

G. Neofotistos, M. Mattheakis, G. D. Barmparis, J. Hizanidis, G. P. Tsironis, and E. Kaxiras, “Machine learning with observers predicts complex spatiotemporal behavior,” Front. Phys. 7, 24 (2019).
[Crossref]

IEEE Intell. Syst. (1)

L. C. C. Kasun, H. Zhou, G. B. Huang, and C. M. Vong, “Representational learning with extreme learning machine for big data,” IEEE Intell. Syst. 28, 31–34 (2013).
[Crossref]

IEEE J. Sel. Top. Quantum Electron. (1)

A. Röhm, L. Jaurigue, and K. Lüdge, “Reservoir computing using laser networks,” IEEE J. Sel. Top. Quantum Electron. 26, 7700108 (2020).
[Crossref]

IEEE Trans. Syst., Man, Cybern. B, Cybern. (1)

G. B. Huang, H. Zhou, X. Ding, and R. Zhang, “Extreme learning machine for regression and multiclass classification,” IEEE Trans. Syst., Man, Cybern. B, Cybern. 42, 513–529 (2012).
[Crossref]

Laser Photon. Rev. (1)

X. Xu, M. Tan, B. Corcoran, J. Wu, T. G. Nguyen, A. Boes, S. T. Chu, B. E. Little, R. Morandotti, A. Mitchell, D. G. Hicks, and D. J. Moss, “Photonic perceptron based on a Kerr microcomb for high-speed, scalable, optical neural networks,” Laser Photon. Rev. 14, 2000070 (2020).
[Crossref]

Light Sci. Appl. (1)

Y. Luo, D. Mengu, N. T. Yardimci, Y. Rivenson, M. Veli, M. Jarrahi, and A. Ozcan, “Design of task-specific optical systems using broadband diffractive neural networks,” Light Sci. Appl. 8, 112 (2019).
[Crossref]

Nanophotonics (1)

K. P. Kalinin, A. Amo, J. Bloch, and N. G. Berloff, “Polaritonic XY-Ising machine,” Nanophotonics 9, 4127–4138 (2020).
[Crossref]

Nat. Commun. (6)

Y. Okawachi, M. Yu, J. K. Jang, X. Ji, Y. Zhao, B. Y. Kim, M. Lipson, and A. L. Gaeta, “Demonstration of chip-based coupled degenerate optical parametric oscillators for realizing a nanophotonic spin-glass,” Nat. Commun. 11, 4119 (2020).
[Crossref]

A. Di Falco, V. Mazzone, A. Cruz, and A. Fratalocchi, “Perfect secrecy cryptography via mixing of chaotic waves in irreversible time-varying silicon chips,” Nat. Commun. 10, 5827 (2019).
[Crossref]

F. Böhm, G. Verschaffelt, and G. Van der Sande, “A poor man’s coherent Ising machine based on opto-electronic feedback systems for solving optimization problems,” Nat. Commun. 10, 3538 (2019).
[Crossref]

H. Zhang, M. Gu, X. D. Jiang, J. Thompson, H. Cai, S. Paesani, R. Santagati, A. Laing, Y. Zhang, M. H. Yung, Y. Z. Shi, F. K. Muhammad, G. Q. Lo, X. S. Luo, B. Dong, D. L. Kwong, L. C. Kwek, and A. Q. Liu, “An optical neural chip for implementing complex-valued neural network,” Nat. Commun. 12, 457 (2021).
[Crossref]

C. Wu, H. Yu, S. Lee, R. Peng, I. Takeuchi, and M. Li, “Programmable phase-change metasurfaces on waveguides for multimode photonic convolutional neural network,” Nat. Commun. 12, 96 (2021).
[Crossref]

D. Brunner, M. C. Soriano, C. Mirasso, and I. Fischer, “Parallel photonic information processing at gigabyte per second data rates using transient states,” Nat. Commun. 4, 1364 (2013).
[Crossref]

Nat. Mach. Intell. (1)

P. Antonik, N. Marsal, D. Brunner, and D. Rontani, “Human action recognition with a large-scale brain-inspired photonic computer,” Nat. Mach. Intell. 1, 530–537 (2019).
[Crossref]

Nat. Photonics (3)

Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljačić, “Deep learning with coherent nanophotonic circuits,” Nat. Photonics 11, 441–446 (2017).
[Crossref]

O. Tzang, E. Niv, S. Singh, S. Labouesse, G. Myatt, and R. Piestun, “Wavefront shaping in complex media with a 350 kHz modulator via a 1D-to-2D transform,” Nat. Photonics 13, 788–793 (2019).
[Crossref]

G. Genty, L. Salmela, J. M. Dudley, D. Brunner, A. Kokhanovskiy, S. Kobtsev, and S. K. Turitsyn, “Machine learning and applications in ultrafast photonics,” Nat. Photonics 15, 91–101 (2020).
[Crossref]

Nature (4)

G. Wetzstein, A. Ozcan, S. Gigan, S. Fan, D. Englund, M. Soljačić, C. Denz, D. A. B. Miller, and D. Psaltis, “Inference in artificial intelligence with deep optics and photonics,” Nature 588, 39–47 (2020).
[Crossref]

J. Feldmann, N. Youngblood, C. D. Wright, H. Bhaskaran, and W. H. P. Pernice, “All-optical spiking neurosynaptic networks with self-learning capabilities,” Nature 569, 208–214 (2019).
[Crossref]

J. Feldmann, N. Youngblood, M. Karpov, H. Gehring, X. Li, M. Stappers, M. Le Gallo, X. Fu, A. Lukashchuk, A. S. Raja, J. Liu, C. D. Wright, A. Sebastian, T. J. Kippenberg, W. H. P. Pernice, and H. Bhaskaran, “Parallel convolutional processing using an integrated photonic tensor core,” Nature 589, 52–58 (2021).
[Crossref]

X. Xu, M. Tan, B. Corcoran, J. Wu, A. Boes, T. G. Nguyen, S. T. Chu, B. E. Little, D. G. Hicks, R. Morandotti, A. Mitchell, and D. J. Moss, “11 TOPS photonic convolutional accelerator for optical neural networks,” Nature 589, 44–51 (2021).
[Crossref]

Neural Process. Lett. (1)

J. A. K. Suykens and J. Vandewalle, “Least squares support vector machine classifiers,” Neural Process. Lett. 9, 293–300 (1999).
[Crossref]

Neurocomputing (3)

Y. H. Pao, G. H. Park, and D. J. Sobajic, “Learning and generalization characteristics of the random vector functional-link net,” Neurocomputing 6, 163–180 (1994).
[Crossref]

G. B. Huang, Q. Y. Zhu, and C. K. Siew, “Extreme learning machine: theory and applications,” Neurocomputing 70, 489–501 (2006).
[Crossref]

C. Gallicchio, A. Micheli, and L. Pedrelli, “Deep reservoir computing: a critical experimental analysis,” Neurocomputing 268, 87–99 (2017).
[Crossref]

Opt. Express (3)

Opt. Lett. (1)

Optica (7)

Phys. Rev. Appl. (1)

D. Pierangeli, M. Rafayelyan, C. Conti, and S. Gigan, “Scalable spin-glass optical simulator,” Phys. Rev. Appl. 15, 034087 (2021).
[Crossref]

Phys. Rev. Lett. (3)

G. Marcucci, D. Pierangeli, and C. Conti, “Theory of neuromorphic computing by waves: machine learning by rogue waves, dispersive shocks, and solitons,” Phys. Rev. Lett. 125, 093901 (2020).
[Crossref]

D. Pierangeli, G. Marcucci, and C. Conti, “Large-scale photonic Ising machine by spatial light modulation,” Phys. Rev. Lett. 122, 213902 (2019).
[Crossref]

T. Yan, J. Wu, T. Zhou, H. Xie, F. Xu, J. Fan, L. Fang, X. Lin, and Q. Dai, “Fourier-space diffractive deep neural network,” Phys. Rev. Lett. 123, 023901 (2019).
[Crossref]

Phys. Rev. X (3)

L. Larger, A. Baylón-Fuentes, R. Martinenghi, V. S. Udaltsov, Y. K. Chembo, and M. Jacquot, “High-speed photonic reservoir computing using a time-delay based architecture: million words per second classification,” Phys. Rev. X 7, 011015 (2017).
[Crossref]

R. Hamerly, L. Bernstein, A. Sludds, M. Soljačić, and D. Englund, “Large-scale optical neural networks based on photoelectric multiplication,” Phys. Rev. X 9, 021032 (2019).
[Crossref]

M. Rafayelyan, J. Dong, Y. Tan, F. Krzakala, and S. Gigan, “Large-scale optical reservoir computing for spatiotemporal chaotic systems prediction,” Phys. Rev. X 10, 041037 (2020).
[Crossref]

Proc. IEEE (1)

Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proc. IEEE 86, 2278–2324 (1998).
[Crossref]

Rep. Prog. Phys. (1)

D. Piccinotti, K. F. MacDonald, S. Gregory, I. Youngs, and N. I. Zheludev, “Artificial intelligence for photonics and photonic materials,” Rep. Prog. Phys. 84, 012401 (2021).
[Crossref]

Sci. Adv. (2)

S. H. Rudy, S. L. Brunton, J. L. Proctor, and J. N. Kutz, “Data-driven discovery of partial differential equations,” Sci. Adv. 3, e1602614 (2017).
[Crossref]

T. W. Hughes, I. A. Williamson, M. Minkov, and S. Fan, “Wave physics as an analog recurrent neural network,” Sci. Adv. 5, eaay6946 (2019).
[Crossref]

Sci. Rep. (2)

J. Chang, V. Sitzmann, X. Dun, W. Heidrich, and G. Wetzstein, “Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification,” Sci. Rep. 8, 12324 (2018).
[Crossref]

A. N. Tait, T. F. de Lima, E. Zhou, A. X. Wu, M. A. Nahmias, B. J. Shastri, and P. R. Prucnal, “Neuromorphic photonic networks using silicon photonic weight banks,” Sci. Rep. 7, 7430 (2017).
[Crossref]

Science (4)

X. Lin, Y. Rivenson, N. T. Yardimci, M. Veli, Y. Luo, M. Jarrahi, and A. Ozcan, “All-optical machine learning using diffractive deep neural networks,” Science 361, 1004–1008 (2018).
[Crossref]

N. M. Estakhri, B. Edwards, and N. Engheta, “Inverse-designed metastructures that solve equations,” Science 363, 1333–1338 (2019).
[Crossref]

P. L. McMahon, A. Marandi, Y. Haribara, R. Hamerly, C. Langrock, S. Tamate, T. Inagaki, H. Takesue, S. Utsunomiya, K. Aihara, R. L. Byer, M. M. Fejer, H. Mabuchi, and Y. Yamamoto, “A fully-programmable 100-spin coherent Ising machine with all-to-all connections,” Science 354, 614–617 (2016).
[Crossref]

T. Inagaki, Y. Haribara, K. Igarashi, T. Sonobe, S. Tamate, T. Honjo, A. Marandi, P. L. McMahon, T. Umeki, K. Enbutsu, O. Tadanaga, H. Takenouchi, K. Aihara, K.-I. Kawarabayashi, K. Inoue, S. Utsunomiya, and H. Takesue, “A coherent Ising machine for 2000-node optimization problems,” Science 354, 603–606 (2016).
[Crossref]

Other (9)

U. Tegin, M. Yildirim, I. Oguz, C. Moser, and D. Psaltis, “Scalable optical learning operator,” arXiv:2012.12404 (2020).

S. An, W. Liu, and S. Venkatesh, “Face recognition using kernel ridge regression,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2007), pp. 1–7.

A. Saade, F. Caltagirone, I. Carron, L. Daudet, A. Dremeau, S. Gigan, and F. Krzakala, “Random projections through multiple optical scattering: approximating kernels at the speed of light,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2016), pp. 6215–6219.

J. Shawe-Taylor and N. Cristianini, Kernel Methods for Pattern Analysis (Cambridge University, 2004).

J. W. Goodman, Introduction to Fourier Optics (Roberts & Company, 2005).

C. Denz, Optical Neural Networks (Springer, 1998).

S. Haykin, Neural Networks and Learning Machines (Pearson Prentice Hall, 2008).

E. Strubell, A. Ganesh, and A. McCallum, “Energy and policy considerations for deep learning in NLP,” arXiv:1906.02243 (2019).

W. F. Schmidt, M. A. Kraaijveld, and R. P. Duin, “Feed forward neural networks with random weights,” in International Conference on Pattern Recognition (IEEE Computer Society, 1992).

Data Availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Figures (4)

Fig. 1. Schematic architecture of the photonic extreme learning machine (PELM). (a) General ELM scheme with the input data set X, which is fed into a reservoir and gives out the hidden-layer output matrix H. The trainable readout weights β_i determine the network output Y = Y(H; β). (b) In the optical case, the input (a mushroom in the example) is encoded on the optical field, and hidden neurons are replaced by modes that interact during propagation. Training of the photonic classifier is enabled by M detection channels.

Fig. 2. Learning ability of the PELM architecture. The optical computing scheme is evaluated on the MNIST data set by varying the encoding properties and feature space. (a) Input digit and 2D phase mask showing its encoding by noise embedding: the input signal overlaps with a disordered matrix. PELM training and testing error for noise embedding when varying (b) the noise amplitude and (c) its correlation length, for M = 1600. (d) Input vector encoded over a carrier signal (Fourier embedding). (e) Classification error versus the number of frequencies of the embedding signal. (f) Minimum testing error with increasing number of features M. The indicated accuracies are the best ones reported with random ELM (rELM) [47], random projections (RP) [41], and kernel ELM (kELM) [47] on the same task.

Fig. 3. Experimental implementation. (a) Sketch of the optical setup. A phase-only spatial light modulator (SLM) encodes data on the wavefront of a 532 nm continuous-wave laser. The far field in the lens focal plane is imaged on a camera. Insets show a false-color embedding matrix and training data encoded as phase blocks, respectively. (b) Detected spatial intensity distribution for a given input sample. White-colored areas reveal camera saturation in high-intensity regions, which provides the network nonlinear function. Pink boxes show some of the M spatial modes (blocks of pixels) that are used as readout channels. (c) Example of input data in a feature space of dimension M = 256, as projected by the optical device. Each bar represents an output channel, and training consists in finding the vector that properly tunes all the bar heights.

Fig. 4. Experimental performance of the PELM on classification and regression tasks. Confusion matrices on the MNIST data set for a free-space PELM, which makes use of (a) Fourier and (b) random embedding (92.18% and 92.06% accuracy, M = 4096). (c) Performance on the mushroom binary classification problem (95.4%). (d) Optical predictions and true values for the abalone data set. (e) Classification and (f) regression error as a function of the number of features. Rapid convergence to optimal performance is found. Experimental results are compared with numerical simulations and training errors.
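The pipeline sketched in Figs. 1–3 (phase encoding on a wavefront, free-space propagation to the far field, saturating detection, and block-averaged readout channels) can be illustrated numerically. The NumPy sketch below is a loose, hypothetical model under the standard Fourier-optics approximation: the encoding scheme, normalization, and block-averaging choices are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def pelm_features(x, embedding, M=256, sat=1.0):
    """Toy PELM feature map: phase-encode input x on top of an embedding
    mask, propagate to the far field (2D FFT in the Fraunhofer regime),
    detect with a saturating camera, and average pixel blocks into M
    readout channels. All modeling choices here are illustrative."""
    side = embedding.shape[0]
    # Phase encoding: input sample superimposed on the embedding mask.
    phase = embedding + np.resize(x, (side, side))
    field = np.exp(1j * phase)
    # Far field in the lens focal plane (up to scale factors).
    far = np.fft.fftshift(np.fft.fft2(field)) / side
    intensity = np.abs(far) ** 2
    # Camera saturation supplies the readout nonlinearity.
    intensity = np.clip(intensity, 0.0, sat)
    # Average blocks of pixels into an m x m grid of M = m^2 channels.
    m = int(np.sqrt(M))
    crop = (side // m) * m
    blocks = intensity[:crop, :crop]
    blocks = blocks.reshape(m, side // m, m, side // m).mean(axis=(1, 3))
    return blocks.ravel()
```

A feature vector for one sample would then be obtained as `pelm_features(x, mask, M=256)`, with the same fixed `mask` reused for every sample so that only the linear readout needs training.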

Equations (5)


\beta = (H^{T} H + c I)^{-1} H^{T} T,
H_{ji} = g_i(X_j) = G[\, M \cdot p(X_j) \cdot q(W) \,]_i,
Y = K (K + c I)^{-1} T,
Y = H \beta = \sum_{j=1}^{M} H_j(X) \, \beta_j,
Y = H (H^{T} H + c I)^{-1} H^{T} T.