Abstract

Reservoir computing is a recent bio-inspired computation paradigm. It exploits a dynamical system driven by a time-dependent input to carry out computation. For efficient information processing, only a few parameters of the reservoir need to be tuned, which makes it a promising framework for hardware implementation. Recently, electronic, opto-electronic and all-optical experimental reservoir computers have been reported. In those implementations, the nonlinear response of the reservoir is provided by active devices such as optoelectronic modulators or optical amplifiers. By contrast, we propose here the first reservoir computer based on a fully passive nonlinearity, namely the saturable absorption of a semiconductor mirror. Our experimental setup constitutes an important step towards the development of ultrafast low-consumption analog computers.

© 2014 Optical Society of America

1. Introduction

The term reservoir computing refers to a class of recurrent neural networks introduced independently by H. Jaeger [1, 2] and W. Maass [3] about a dozen years ago. A reservoir is a nonlinear dynamical system driven by the time-dependent input that one wants to process. The output of the reservoir is obtained through a linear combination of its many internal states. Contrary to other recurrent neural networks, a reservoir is only trained at its output: the only parameters optimized during training are the weights of this linear combination. For a reservoir to process information efficiently, only a few internal parameters of the dynamical system need to be tuned [1, 2, 4].

These characteristics make reservoir computing a unique paradigm for hardware implementation. Several experimental reservoir computers have recently been reported, including electronic [5], opto-electronic [6, 7] and all-optical [8, 9] versions, all based on nonlinear delayed-feedback structures. These experimental setups exhibit performance comparable to that of digital implementations of reservoir computing.

For decades, optics has been viewed as a promising framework for implementing computation. Optics naturally offers parallelism (frequency or space multiplexing) and speed. Moreover, since the advent of laser technology, the field of nonlinear optics has witnessed tremendous development, and a large number of optical nonlinearities are now known and mastered. However, there has been only very limited success in building an optical computer because, despite huge progress, optical nonlinearities remain weak and difficult to harness and convert into logic gates.

For these reasons, reservoir computing constitutes an interesting framework for optical computing: because it does not rely on logic gates, there is much more freedom in the architecture. Recently, we reported the first all-optical reservoir computer [8]. It was based on a delayed-feedback dynamical system and the nonlinearity was provided by the gain saturation of a semiconductor optical amplifier. This off-the-shelf component introduced noise in the reservoir, which degraded performance compared to our previous opto-electronic reservoir computer [6].

In ref. [9], another all-optical reservoir computer was reported, based on a semiconductor laser subject to optical feedback. Contrary to our all-optical implementation, the input of this dynamical system was synchronized with the delay time, and the intrinsic time scale of the laser dynamics provided the coupling between the internal states. Optical reservoir computers based on networks of SOAs or of micro-ring resonators have also been studied in simulation in refs. [10–12]. More recently, a passive photonic chip-based reservoir computer has been demonstrated [13]. The reservoir itself is linear and is operated in the coherent regime to enrich its dynamics; the states of the reservoir undergo a nonlinear transformation at the readout level before the output is computed.

Here we report the first all-optical reservoir computer based on a passive nonlinear element, namely a semiconductor saturable absorber mirror (SESAM) whose structure is described in refs. [14, 15]. We tested our reservoir on benchmark tasks: its performance is comparable to or better than that reported with our previous experimental reservoir computers [6, 8], and comparable to state-of-the-art results of digital implementations.

The interest of this work is threefold. First, the nonlinearity that we use exhibits a fundamentally different behavior from the ones used in previous works. Up to now, experimental reservoir computers have exploited nonlinear devices that mimic the hyperbolic tangent response of the nodes in standard digital implementations of reservoir computers (see e.g. [2, 4]). This contrasts with the SESAM, which exhibits a nonlinear behavior at low power and becomes linear at high power. As a consequence, the present work shows that there is more freedom than originally anticipated in the choice of nonlinear functions to be exploited in reservoir computers.

Second, our work constitutes a first important step towards an entirely passive reservoir computer, a very desirable goal as it would eliminate the need to feed energy into the reservoir, while simultaneously suppressing an important source of noise in the reservoir itself.

Third, saturable absorption in semiconductors can be tuned through impurity concentration [16], and recovery times as small as a few picoseconds can be easily reached [15]. Our study thus opens up the route towards the realization of ultrafast all-optical reservoir computers.

Note, however, that because of excessive losses in the setup we had to include an amplifier in our nonlinear delay loop, so our reservoir is not fully passive. Nevertheless, the amplifier is used far from its saturation regime and its role is thus limited to pure linear loss compensation, which allows us to present our result as a genuine demonstration of the possibility to implement a passive ultrafast all-optical reservoir.

In the next section we describe our experimental reservoir computer and all the devices used in our experiments. In section 3 we present our experimental results and compare our reservoir performances to state-of-the-art digital and experimental results. In section 4, we discuss the results and the interest of the passive nonlinearity for all-optical reservoir computing.

2. Hardware implementation

2.1. Reservoir computing

A reservoir computer consists of a recurrent dynamical system described by internal states x_i (typically called "nodes" or, for historical reasons, "neurons") driven by a time-dependent input sequence u(n). The core idea behind reservoir computing is that the perturbation brought to the dynamical system by the temporal features of u(n) can easily be picked up by a linear output layer. More specifically, the system evolves in discrete time according to Eq. (1):

$$x_i(n+1) = F_{NL}\left(\alpha \sum_{j=1}^{N} A_{ij}\, x_j(n) + \beta\, m_i\, u(n+1)\right) \qquad (1)$$
where N is the number of internal variables, F_NL is a nonlinear function (the tanh function or other sigmoid functions are often used in digital implementations [2, 4]), α is the feedback gain, β is the input gain and A is the interconnection matrix. The node-dependent coefficients m_i, collectively called the "input mask", serve the purpose of breaking the symmetries in the evolution of the node states by providing a different version of the input u(n) to each node x_i.
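To make the update rule concrete, here is a minimal sketch of Eq. (1) in Python, assuming a tanh nonlinearity and a sparse random interconnection matrix; all parameter values (N, sparsity, gains) are illustrative choices, not the settings of the experiment.

```python
import numpy as np

def reservoir_update(x, u_next, A, m, alpha=0.9, beta=0.5, f_nl=np.tanh):
    """One step of Eq. (1): x(n+1) = F_NL(alpha * A x(n) + beta * m * u(n+1))."""
    return f_nl(alpha * A @ x + beta * m * u_next)

rng = np.random.default_rng(0)
N, n_steps = 50, 1000
A = rng.uniform(-1, 1, (N, N)) * (rng.random((N, N)) < 0.1)  # sparse random coupling
A /= np.max(np.abs(np.linalg.eigvals(A)))                    # rescale to unit spectral radius
m = rng.uniform(-1, 1, N)                                    # input mask
u = rng.uniform(0, 1, n_steps)                               # input sequence

X = np.zeros((n_steps, N))                                   # recorded internal states
for n in range(n_steps - 1):
    X[n + 1] = reservoir_update(X[n], u[n + 1], A, m)
```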

The time-dependent output y(n) from the reservoir is a linear combination of the internal states of the reservoir:

$$y(n) = \sum_{i=1}^{N} W_i\, x_i(n) \qquad (2)$$
A reservoir computer is typically used to process information contained in its inputs u(n). To do this, the reservoir is first trained by determining the output weight vector W_i (for i = 1, 2, ..., N) ensuring that the output y(n) of the reservoir is as close as possible to a target output ŷ(n). This is done by minimizing the normalized mean square error (NMSE), defined by
$$\mathrm{NMSE} = \frac{\left\langle \left(y(n) - \hat{y}(n)\right)^2 \right\rangle_n}{\left\langle \left(\hat{y}(n) - \langle \hat{y} \rangle_n\right)^2 \right\rangle_n} \qquad (3)$$
where ⟨·⟩_n denotes the average over all discrete time steps. Once the weights are computed, they are left unchanged and the performance of the reservoir can be evaluated by feeding it with other input sequences and comparing the actual output to the desired output. The training of a reservoir computer is thus only performed at the level of the output weights W_i, although the parameters α and β must often be adjusted to obtain optimal performance.
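Continuing the sketch above (it reuses the arrays X and u defined there), the readout of Eq. (2) can be trained, for instance, by ridge-regularized least squares; this is only an illustration of the offline training principle, not the exact routine used in the experiment.

```python
def train_readout(X, y_target, ridge=1e-6):
    """Output weights W minimizing the error of y(n) = sum_i W_i x_i(n) (ridge=0 gives plain least squares)."""
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ y_target)

def nmse(y, y_hat):
    """Normalized mean square error of Eq. (3)."""
    return np.mean((y - y_hat) ** 2) / np.mean((y_hat - np.mean(y_hat)) ** 2)

y_hat = np.roll(u, 1)            # example target: reproduce u(n-1)
W = train_readout(X, y_hat)
print(nmse(X @ W, y_hat))
```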

The interconnection matrix A can be sparse and randomly generated [4]. However, the spectral radius of αA (i.e. the largest absolute value of the eigenvalues of αA) must not be too large, in order to avoid stability issues. In the case where F_NL(x) = tanh(x), this value is generally kept below one.

It has been shown theoretically in refs. [17, 18] that a very simple interconnection matrix A, coupling each node x_i only to its direct neighbor x_{i−1}, is sufficient to obtain interesting computational abilities. Such a matrix is easily implemented in hardware with the time-multiplexed approach used in this work and presented in the next section, and is equally simple to build in simulation, as sketched below.
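For illustration, such a ring interconnection matrix takes one line to build (a sketch with 0-based indexing):

```python
import numpy as np

N = 50
# Ring topology of refs. [17, 18]: node i is driven only by its neighbor i-1 (cyclically).
A_ring = np.roll(np.eye(N), -1, axis=1)   # A_ring[i, (i-1) % N] = 1
```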

As discussed in the supplementary material of [6], many dynamical systems can be used as reservoir computers, and consequently there is a lot of freedom in designing good reservoirs, which is particularly interesting from the point of view of experimental implementations.

2.2. Principle of the experimental implementation

The hardware implementation that we use is based on a delay loop allowing time multiplexing of the reservoir's internal variables x_i. The desynchronization of the reservoir loop with respect to the input and output layers of the reservoir computer allows each internal variable x_i to be coupled to one of its neighbors x_{i−k}. This configuration has already been extensively described in refs. [6, 8]. It differs from the one used in refs. [5, 7, 9], which is fully synchronized but uses the bandwidth limitation of an element of the reservoir loop to connect neighboring nodes.

To implement an experimental reservoir computer, we need to switch from the discrete time n used in Eq. (1) to continuous time t. As in previous works [5–9], each input is held for a time T = Nθ through a sample-and-hold procedure. Each interval of length T is subdivided into N intervals of length θ (the node time). The i-th interval of duration θ is associated with the value m_i u(n), where m_i is the value of the input mask corresponding to the i-th node. These values m_i u(n) drive the reservoir one after another.

Figure 1 presents a sketch of how time-multiplexed dynamical systems can be used as reservoir computers. Coupling between internal variables is obtained by desynchronizing the round-trip time T′ of the loop with respect to the input time T. By choosing T′ = (N + k)θ, one obtains a coupling between x_i and x_{i−k}. This defines an interconnection matrix A similar to the one proposed in refs. [17, 18]. We choose k = 1 for all the experiments presented in this work. The corresponding evolution equations are given by

$$x_i(n+1) = \begin{cases} F_{NL}\left(\alpha\, x_{i-1}(n) + \beta\, m_i\, u(n+1)\right) & \text{if } 2 \le i \le N \\[4pt] F_{NL}\left(\alpha\, x_{N+i-1}(n-1) + \beta\, m_i\, u(n+1)\right) & \text{if } i = 1 \end{cases} \qquad (4)$$
which replaces Eq. (1) in the case of reservoirs based on delay dynamical systems with desynchronization; see [6].
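A minimal sketch of one input step of Eq. (4) with k = 1 is given below (illustrative code, not the acquisition software used in the experiment); iterating it over the input sequence, while keeping the two most recent state vectors, reproduces the dynamics once f_nl is replaced by a model of the SESAM response.

```python
import numpy as np

def delay_reservoir_step(x_prev, x_prev2, u_next, m, alpha, beta, f_nl):
    """One input step of Eq. (4) with k = 1: each node is driven by its direct neighbor.

    x_prev  -- state vector x(n)   (length N)
    x_prev2 -- state vector x(n-1) (length N), needed for the wrap-around node i = 1
    """
    N = len(m)
    x_new = np.empty(N)
    x_new[0] = f_nl(alpha * x_prev2[N - 1] + beta * m[0] * u_next)    # i = 1 couples across two inputs
    x_new[1:] = f_nl(alpha * x_prev[:N - 1] + beta * m[1:] * u_next)  # 2 <= i <= N
    return x_new
```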

 

Fig. 1 Principle of the delay dynamical system reservoir computer. The states xi(n) of the reservoir are multiplexed in time. Each input u(n) is held for a time T = Nθ and is divided into N time windows of length θ during which it is multiplied by the value m_i. All these inputs m_i u(n) are multiplied by the input gain β and fed into the delay loop where they are processed by an element applying a nonlinear transformation F_NL. Part of the signal is extracted for readout while the remaining signal is sent back into the dynamical system after multiplication by the feedback gain α. The period of the loop T′ is desynchronized with respect to the input time T through the relation T′ = (N + k)θ, which allows for coupling between neighboring states. In this work, we choose k = 1, which means that each internal state is coupled to its direct neighbor.


2.3. SESAM structure used as nonlinear element

In our experiment, the nonlinear function FNL of Eq. (1) is provided by the saturation of the absorption of a SESAM device. This nonlinearity is qualitatively different from the ones used previously in experimental reservoir computers, namely the sine nonlinearity of a Mach-Zehnder intensity modulator [6, 7] and the saturation of the gain of a semiconductor optical amplifier (SOA) [8]. In Fig. 2, we present a comparison between the nonlinear behavior of the SOA used in ref. [8] and the nonlinear behavior of the SESAM used in the present work. For low input power, the SOA reacts in a linear way and becomes nonlinear at high powers (gain saturation). The SESAM exhibits an opposite behavior: it is nonlinear for low values of its input power and gets linear at higher input power. As a result, the response of the SESAM exhibits a desaturating positive curvature instead of the stabilizing negative curvature traditionally used in reservoir computers.

 

Fig. 2 Comparison of the experimentally measured output-input nonlinear relations of the SOA used in our previous work [8] pumped by a 200 mA current (blue, top and left axes) and of the SESAM used in the current work (red, bottom and right axes). The SOA exhibits a nonlinear behavior at high input power and is mainly linear at low input powers. The situation is reversed for the SESAM. Dashed lines show where a linear approximation holds, i.e. for high input and low input powers for the SESAM and the SOA, respectively.


Our InP-based SESAM consists of a layered structure initially developed for extinction ratio enhancement of optical pulses [14] and all-optical 2R regeneration [15]. Its structure is depicted in Fig. 3. Compared to previous all-optical reservoirs [8, 9], this nonlinear element is passive, i.e. it is not fed by any external energy source. This represents a first step towards the design of all optical passive reservoir computers.

 

Fig. 3 The SESAM structure consists of 4 different layers deposited on a copper substrate. A saturable absorber layer (InGaAs) is sandwiched between two InP phase layers. More information about this type of structure can be found in ref. [15].


In Fig. 4 we present the experimentally measured evolution of the reflectivity R = Pout/Pin of the SESAM as a function of its input power and we show the range of powers for which good results have been obtained on benchmark tasks. The SESAM used in our experiment presents linear losses of approximately 30% and an approximate recovery time of 2.5 nanoseconds.
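For numerical experiments, the measured reflectivity curve can be approximated by a generic two-level saturable-absorption model. The sketch below is such a toy model with guessed parameters (saturation power, modulation depth), not the fit actually used in section 2.7.

```python
def sesam_reflectivity(p_in_mw, r_sat=0.7, delta_r=0.35, p_sat_mw=0.3):
    """Toy saturable-absorber reflectivity: nonlinear at low power, flat (~r_sat) once saturated.

    R(P) = r_sat - delta_r / (1 + P / p_sat) is a generic two-level approximation,
    NOT the measured curve of Fig. 4; all parameter values are assumptions.
    """
    return r_sat - delta_r / (1.0 + p_in_mw / p_sat_mw)

def sesam_response(p_in_mw):
    """Output power after reflection; can play the role of F_NL in simulations."""
    return sesam_reflectivity(p_in_mw) * p_in_mw
```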

 

Fig. 4 Reflectivity of the SESAM structure as a function of its input power. The reflectivity is measured relative to a gold mirror in order to remove the intrinsic losses of the experimental setup. For input powers below −13 dBm (50 μW), the reflectivity is almost constant and the response of the SESAM is essentially linear. For input powers between −13 dBm (50 μW) and 10 dBm (10 mW), the reflectivity varies strongly with power; the absorber is therefore nonlinear in this region. For higher input powers, the reflectivity is stable around a value of 0.7, meaning that the absorber again acts as a linear medium. Note the logarithmic axis used for the input power. Arrows show the average powers focused on the absorber for which good performance was obtained: (a) −9.20 dBm (120 μW) for the nonlinear memory capacities and channel equalization, (b) −2.2 dBm (600 μW) for the radar task and (c) 0.80 dBm (1.2 mW) for the linear memory capacity and the radar task.


2.4. All-optical implementation using a semiconductor saturable absorber mirror (SESAM)

Our experimental setup, depicted in Fig. 5, is closely inspired by the one described in [8]. The internal variables x_i(n) of the dynamical system are given by the optical power circulating inside the fiber. The reservoir itself consists of an erbium-doped fiber amplifier (Pritel PMFA-15), a semiconductor saturable absorber mirror (SESAM), a polarization controller and a fiber spool of 1.6 km.

 

Fig. 5 Schematic of the experimental setup of the all-optical reservoir. Optical components are depicted in red and electronic components in green. The all-optical loop is driven by the input optical signal. A superluminescent light-emitting diode (SLED) generates a 40 nm-wide spectrum centered around 1560 nm. An electronic signal corresponding to the time-dependent input multiplied by the input mask is generated by the arbitrary waveform generator (AWG). This electronic signal drives an integrated lithium niobate Mach-Zehnder intensity modulator (MZ), which produces a time-dependent input optical signal whose intensity is adjusted with a variable attenuator. The input optical signal is injected into the cavity by means of a 90/10 fiber coupler. The cavity itself consists of an erbium-doped fiber amplifier, a circulator, a SESAM and a fiber spool used as a delay line. An 80/20 fiber coupler is used to send 20% of the cavity intensity to the readout photodiode and then to a digitizer. Two polarization controllers are used to match the polarizations of the input and feedback signals with the polarization state of the amplifier. The amplifier is used in its linear regime (no saturation) to compensate for the losses in the cavity.


Optical power is inserted into the reservoir by a superluminescent light-emitting diode (SLED, Denselight DL-CS5254A) of 40 nm bandwidth centered at a wavelength of 1560 nm. The use of an incoherent source ensures that there are no interference effects in the system [8]. An arbitrary waveform generator (AWG, National Instruments model PXI-5422) drives a Mach-Zehnder modulator (MZM, Photline model MXAN-LN-10) that modulates the power inserted into the reservoir, thereby creating the time-varying input to the reservoir. The input gain β is tuned by attenuating the maximum power injected into the reservoir using a controllable optical attenuator (Agilent, model 81571A).

Inside the feedback loop, we use an erbium-doped fiber amplifier (EDFA, Pritel PMFA-15) to compensate for the losses. Modifying the pump current of the EDFA allows the tuning of the feedback attenuation. Input and feedback polarization controllers allow the polarization of the input and feedback signals to be matched to the polarization of the EDFA. The amplifier is used in its linear regime, i.e. where there is no saturation of its gain. Consequently, the nonlinear response of the dynamical system is caused only by the SESAM. The round-trip time T′ in the loop is 8.0073 μs. For a reservoir of N = 50 nodes (the value used in most of the experiments below), this corresponds to a node time θ = T′/(N + 1) of 157 ns and an input time T = Nθ of 7.85 μs.
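The timing relations follow directly from T′ = (N + k)θ and T = Nθ:

```python
# Timing used in the experiment: round-trip time T' = (N + k) * theta with k = 1.
T_prime = 8.0073e-6          # loop round-trip time [s]
N, k = 50, 1
theta = T_prime / (N + k)    # node time, ~157 ns
T = N * theta                # input (hold) time, ~7.85 us
print(theta, T)              # -> ~1.57e-07  ~7.85e-06
```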

Light is focused on the SESAM using a lensed fiber and the reflected light is reinjected into the fiber using a circulator. A fraction of 20% of the signal is extracted from the loop by means of a coupler in order to record the evolution of the internal states of the reservoir using a photodiode (TTI TIA-525, 125 MHz bandwidth) connected to a digitizer (National Instruments model PXI-5124), which sends its output to a desktop computer for further post-processing.

2.5. Pre-processing: injecting information in the dynamical system

The information driving the reservoir is inserted as optical power. Consequently, the masked input m_i u(n) must be converted into a positive value. To do so, the masked input sequence is first normalized to lie between −1 and 1, and then converted into a voltage driving the Mach-Zehnder modulator such that the output power of the modulator lies between 0 and a maximum power P_0.

The nonlinear sine response of the MZ intensity modulator is pre-compensated by applying a voltage V = (V_π/π) sin⁻¹(m_i u(n)). The applied voltage consequently spans the interval [−V_π/2, V_π/2], thereby completely exploiting the range of the modulator without any nonlinearity at the level of the input. Unless otherwise stated, the input mask elements m_i are uniformly distributed in the range [−1, +1].
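A sketch of this pre-compensation (the bias and scaling details of the real modulator driver are omitted; the idealized transmission function is an assumption):

```python
import numpy as np

def mz_drive_voltage(masked_input, v_pi):
    """Pre-compensate the sine response of the Mach-Zehnder modulator.

    masked_input = m_i * u(n), normalized to [-1, 1]; the returned voltage spans
    [-V_pi/2, +V_pi/2] so that the transmitted power is linear in masked_input.
    """
    return (v_pi / np.pi) * np.arcsin(masked_input)

def mz_transmission(v, v_pi, p0=1.0):
    """Idealized modulator output power for drive voltage v: p0/2 * (1 + sin(pi*v/V_pi)), in [0, p0]."""
    return p0 * 0.5 * (1.0 + np.sin(np.pi * v / v_pi))
```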

2.6. Post-processing: offline training and output computation

Information is processed in the analog domain inside the reservoir. However, the training of the output weights and the computation of the reservoir output sequence y(n) are still performed offline on a desktop computer.

The internal states are obtained by normalizing the recorded intensity so that it lies between −1 and 1. As in our previous work, each internal state is measured by taking the temporal average over a window of length θ/2 centered on the middle of the corresponding window of length θ.

Each input sequence is divided into three subsequences. A first "warm-up" sequence is used to eliminate memory effects from the steady state in which the reservoir was before being driven [8]. Then a training input sequence is fed into the reservoir; the reservoir states are recorded as explained above and the output weights W_i are determined using a least mean square algorithm that minimizes the NMSE defined in Eq. (3). Once the training is performed, the output weights are kept constant and the reservoir performance can be evaluated: a test input sequence is fed into the reservoir and drives the dynamical system, whose internal states are recorded. The output is computed using the weights determined during the training phase. Finally, the error between the actual output and the desired output is computed to estimate the reservoir performance.
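The state-extraction step can be sketched as follows, assuming the digitized trace is already aligned with the node slots; the variable names and the sampling layout are illustrative, not the actual acquisition code.

```python
import numpy as np

def extract_states(trace, n_inputs, N, samples_per_theta):
    """Average a window of length theta/2 centered on each node slot of length theta,
    then normalize to [-1, 1]. Assumes the trace starts at the first node slot and
    contains N slots per input."""
    states = np.empty((n_inputs, N))
    q = samples_per_theta // 4                       # half of the theta/2 window
    for n in range(n_inputs):
        for i in range(N):
            mid = (n * N + i) * samples_per_theta + samples_per_theta // 2
            states[n, i] = trace[mid - q:mid + q].mean()
    lo, hi = states.min(), states.max()
    return 2.0 * (states - lo) / (hi - lo) - 1.0
```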

2.7. Numerical simulations

A discrete-time numerical model of our reservoir was first used to validate the approach of using the nonlinearity of a SESAM in a reservoir computer. We numerically integrated the recurrence of Eq. (4) with the nonlinear function F_NL replaced by a fit to the experimental response of the SESAM. This numerical model takes into account neither the noise introduced by the SLED and the EDFA nor the bandpass effects of the electronics. We do not quote the results of these simulations in the rest of the paper except where relevant.

3. Results on benchmark tasks

We tested our experimental reservoir computer on several benchmark tasks usually employed to evaluate reservoir performances and compared our results to those of previous experimental and digital implementations.

3.1. Memory capacities of the reservoir

The memory capacities evaluate the ability of a dynamical system to compute functions of its past inputs. For this task, the inputs u(n) are independent and identically distributed random variables in the interval [0,1]. The capacity of the reservoir to reconstruct the function y[u] of its past input u is defined as C[y] = 1 − NMSE[y].

In the case of the linear memory capacity, the function to reconstruct is simply u(n−k), the input shifted k time steps into the past [19]. The linear memory capacity is obtained by summing over all values of the delay k. Hence, the linear memory capacity simply quantifies how well the dynamical system remembers its past inputs.

More recently, nonlinear memory capacities were introduced [20]. In this case, the function to reconstruct is a nonlinear transformation of the previous inputs. Here we only consider second-order polynomials of the previous inputs. As in our previous work [8], we define the quadratic memory capacity as the ability to compute the second-order Legendre polynomial of the input k time steps in the past: y_k[u] = 3u²(n−k) − 1. The quadratic memory capacity is obtained by summing over all values of the delay k. Legendre polynomials constitute an orthogonal basis with respect to the L2 inner product; consequently there is no overlap between linear and nonlinear memory capacities. The cross memory capacity is obtained by considering the product of inputs at two different time steps in the past, y_{k,k′}[u] = u(n−k) u(n−k′). The summation over all pairs (k, k′) with k < k′ gives the cross memory capacity. The quadratic and cross memory capacities hence quantify how well the dynamical system can construct second-order polynomials of its past inputs.
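The target functions for these capacities are simple to generate; the sketch below builds them for a given delay (or pair of delays), with the caveat that the first max(k, k′) samples must be discarded before computing the NMSE, since np.roll wraps around.

```python
import numpy as np

def capacity_targets(u, k, k_prime=None):
    """Targets for the memory-capacity benchmarks.

    linear:    u(n - k)
    quadratic: 3 u(n - k)**2 - 1   (second-order Legendre polynomial)
    cross:     u(n - k) * u(n - k'), with k < k'
    """
    u_k = np.roll(u, k)
    linear = u_k
    quadratic = 3.0 * u_k ** 2 - 1.0
    cross = None if k_prime is None else u_k * np.roll(u, k_prime)
    return linear, quadratic, cross

# Each capacity is C[y] = 1 - NMSE[y]; summing over k (and over all pairs k < k'
# for the cross terms) gives the linear, quadratic and cross memory capacities.
```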

As shown in ref. [20], the total memory capacity (i.e. the sum of all the memory capacities) is bounded by the number of internal variables in the system N. Under certain conditions and by adding memory capacities of higher orders, the inequality can be saturated and the total memory capacity is then equal to the number of internal variables N.

In Table 1, we present the best memory capacities obtained with our reservoir of N = 50 internal variables. We compare our results with the memory capacities of our previous optoelectronic and SOA-based all-optical reservoir computers. Our results were obtained for an EDFA pump current of 110 mA, which corresponds to a feedback gain α of approximately 0.85 (this value takes into account all the losses in the delay loop). The optimum input gain β is determined independently for each of the computed memory capacities. The best results for the linear memory were obtained for an optical power of approximately 1.2 mW sent onto the absorber. For the cross memory and quadratic memory capacities, the best results correspond to around 150 μW of power focused on the SESAM.


Table 1. Comparison of the linear, quadratic, cross and total memory capacities of our optoelectronic reservoir and our two all-optical reservoirs, using N = 50 internal variables. Our new reservoir computer based on saturable absorption shows increased linear, cross and total memory capacities with respect to our first all-optical reservoir based on a SOA. For each of the different memory capacities, the optimum input gain β is determined (while α is kept fixed at 0.85, see text). The total memory capacity corresponds to the sum of the linear, quadratic and cross memory capacities for a fixed value of the parameters α, β. Only the best values are reported in the table.

Our reservoir based on saturable absorption exhibits a cross memory capacity twice as large as that of our previous all-optical implementation. The linear memory capacity is also improved, even with respect to our optoelectronic reservoir computer of ref. [6]. Our total memory capacity is also improved with respect to our previous all-optical implementation. In our previous work, the degradation of the memory capacities of the SOA-based reservoir computer was attributed to the ASE noise generated by the amplifier and to polarization instability in the fiber loop. These effects are still present in our new reservoir based on a SESAM. However, the noise figure of the EDFA is lower than that of the SOA. Moreover, the absorber removes part of the noise induced by the amplifier, especially for low input signals, i.e., in the exploited nonlinear range of the SESAM.

3.2. Channel equalization task

This task was introduced in the context of reservoir computing by H. Jaeger [2]. We consider the transmission of a sequence d(n) of symbols randomly drawn from the set {−3, −1, 1, 3} through a communication channel characterized by intersymbol interference and nonlinear distortion. The noisy, distorted output u(n) of the communication channel is defined by the following equations:

$$q(n) = 0.08\,d(n+2) - 0.12\,d(n+1) + d(n) + 0.18\,d(n-1) - 0.1\,d(n-2) + 0.091\,d(n-3) - 0.05\,d(n-4) + 0.04\,d(n-5) + 0.03\,d(n-6) + 0.01\,d(n-7)$$
$$u(n) = q(n) + 0.036\,q^2(n) - 0.011\,q^3(n) + \nu(n) \qquad (5)$$
where ν(n) is white noise added so that the signal-to-noise ratio (SNR) varies between 12 and 32 dB. The output u(n) of the channel is used as input to the reservoir. The desired output from the reservoir is the input sequence of the channel, d(n).
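A sketch of the channel model of Eq. (5) is given below; how the noise ν(n) is scaled to reach a given SNR, and how the edge symbols are handled, are assumptions here, as the text does not specify them.

```python
import numpy as np

def channel(d, snr_db, rng=None):
    """Noisy nonlinear channel of Eq. (5): symbols d(n) in {-3,-1,1,3} -> received u(n)."""
    if rng is None:
        rng = np.random.default_rng(0)
    h = np.array([0.08, -0.12, 1.0, 0.18, -0.1, 0.091, -0.05, 0.04, 0.03, 0.01])
    lags = np.arange(2, -8, -1)                 # d(n+2), d(n+1), ..., d(n-7)
    n = np.arange(len(d))
    q = np.zeros(len(d))
    for coeff, lag in zip(h, lags):
        idx = np.clip(n + lag, 0, len(d) - 1)   # edge symbols are simply clipped
        q += coeff * d[idx]
    u = q + 0.036 * q ** 2 - 0.011 * q ** 3
    noise_power = np.var(u) / 10 ** (snr_db / 10)   # assumed SNR convention
    return u + rng.normal(0.0, np.sqrt(noise_power), len(d))

d = np.random.default_rng(1).choice([-3, -1, 1, 3], size=6000)
u = channel(d, snr_db=28)   # reservoir input; the target output is d(n)
```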

For this task, the performance of the reservoir is evaluated using the symbol error rate (SER), defined as the fraction of misclassified symbols. In Fig. 6, we compare the results obtained by our SESAM-based implementation with the previous results obtained with our optoelectronic and SOA-based reservoir computers [6, 8]. The reservoir performance is evaluated over five successive experiments. Averaged results with standard deviations are presented in Fig. 6.

 

Fig. 6 Results for the channel equalization task. The signal-to-noise ratio (SNR) varies between 12 and 32 dB in steps of 4 dB. The average symbol error rate (SER, the fraction of misclassified symbols) obtained over 5 experiments with test sequences of 6000 inputs is presented with statistical error bars and compared with the results of our two previous opto-electronic and all-optical reservoirs of refs. [6] and [8] with the same number of internal states (N = 50).


For each value of the SNR, the result is the best mean SER obtained over all the values of the input gain β. Our best results were obtained when the pump current of the EDFA was set to 82 mA. This corresponds to a feedback gain of 0.7 and to an approximate power of 150 μW focused on the saturable absorber.

Our all-optical reservoir computer based on the saturation of absorption presents performances that are comparable to the ones obtained previously with our SOA-based reservoir computer. Results are even better for the highest values of the SNR. For SNR of 28 and 32 dB, our optical reservoir presents results similar to those of our optoelectronic reservoir [6] and performs as well as digital reservoirs. For instance, at 28 dB, the average SER obtained in ref. [2] is 10−4, which is equivalent to our experimental result.

3.3. Radar task

For this task, we consider the signal backscattered from the ocean surface and collected by the McMaster University IPIX radar [21]. The reservoir is given this signal as input and the goal is to predict future values of the signal for prediction delays varying between 1 and 10. Two different samples are considered, the "low sea state" (average wave height of 0.8 meters; maximum wave height of 1.8 meters) and the "high sea state" (average wave height of 1.3 meters; maximum wave height of 2.9 meters). Performance on this task is evaluated using the NMSE.

For each prediction delay, we split the two thousand inputs of each sample into a training sequence and a test sequence of one thousand inputs each. Our best experimental NMSE results are presented in Fig. 7 and were obtained for a pump current of 100 mA; the feedback gain is consequently around 0.85. For each value of the prediction delay and for each of the two sea states, the optimal optical power focused on the absorber lies between 600 μW and 1.2 mW. Our results are comparable to the ones obtained with our previous hardware implementations of reservoir computers. They are also equivalent to the results of the digital implementation of ref. [18], where the authors reported, for the low sea state, an NMSE of 1.15 · 10−3 for one-step prediction and 3.01 · 10−2 for 5-step prediction.

 

Fig. 7 Results for the radar task. The NMSE is presented for prediction delays ranging from 1 to 10. Our results are slightly better than the ones obtained with our previous opto-electronic and all-optical reservoirs for high sea state. For low sea state, our results are slightly better for small prediction delays but get slightly degraded for larger delays. On average, our results are comparable to the ones obtained by our previous hardware realizations.


3.4. Speech recognition

This task was introduced as a benchmark for reservoir computers in ref. [22]. The goal is to classify audio sequences. These audio sequences come from the National Institute of Standards and Technology (NIST) TI-46 corpus [23] and consist of the digits 0 to 9 pronounced by five different female speakers; each digit is pronounced 10 times by each speaker. The audio is sampled at 12.5 kHz, preprocessed using the Lyon cochlear ear model [24] and then resampled at regular intervals.

The input signal u(n) is an 86-dimensional vector with up to 130 time samples per sequence. For this task we used a reservoir of N = 200 internal variables. The input mask is therefore a 200 × 86 matrix m_{ij} whose elements are randomly chosen with equal probabilities from the set {−0.1, 0.1}. The sequence driving the reservoir is thus the N-dimensional array ∑_{j=1}^{86} m_{ij} u_j(n).

For the output layer, we train 10 different output weight vectors defining ten different output sequences y_k(n), each of them associated with one digit (k = 0, 1, ..., 9). The desired output corresponding to a digit is 1 if this digit is being sent to the reservoir and −1 otherwise. For each sequence, each of the ten outputs is averaged over the sequence length; using a winner-take-all approach, the classifier with the highest average is set to 1 and all others are set to −1. The task is evaluated using the word error rate (WER), i.e. the fraction of misclassified digits.
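The winner-take-all decision can be sketched as follows (illustrative code; W stands for the ten trained output weight vectors stacked column-wise):

```python
import numpy as np

def classify_digit(X_seq, W):
    """Winner-take-all readout for the speech task.

    X_seq -- reservoir states for one spoken digit, shape (n_samples, N)
    W     -- ten output weight vectors, shape (N, 10), one per digit class
    """
    y = X_seq @ W                  # ten output sequences y_k(n)
    scores = y.mean(axis=0)        # average each classifier over the sequence
    return int(np.argmax(scores))  # winner-take-all: highest average wins
```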

For this task, since the dataset contains only 500 sequences, five subsets of 100 sequences each are randomly chosen; the reservoir is trained over 4 subsets and tested over the last one. This procedure is repeated 5 times rotating the subsets, so that each subset is used once for the test. The results reported here are the average WERs and corresponding standard deviations over the 5 test subsets.

The best test WER we obtained with our reservoir based on a SESAM nonlinearity is 2.6%, with a standard deviation of 1.52%. This is two orders of magnitude worse than the experimental results obtained with our previous opto-electronic reservoir of ref. [6], but comparable to the results obtained with the SOA-based reservoir [8], for which the performance degradation was attributed to the ASE noise arising in the SOA. However, in our case, the best simulation results gave a WER of 4.6% (standard deviation: 1.34%), which means that even without noise the performance is not very good. We attribute this lack of performance to the peculiar desaturating positive curvature of the SESAM nonlinear response (see section 2.3).

4. Conclusions

In this work we have presented an evolution of the first all-optical reservoir computer reported in ref. [8]. The challenge was to change the type of nonlinear response of the dynamical system in order to make the reservoir simpler and passive. Instead of the stabilizing saturated response of an optical amplifier we exploited the desaturating nonlinearity of a SESAM.

As the nonlinear behavior of the system strongly determines the dynamics of the reservoir, the outcome was unclear and required a thorough numerical and experimental investigation. We carried out this investigation successfully, showing that the concept of reservoir computing is surprisingly versatile. In particular, our study suggests that reservoir computing could lead to the realization of efficient low-noise, low-consumption ultrafast analog computers. However, to compensate for the large losses in our system we had to include an erbium-doped fiber amplifier in the reservoir, which is therefore not totally passive. Being used in its linear regime, however, this amplifier does not play a significant role in the dynamics of the reservoir, and we can state that our experiment constitutes a genuine proof of principle.

We have tested our experimental optical reservoir on benchmark tasks in order to evaluate its performance. This performance is slightly lower than that obtained with our first opto-electronic reservoir [6], but comparable to, and for some tasks better than, that obtained with our previous all-optical reservoir using gain saturation [8]. Hence, we have shown that passive nonlinear elements can be used to process information on relatively complex tasks.

In the near future we will undertake the experimental study of a fully passive low-noise reservoir based on saturable absorption in SESAM. This will constitute the basis of the development of an all-optical reservoir computer in which the parallelism and speed of optics will be exploited to reach record signal processing bandwidths.

Acknowledgments

The authors acknowledge financial support from the Fonds de la Recherche Scientifique FRS-FNRS under grant no 2.4611.10 and the Interuniversity Attraction Pole program of the Belgian Science Policy Office under grant IAP P7-35 photonics@be. Antoine Dejonckheere is a F.N.R.S. fellow and François Duport is a postdoctoral researcher of the F.N.R.S.

References and links

1. H. Jaeger, “The ’echo state’ approach to analysing and training recurrent neural networks - with an Erratum note,” GMD Report 148: German National Research Centre for Information Technology (2001).

2. H. Jaeger and H. Haas, “Harnessing nonlinearity: predicting chaotic systems and saving energy in wireless communication,” Science 304(5667), 78–80 (2004). [CrossRef]   [PubMed]  

3. W. Maass, T. Natschläger, and H. Markram, “Real-time computing without stable states: a new framework for neural computations based on perturbations,” Neural Comput. 14(11), 2531–2560 (2002). [CrossRef]   [PubMed]  

4. M. Lukoševičius and H. Jaeger, “Reservoir computing approaches to recurrent neural network training,” Comput. Sci. Rev. 3, 127–149 (2009). [CrossRef]  

5. L. Appeltant, M.C. Soriano, G. Van der Sande, J. Danckaert, S. Massar, J. Dambre, B. Schrauwen, C.R. Mirasso, and I. Fischer, “Information processing using a single dynamical node as complex system,” Nat. Commun. 2, 468 (2011). [CrossRef]   [PubMed]  

6. Y. Paquot, F. Duport, A. Smerieri, J. Dambre, B. Schrauwen, M. Haelterman, and S. Massar, “Optoelectronic reservoir computing,” Sci. Rep. 2, 287 (2012). [CrossRef]   [PubMed]  

7. L. Larger, M. C. Soriano, D. Brunner, L. Appeltant, J. M. Gutierrez, L. Pesquera, C. R. Mirasso, and I. Fischer, “Photonic information processing beyond Turing: an optoelectronic implementation of reservoir computing,” Opt. Express 20, 3241–3249 (2012). [CrossRef]   [PubMed]  

8. F. Duport, B. Schneider, A. Smerieri, M. Haelterman, and S. Massar, “All-optical reservoir computing,” Opt. Express 20, 22783–22795 (2012). [CrossRef]   [PubMed]  

9. D. Brunner, M. C. Soriano, C. R. Mirasso, and I. Fischer, “Parallel photonic information processing at gigabyte per second data rates using transient states,” Nat. Commun. 4, 1364 (2013). [CrossRef]  

10. K. Vandoorne, W. Dierckx, B. Schrauwen, D. Verstraeten, R. Baets, P. Bienstman, and J. Van Campenhout, “Toward optical signal processing using Photonic Reservoir Computing,” Opt. Express 16, 11182–11192 (2008). [CrossRef]   [PubMed]  

11. K. Vandoorne, J. Dambre, D. Verstraeten, B. Schrauwen, and P. Bienstman, “Parallel reservoir computing using optical amplifiers,” IEEE T. Neural Netw. 22, 1469–1481 (2011). [CrossRef]  

12. C. Mesaritakis, V. Papataxiarhis, and D. Syvridis, “Micro ring resonators as building blocks for an all-optical high-speed reservoir-computing bit-pattern-recognition system,” J. Opt. Soc. Am. B 30, 3048–3055 (2013). [CrossRef]  

13. K. Vandoorne, P. Mechet, T. Van Vaerenbergh, M. Fiers, G. Morthier, D. Verstraeten, B. Schrauwen, J. Dambre, and P. Bienstman, “Experimental demonstration of reservoir computing on a silicon photonics chip,” Nat. Commun. 5, 3541 (2014).

14. D. Massoubre, J.L. Oudar, J. Fatome, S. Pitois, G. Millot, J. Decobert, and J. Landreau, “All-optical extinction ratio enhancement of a 160 GHz pulse train by a saturable absorber vertical microcavity,” Opt. Lett. 31, 537–539 (2006). [CrossRef]   [PubMed]  

15. L. Bramerie, Q. Trung Le, M. Gay, A. O’Hare, S. Lobo, M. Joindot, J-C Simon, H-T. Nguyen, and J-L. Oudar, “All-optical 2R regeneration with a vertical microcavity-based saturable absorber,” IEEE J. Sel. Top. Quantum Electron. 18, 870–883 (2012). [CrossRef]  

16. D. Massoubre, J-L. Oudar, J. Dion, J-C Harmand, A. Shen, J. Landreau, and L. Decobert, “Scaling of the saturation energy in microcavity saturable absorber devices,” Appl. Phys. Lett. 88, 153513 (2006). [CrossRef]  

17. A. Rodan and P. Tiňo, “Minimum complexity echo state network,” IEEE T. Neural Netw. 22, 131–144 (2011). [CrossRef]  

18. A. Rodan and P. Tiňo, “Simple deterministically constructed recurrent neural networks,” in Intelligent Data Engineering and Automated Learning (IDEAL, 2010), pp. 267–274.

19. H. Jaeger, “Short-term memory in echo state networks,” GMD Report 152, German National Research Center for Information Technology (2002).

20. J. Dambre, D. Verstraeten, B. Schrauwen, and S. Massar, “Information processing capacity of dynamical systems,” Sci. Rep. 2, 514 (2012). [CrossRef]   [PubMed]  

21. http://soma.ece.mcmaster.ca/ipix/dartmouth/datasets.html

22. D. Verstraeten, B. Schrauwen, and D. Stroobandt, “Isolated word recognition using a liquid state machine,” in Proceedings of the 13th European Symposium on Artificial Neural Networks(ESANN), 435–440 (2005).

23. Texas Instruments-Developed 46-Word Speaker-Dependent Isolated Word Corpus (TI46), September 1991, NIST Speech Disc 7-1.1 (1 disc), (1991).

24. R. Lyon, “A computational model of filtering, detection, and compression in the cochlea,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing, 1282–1285 (1982). [CrossRef]  

[CrossRef]

Papataxiarhis, V.

Paquot, Y.

Y. Paquot, F. Duport, A. Smerieri, J. Dambre, B. Schrauwen, M. Haelterman, S. Massar, “Optoelectronic reservoir computing,” Sci. Rep. 2, 468 (2012).
[CrossRef] [PubMed]

Pesquera, L.

Pitois, S.

Rodan, A.

A. Rodan, P. Tiňo, “Minimum complexity echo state network,” IEEE T. Neural Netw. 22131–144 (2011).
[CrossRef]

A. Rodan, P. Tiňo, “Simple deterministically constructed recurrent neural networks,” in Intelligent Data Engineering and Automated Learning (IDEAL, 2010), pp. 267–274.

Schneider, B.

Schrauwen, B.

K. Vandoorne, P. Mechet, T. Van Vaerenbergh, M. Fiers, G. Morthier, D. Verstraeten, B. Schrauwen, J. Dambre, P. BIenstman, “Experimental demonstration of reservoir computing on a silicon photonics chip,” Nat. Commun. 4, 3541 (2014).

J. Dambre, D. Verstraeten, B. Schrauwen, S. Massar, “Information processing capacity of dynamical systems,” Sci. Rep. 2, 514 (2012).
[CrossRef] [PubMed]

Y. Paquot, F. Duport, A. Smerieri, J. Dambre, B. Schrauwen, M. Haelterman, S. Massar, “Optoelectronic reservoir computing,” Sci. Rep. 2, 468 (2012).
[CrossRef] [PubMed]

K. Vandoorne, J. Dambre, D. Verstraeten, B. Schrauwen, P. Bienstman, “Parallel reservoir computing using optical amplifiers,” IEEE T. Neural Netw. 221469–1481 (2011).
[CrossRef]

L. Appeltant, M.C. Soriano, G. Van der Sande, J. Danckaert, S. Massar, J. Dambre, B. Schrauwen, C.R. Mirasso, I. Fischer, “Information processing using a single dynamical node as complex system,” Nat. Commun. 2, 468 (2011).
[CrossRef] [PubMed]

K. Vandoorne, W. Dierckx, B. Schrauwen, D. Verstraeten, R. Baets, P. Bienstman, J. Van Campenhout, “Toward optical signal processing using Photonic Reservoir Computing,” Opt. Express 16, 11182–11192 (2008).
[CrossRef] [PubMed]

D. Verstraeten, B. Schrauwen, D. Stroobandt, “Isolated word recognition using a liquid state machine,” in Proceedings of the 13th European Symposium on Artificial Neural Networks(ESANN), 435–440 (2005).

Shen, A.

D. Massoubre, J-L. Oudar, J. Dion, J-C Harmand, A. Shen, J. Landreau, L. Decobert, “Scaling of the saturation energy in microcavity saturable absorber devices,” Appl. Phys. Lett. 88153513 (2006).
[CrossRef]

Simon, J-C

L. Bramerie, Q. Trung Le, M. Gay, A. O’Hare, S. Lobo, M. Joindot, J-C Simon, H-T. Nguyen, J-L. Oudar, “All-optical 2R regeneration with a vertical microcavity-based saturable absorber,” IEEE J. Sel. Top. Quantum Electron. 18870–883 (2012).
[CrossRef]

Smerieri, A.

Y. Paquot, F. Duport, A. Smerieri, J. Dambre, B. Schrauwen, M. Haelterman, S. Massar, “Optoelectronic reservoir computing,” Sci. Rep. 2, 468 (2012).
[CrossRef] [PubMed]

F. Duport, B. Schneider, A. Smerieri, M. Haelterman, S. Massar, “All-optical reservoir computing,” Opt. Express 20, 22783–22795 (2012).
[CrossRef] [PubMed]

Soriano, M. C.

Soriano, M.C.

L. Appeltant, M.C. Soriano, G. Van der Sande, J. Danckaert, S. Massar, J. Dambre, B. Schrauwen, C.R. Mirasso, I. Fischer, “Information processing using a single dynamical node as complex system,” Nat. Commun. 2, 468 (2011).
[CrossRef] [PubMed]

Stroobandt, D.

D. Verstraeten, B. Schrauwen, D. Stroobandt, “Isolated word recognition using a liquid state machine,” in Proceedings of the 13th European Symposium on Artificial Neural Networks(ESANN), 435–440 (2005).

Syvridis, D.

Tino, P.

A. Rodan, P. Tiňo, “Minimum complexity echo state network,” IEEE T. Neural Netw. 22131–144 (2011).
[CrossRef]

A. Rodan, P. Tiňo, “Simple deterministically constructed recurrent neural networks,” in Intelligent Data Engineering and Automated Learning (IDEAL, 2010), pp. 267–274.

Trung Le, Q.

L. Bramerie, Q. Trung Le, M. Gay, A. O’Hare, S. Lobo, M. Joindot, J-C Simon, H-T. Nguyen, J-L. Oudar, “All-optical 2R regeneration with a vertical microcavity-based saturable absorber,” IEEE J. Sel. Top. Quantum Electron. 18870–883 (2012).
[CrossRef]

Van Campenhout, J.

Van der Sande, G.

L. Appeltant, M.C. Soriano, G. Van der Sande, J. Danckaert, S. Massar, J. Dambre, B. Schrauwen, C.R. Mirasso, I. Fischer, “Information processing using a single dynamical node as complex system,” Nat. Commun. 2, 468 (2011).
[CrossRef] [PubMed]

Van Vaerenbergh, T.

K. Vandoorne, P. Mechet, T. Van Vaerenbergh, M. Fiers, G. Morthier, D. Verstraeten, B. Schrauwen, J. Dambre, P. BIenstman, “Experimental demonstration of reservoir computing on a silicon photonics chip,” Nat. Commun. 4, 3541 (2014).

Vandoorne, K.

K. Vandoorne, P. Mechet, T. Van Vaerenbergh, M. Fiers, G. Morthier, D. Verstraeten, B. Schrauwen, J. Dambre, P. BIenstman, “Experimental demonstration of reservoir computing on a silicon photonics chip,” Nat. Commun. 4, 3541 (2014).

K. Vandoorne, J. Dambre, D. Verstraeten, B. Schrauwen, P. Bienstman, “Parallel reservoir computing using optical amplifiers,” IEEE T. Neural Netw. 221469–1481 (2011).
[CrossRef]

K. Vandoorne, W. Dierckx, B. Schrauwen, D. Verstraeten, R. Baets, P. Bienstman, J. Van Campenhout, “Toward optical signal processing using Photonic Reservoir Computing,” Opt. Express 16, 11182–11192 (2008).
[CrossRef] [PubMed]

Verstraeten, D.

K. Vandoorne, P. Mechet, T. Van Vaerenbergh, M. Fiers, G. Morthier, D. Verstraeten, B. Schrauwen, J. Dambre, P. BIenstman, “Experimental demonstration of reservoir computing on a silicon photonics chip,” Nat. Commun. 4, 3541 (2014).

J. Dambre, D. Verstraeten, B. Schrauwen, S. Massar, “Information processing capacity of dynamical systems,” Sci. Rep. 2, 514 (2012).
[CrossRef] [PubMed]

K. Vandoorne, J. Dambre, D. Verstraeten, B. Schrauwen, P. Bienstman, “Parallel reservoir computing using optical amplifiers,” IEEE T. Neural Netw. 221469–1481 (2011).
[CrossRef]

K. Vandoorne, W. Dierckx, B. Schrauwen, D. Verstraeten, R. Baets, P. Bienstman, J. Van Campenhout, “Toward optical signal processing using Photonic Reservoir Computing,” Opt. Express 16, 11182–11192 (2008).
[CrossRef] [PubMed]

D. Verstraeten, B. Schrauwen, D. Stroobandt, “Isolated word recognition using a liquid state machine,” in Proceedings of the 13th European Symposium on Artificial Neural Networks(ESANN), 435–440 (2005).

Appl. Phys. Lett. (1)

D. Massoubre, J-L. Oudar, J. Dion, J-C Harmand, A. Shen, J. Landreau, L. Decobert, “Scaling of the saturation energy in microcavity saturable absorber devices,” Appl. Phys. Lett. 88153513 (2006).
[CrossRef]

Comput. Sci. Rev. (1)

M. Lukoševičius, H. Jaeger, “Reservoir computing approaches to recurrent neural network training,” Comput. Sci. Rev. 3127–149 (2009).
[CrossRef]

IEEE J. Sel. Top. Quantum Electron. (1)

L. Bramerie, Q. Trung Le, M. Gay, A. O’Hare, S. Lobo, M. Joindot, J-C Simon, H-T. Nguyen, J-L. Oudar, “All-optical 2R regeneration with a vertical microcavity-based saturable absorber,” IEEE J. Sel. Top. Quantum Electron. 18870–883 (2012).
[CrossRef]

IEEE T. Neural Netw. (2)

A. Rodan, P. Tiňo, “Minimum complexity echo state network,” IEEE T. Neural Netw. 22131–144 (2011).
[CrossRef]

K. Vandoorne, J. Dambre, D. Verstraeten, B. Schrauwen, P. Bienstman, “Parallel reservoir computing using optical amplifiers,” IEEE T. Neural Netw. 221469–1481 (2011).
[CrossRef]

J. Opt. Soc. Am. B (1)

Nat. Commun. (1)

L. Appeltant, M.C. Soriano, G. Van der Sande, J. Danckaert, S. Massar, J. Dambre, B. Schrauwen, C.R. Mirasso, I. Fischer, “Information processing using a single dynamical node as complex system,” Nat. Commun. 2, 468 (2011).
[CrossRef] [PubMed]

Nat. Commun. (2)

D. Brunner, M. C. Soriano, C. R. Mirasso, I. Fischer, “Parallel photonic information processing at gigabyte per second data rates using transient states,” Nat. Commun. 4, 1364 (2013).
[CrossRef]

K. Vandoorne, P. Mechet, T. Van Vaerenbergh, M. Fiers, G. Morthier, D. Verstraeten, B. Schrauwen, J. Dambre, P. BIenstman, “Experimental demonstration of reservoir computing on a silicon photonics chip,” Nat. Commun. 4, 3541 (2014).

Neural Comput. (1)

W. Maass, T. Natschläger, H. Markram, “Real-time computing without stable states: a new framework for neural computations based on perturbations,” Neural Comput. 14(11), 2531–2560 (2002).
[CrossRef] [PubMed]

Opt. Express (3)

Opt. Lett. (1)

Sci. Rep. (2)

Y. Paquot, F. Duport, A. Smerieri, J. Dambre, B. Schrauwen, M. Haelterman, S. Massar, “Optoelectronic reservoir computing,” Sci. Rep. 2, 468 (2012).
[CrossRef] [PubMed]

J. Dambre, D. Verstraeten, B. Schrauwen, S. Massar, “Information processing capacity of dynamical systems,” Sci. Rep. 2, 514 (2012).
[CrossRef] [PubMed]

Science (1)

H. Jaeger, H. Haas, “Harnessing nonlinearity: predicting chaotic systems and saving energy in wireless communication,” Science 304(5667), 78–80 (2004).
[CrossRef] [PubMed]

Other (7)

H. Jaeger, “The ’echo state’ approach to analysing and training recurrent neural networks - with an Erratum note,” GMD Report 148: German National Research Centre for Information Technology (2001).

A. Rodan, P. Tiňo, “Simple deterministically constructed recurrent neural networks,” in Intelligent Data Engineering and Automated Learning (IDEAL, 2010), pp. 267–274.

H. Jaeger, “Short-term memory in echo states networks,” GMD Report 152, German National Research Center for Information Technology (2002).

http://soma.ece.mcmaster.ca/ipix/dartmouth/datasets.html

D. Verstraeten, B. Schrauwen, D. Stroobandt, “Isolated word recognition using a liquid state machine,” in Proceedings of the 13th European Symposium on Artificial Neural Networks(ESANN), 435–440 (2005).

Texas Instruments-Developed 46-Word Speaker-Dependent Isolated Word Corpus (TI46), September 1991, NIST Speech Disc 7-1.1 (1 disc), (1991).

R. Lyon, “A computational model of filtering, detection, and compression in the cochlea,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing, 1282–1285 (1982).
[CrossRef]


Figures (7)

Fig. 1

Principle of the delay dynamical system reservoir computer. The states xi(n) of the reservoir are multiplexed in time. Each input u(n) is held for a time T = Nθ and is divided into N time windows of length θ; during the i-th window it is multiplied by the mask value mi. These masked inputs miu(n) are multiplied by the input gain β and fed into the delay loop, where they are processed by an element applying the nonlinear transformation FNL. Part of the signal is extracted for readout, while the remaining signal is sent back into the dynamical system after multiplication by the feedback gain α. The period of the loop T′ is desynchronized with respect to the input time T through the relation T′ = (N + k)θ, which provides coupling between neighboring states. In this work we choose k = 1, so that each internal state is coupled to its direct neighbor.
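
To make the time-multiplexing scheme concrete, here is a minimal Python sketch of such a delay reservoir with desynchronization k = 1: each virtual node is fed by the node one slot earlier in the loop, so xi(n+1) depends on xi−1(n), and x1 on xN two input periods back, as in the state-update relation listed in the Equations section. The binary ±1 input mask, the tanh stand-in for FNL and the default gains are illustrative assumptions, not the parameters of the experiment.

```python
import numpy as np

def run_delay_reservoir(u, N=50, alpha=0.85, beta=0.5, k=1, f_nl=np.tanh, seed=0):
    """Simulate a time-multiplexed delay reservoir with desynchronization k.

    u     : 1-D array of scalar inputs u(n)
    N     : number of virtual nodes (internal states)
    alpha : feedback gain, beta : input gain
    f_nl  : scalar nonlinearity standing in for the physical F_NL (assumed tanh)
    Returns X of shape (len(u), N), where X[n, i] is state i after input n.
    """
    rng = np.random.default_rng(seed)
    m = rng.choice([-1.0, 1.0], size=N)     # random binary input mask m_i (assumption)
    T = len(u)
    flat = np.zeros(T * N)                  # all states laid out on a single time axis
    for n in range(T):
        for i in range(N):
            s = n * N + i                   # time slot of node i during input n
            prev = s - (N + k)              # feedback arrives after the loop delay (N + k) * theta
            fb = flat[prev] if prev >= 0 else 0.0
            flat[s] = f_nl(alpha * fb + beta * m[i] * u[n])
    return flat.reshape(T, N)
```

Running this on any scalar input sequence returns a T × N matrix of states whose columns can then be combined linearly at the readout, as sketched in Fig. 1.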

Fig. 2

Comparison of the experimentally measured input–output nonlinear responses of the SOA used in our previous work [8], pumped with a 200 mA current (blue, top and left axes), and of the SESAM used in the present work (red, bottom and right axes). The SOA exhibits a nonlinear behavior at high input powers and is mainly linear at low input powers; the situation is reversed for the SESAM. Dashed lines show where a linear approximation holds, i.e. at high input powers for the SESAM and at low input powers for the SOA.

Fig. 3

The SESAM structure consists of 4 different layers deposited on a copper substrate. A saturable absorber layer (InGaAs) is sandwiched between two InP phase layers. More information about this type of structure can be found in ref. [15].

Fig. 4

Reflectivity of the SESAM structure as a function of its input power. The reflectivity is measured relative to a gold mirror to remove the intrinsic losses of the experimental setup. For input powers below −13 dBm (50 μW), the reflectivity is almost constant and the response of the SESAM is essentially linear. For input powers between −13 dBm (50 μW) and 10 dBm (10 mW), the reflectivity varies strongly with power, so the absorber is nonlinear in this region. For higher input powers, the reflectivity stabilizes around 0.7, meaning that the absorber again acts as a linear medium. Note the logarithmic axis used for the input power. Arrows indicate the average powers focused on the absorber for which good performances were obtained: (a) −9.20 dBm (120 μW) for the nonlinear memory capacities and channel equalization, (b) −2.2 dBm (600 μW) for the radar task and (c) 0.80 dBm (1.2 mW) for the linear memory capacity and the radar task.
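
The saturation behavior described above can be mimicked by a simple phenomenological model in which the absorbed fraction decreases as 1/(1 + P/Psat). The sketch below is only an illustration: the functional form, the low-power reflectivity and the saturation power are assumptions chosen to reproduce the qualitative shape of Fig. 4 (flat below roughly 50 μW, a steep rise with power, and a plateau near 0.7 at high power); it is not a fit to the measured device.

```python
import numpy as np

def sesam_reflectivity(p_in_w, r_lin=0.35, r_sat=0.70, p_sat_w=500e-6):
    """Toy saturable-absorber reflectivity versus incident power (in watts).

    r_lin   : low-power (unsaturated) reflectivity   -- assumed value
    r_sat   : high-power (bleached) reflectivity, ~0.7 as read off Fig. 4
    p_sat_w : saturation power                       -- assumed value
    The absorbed fraction decreases as 1/(1 + P/P_sat), a standard
    phenomenological description of a fast saturable absorber.
    """
    p = np.asarray(p_in_w, dtype=float)
    return r_sat - (r_sat - r_lin) / (1.0 + p / p_sat_w)

# Reflectivity at the three working points marked by arrows in Fig. 4
for dbm in (-9.2, -2.2, 0.8):
    p_w = 1e-3 * 10.0 ** (dbm / 10.0)    # dBm -> watts
    print(f"{dbm:+5.1f} dBm -> R = {float(sesam_reflectivity(p_w)):.2f}")
```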

Fig. 5

Schematic of the experimental setup of the all-optical reservoir. Optical components are depicted in red, electronic components in green. The all-optical loop is driven by the input optical signal. A superluminescent light-emitting diode (SLED) generates a 40 nm-wide spectrum centered around 1560 nm. An electronic signal corresponding to the time-dependent input multiplied by the input mask is generated by the arbitrary waveform generator (AWG). This electronic signal drives an integrated lithium niobate Mach-Zehnder intensity modulator (MZ), which produces a time-dependent input optical signal whose intensity is adjusted with a variable attenuator. The input optical signal is injected into the cavity by means of a 90/10 fiber coupler. The cavity itself consists of an erbium-doped fiber amplifier, a circulator, the SESAM and a fiber spool used as a delay line. An 80/20 fiber coupler sends 20% of the cavity intensity to the readout photodiode and then to a digitizer. Two polarization controllers are used to match the polarizations of the input and feedback signals to the polarization state of the amplifier. The amplifier is operated in the linear regime (no saturation) to compensate for the losses in the cavity.

Fig. 6

Results for the channel equalization task. The signal-to-noise ratio (SNR) varies between 12 and 32 dB in steps of 4 dB. The average symbol error rate (SER, the fraction of misclassified symbols), obtained over 5 experiments on test sequences of 6000 inputs, is presented with statistical error bars and compared with the results of our previous opto-electronic and all-optical reservoirs of refs. [6] and [8] with the same number of internal states (N = 50).

Fig. 7

Results for the radar task. The NMSE is presented for prediction delays ranging from 1 to 10. For the high sea state, our results are slightly better than those obtained with our previous opto-electronic and all-optical reservoirs. For the low sea state, our results are slightly better at small prediction delays but slightly worse at larger delays. On average, our results are comparable to those obtained with our previous hardware realizations.

Tables (1)


Table 1 Comparison of the linear, quadratic, cross and total memory capacities of our optoelectronic reservoir and our two all-optical reservoirs, using N = 50 internal variables. Our new reservoir computer based on saturable absorption shows increased linear, cross and total memory capacities with respect to our first all-optical reservoir based on a SOA. For each of the different memory capacities, the optimum input gain β is determined (while α is kept fixed at 0.85, see text). The total memory capacity corresponds to the sum of the linear, quadratic and cross memory capacities for a fixed value of the parameters α, β. Only the best values are reported in the table.
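
As an indication of how entries like those in Table 1 are typically computed, the sketch below estimates a linear memory capacity in the usual way (following Jaeger): for each delay d a linear readout is trained to reconstruct u(n − d) from the reservoir states, and the capacity is the sum over d of the squared correlation between reconstruction and target. The ridge parameter, the maximum delay and the use of the full sequence for training are illustrative simplifications rather than the exact procedure of the paper; quadratic and cross capacities follow the same recipe with u²(n − d) and u(n − d1)u(n − d2) as targets.

```python
import numpy as np

def linear_memory_capacity(X, u, max_delay=30, ridge=1e-6):
    """Sum over delays d of the squared correlation between the delayed
    input u(n-d) and the best linear readout of the reservoir states.

    X : (T, N) matrix of reservoir states
    u : length-T input sequence (a zero-mean random input works best)
    """
    T, N = X.shape
    capacity = 0.0
    for d in range(1, max_delay + 1):
        target = u[:T - d]                  # u(n - d)
        states = X[d:]                      # states x(n) aligned with the target
        # ridge-regression readout, i.e. a trained linear combination of the states
        w = np.linalg.solve(states.T @ states + ridge * np.eye(N),
                            states.T @ target)
        y = states @ w
        c = np.corrcoef(y, target)[0, 1]
        capacity += c ** 2
    return capacity
```

It can be applied directly to the state matrix returned by the delay-reservoir sketch given after Fig. 1.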

Equations (5)


$$x_i(n+1) = F_{NL}\!\left( \alpha \sum_{j=1}^{N} A_{ij}\, x_j(n) + \beta\, m_i\, u(n+1) \right)$$

$$y(n) = \sum_{i=1}^{N} W_i\, x_i(n)$$

$$\mathrm{NMSE} = \frac{\left\langle \left( y - \hat{y} \right)^2 \right\rangle_n}{\left\langle \left( y - \langle \hat{y} \rangle_n \right)^2 \right\rangle_n}$$

$$x_i(n+1) = \begin{cases} F_{NL}\!\left( \alpha\, x_{i-1}(n) + \beta\, m_i\, u(n+1) \right) & \text{if } 2 \le i \le N \\[4pt] F_{NL}\!\left( \alpha\, x_{N+i-1}(n-1) + \beta\, m_i\, u(n+1) \right) & \text{if } i = 1 \end{cases}$$

$$\begin{aligned} q(n) ={}& 0.08\, d(n+2) - 0.12\, d(n+1) + d(n) + 0.18\, d(n-1) - 0.1\, d(n-2) \\ &+ 0.091\, d(n-3) - 0.05\, d(n-4) + 0.04\, d(n-5) + 0.03\, d(n-6) + 0.01\, d(n-7) \\ u(n) ={}& q(n) + 0.036\, q^2(n) - 0.011\, q^3(n) + \nu(n) \end{aligned}$$
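
The last two relations define the nonlinear channel equalization benchmark used for Fig. 6: symbols d(n), conventionally drawn from {−3, −1, 1, 3}, pass through a linear channel with intersymbol interference, then through a memoryless nonlinearity, and noise ν(n) is added; the reservoir must recover d(n) from u(n). The sketch below generates this benchmark and scores an equalizer output by its symbol error rate. The symbol alphabet and the nearest-symbol decision rule are the standard choices for this task, and taking ν(n) Gaussian with a variance set by the quoted SNR is an assumption about how the SNR of Fig. 6 is defined.

```python
import numpy as np

def make_channel_equalization(n_symbols=6000, snr_db=20, seed=1):
    """Generate input u(n) and target symbols d(n) for the channel above."""
    rng = np.random.default_rng(seed)
    d = rng.choice([-3.0, -1.0, 1.0, 3.0], size=n_symbols)
    # zero-pad so that d(n+2) ... d(n-7) are defined at the sequence edges
    dpad = np.concatenate([np.zeros(7), d, np.zeros(2)])
    offsets = [2, 1, 0, -1, -2, -3, -4, -5, -6, -7]
    coeffs = [0.08, -0.12, 1.0, 0.18, -0.1, 0.091, -0.05, 0.04, 0.03, 0.01]
    q = np.zeros(n_symbols)
    for c, k in zip(coeffs, offsets):
        q += c * dpad[7 + k: 7 + k + n_symbols]      # contribution c * d(n + k)
    q_nl = q + 0.036 * q**2 - 0.011 * q**3
    # additive noise nu(n); SNR taken w.r.t. the noiseless signal power (assumption)
    noise_var = np.var(q_nl) / 10.0 ** (snr_db / 10.0)
    u = q_nl + rng.normal(0.0, np.sqrt(noise_var), size=n_symbols)
    return u, d

def symbol_error_rate(y, d):
    """Fraction of misclassified symbols after nearest-symbol decision on y(n)."""
    y = np.asarray(y, dtype=float)
    symbols = np.array([-3.0, -1.0, 1.0, 3.0])
    decided = symbols[np.argmin(np.abs(y[:, None] - symbols[None, :]), axis=1)]
    return float(np.mean(decided != d))
```

A reservoir-based equalizer would train a linear readout of the states on (u, d) pairs and then report symbol_error_rate on held-out data, which mirrors how the SER values of Fig. 6 are reported (averages over test sequences of 6000 inputs).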
