Abstract

In this paper, we present: (i) a novel analog silicon retina featuring auto-adaptive pixels that obey the Michaelis-Menten law, i.e. V = Vm · I^n/(I^n + σ^n); (ii) a method of characterizing silicon retinas, which makes it possible to accurately assess the pixels’ response to transient luminous changes in a ±3-decade range, as well as changes in the initial steady-state intensity in a 7-decade range. The novel pixel, called M2APix, which stands for Michaelis-Menten Auto-Adaptive Pixel, can auto-adapt in a 7-decade range and responds appropriately to step changes up to ±3 decades in size without causing any saturation of the Very Large Scale Integration (VLSI) transistors. Thanks to the intrinsic properties of the Michaelis-Menten equation, the pixel output always remains within a constant, limited voltage range. The range of the Analog to Digital Converter (ADC) was therefore adjusted so as to obtain a Least Significant Bit (LSB) voltage of 2.35mV and an effective resolution of about 9 bits. The results presented here show that the M2APix produced a quasi-linear contrast response once it had adapted to the average luminosity. Unlike what occurs in its biological counterparts, neither the sensitivity to changes in light nor the contrast response of the M2APix depends on the mean luminosity (i.e. the ambient lighting conditions). Lastly, a full comparison between the M2APix and the Delbrück auto-adaptive pixel is provided.

© 2015 Optical Society of America

1. Introduction

During the last few decades, research in the field of robotics has advanced considerably, but there still exist very few sighted robots which are able to behave appropriately, regardless of changes in the illuminance (see Fig. 5 in [1], for instance), such as those which occur outdoors. One of the reasons for this lack is that it is difficult to design pixels that combine high sensitivity with a wide luminosity range.


Fig. 5 (a) Simulated example of the adaptive pixel’s response (blue line) to a step change in the light intensity (light to dark gray stripe). The red and magenta lines stand for the photodiode current and the average current, respectively. (b) Theoretical S-shaped curve based on equation (4), giving the peak values of the pixel’s response Vi to step changes in the photodiode current from Iph0 to Iphi.


A large variety of Wide-Dynamic-Range (WDR) CMOS image sensors has been proposed over the years [2] in an attempt to widen the operating luminosity range to cover the visible spectrum while preserving the sensitivity to small changes at every average luminosity in the operating range. Although WDR image sensors capture images in a luminosity range of up to 7 decades, they provide different contrast sensitivities at different average luminosities. Vision applications such as event-based and bio-medical applications often require a high, constant sensitivity over a large luminosity range, in order to detect small temporal and/or spatial changes in the intensity under several lighting conditions [3–6]. One possible solution to this problem can be found by looking at the auto-adaptive response of human and animal photoreceptors.

In their physiological studies on fish, Naka and Rushton established for the first time that a vertebrate retina obeys a process of adaptation whereby each photoreceptor’s response is normalized by a representative value of the average local luminosity [7], in line with the Michaelis-Menten equation [8]. In the light of these and subsequent findings [7, 9–12], many efforts have been made to mimic the Outer Plexiform Layer (OPL) circuitry in silicon retinas [3, 13–19], or to implement the model in software for image processing [20–23]. In the latter case, the normalization giving the adaptation is implemented numerically after digitization, which gives rise to noise amplification, especially in dark scenes. Consequently, considerable interest has focused during the last twenty years on developing a silicon retina giving an OPL-like response over the entire visible spectrum, thanks also to the latest advances in retinal implant systems [24, 25].

The first example of an auto-adaptive silicon retina was presented in [13], where a logarithmic photoreceptor was used to handle transient changes in light in a 1-decade range, while light adaptation within a 1-decade range was obtained by “subtracting” a local spatio-temporal average. This circuit improved the contrast resolution of equally illuminated areas in comparison with standard logarithmic photoreceptor retinas, but there was no improvement in the low signal-to-noise ratio inherent to the logarithmic amplification. In [14], a modified version of this chip was compared with the OPL response described in Necturus [26] (see Section 2), showing light adaptation in a 5-decade range but sensitivity to luminous changes within a range of only 0.5 decades.

To overcome these limitations, a more biologically inspired solution was subsequently developed in [15], which consisted in locally modulating the synaptic strengths to control the sensitivity and including cone-to-cone gap junctions to attenuate the noise. Although the sensitivity was improved in this way from 0.5 to 2 decades, the adaptation to light was not satisfactory because of the circuit deviations resulting from the increasing inter-receptor coupling strength. A good compromise between contrast sensitivity and light adaptation was reached in [16], which gave light adaptation in a 6-decade range and sensitivity in a 1-decade range. However, the steady-state response of this pixel was found to increase with the light intensity (i.e., the photodiode current), and the transient response was not always monotonic when large lighting variations occurred (see Fig. 2.13 in [27]). The Delbrück adaptive pixel was also found in studies on optic flow measurements to be of little practical use in situations where changes in the light greater than 1 decade are liable to occur (see Figs. 7(b), 7(d), 7(j) and 7(l) in [28]). In [29], the Gamma correction method presented in [21] for local tone-mapping purposes was improved by digitally normalizing the pixel output directly in VLSI in line with the Michaelis-Menten law; only a few preliminary results on light adaptation and contrast sensitivity were presented in that study, however.


Fig. 7 (a) Pictures and (b) exploded view of the Lighting Box composed of a PCB with a red LED (λ ≈ 618nm) and an optical filter support. The direct control of the LED current makes the illuminance vary in a 3-decade range. Additional optical filters (neutral density filters) were used to drastically increase the mean luminosity range from 3 to 7 decades.


Other solutions not involving the use of auto-adaptive elements have been suggested. In [3], a subretinal stimulator was endowed with light adaptation by shifting the dynamic input range via an externally generated signal. The results presented showed that photoreceptor adaptation was achieved in a 7-decade range. In [19], a silicon retina was provided with a wide dynamic operating range and a high contrast sensitivity by applying a spatial and temporal filtering process based on resistive networks. The results obtained showed that the pixels could deal with 4 decades of luminosity changes, but their contrast responses depended on the external voltage controlling a reset transistor.

As far as we know, no artificial retinas have ever been endowed up to now with pixels with the following features at once: (i) auto-adaptation to the mean local luminosity over a range as wide as the visible spectrum; (ii) constant sensitivity to luminous changes, i.e. contrasts, at any average luminosity in the operating range; (iii) reliable response even in the presence of sudden large changes in the luminosity (i.e., without causing circuit saturations or deviations).

In this paper, we present: (i) a novel analog silicon retina featuring auto-adaptive pixels that obey the Michaelis-Menten law faithfully in a 7-decade range without causing any saturation of the VLSI transistors, while keeping an effective resolution of the integrated analog-to-digital conversion of about 9 bits; (ii) a method of characterizing silicon retinas, which can be used to accurately assess the pixels’ response to transient luminous changes within a ±3-decade range and to changes in the steady-state intensity within a 7-decade range. We have called this novel pixel the M2APix, which stands for Michaelis-Menten Auto-Adaptive Pixel.

The present artificial retina consists of a 2 × 2mm CMOS circuit comprising four lines of six auto-adaptive pixels, and a digital interface giving a fast serial read-out of up to 1MHz, connecting the retina directly to an external microprocessor or microcontroller. The adaptation time constant of the M2APix can be changed by means of an external capacitor, providing additional flexibility to meet the application’s bandwidth requirements.

The biological background to this study is presented in Section 2. The chip implementation is presented in Section 3, and a detailed description of our auto-adaptive pixel is provided in Section 4. The method of characterization used is presented in Section 5, and the results obtained using this method are presented and discussed in Section 6. A comparison between the M2APix and the Delbrück pixel present on the same silicon retina is proposed in Section 7. Some conclusions are reached in the last section.

2. Biological background

Light adaptation of the photoreceptors present in human and animal retinas has been extensively studied in a large number of species since the early 1950s, using both intra- and extracellular methods [7, 9, 11, 30–34]. In all these studies, the relationship between light stimuli and photoreceptor responses has been documented, both in the dark and with background illumination, via an adaptation process described by the so-called Michaelis-Menten equation [8]:

V = Vm · I^n/(I^n + σ^n),     (1)
where V stands for the photoreceptor’s response and Vm is its maximum value; I denotes the light intensity and n usually ranges from 0.7 to 1; σ is the adaptation parameter, corresponding to the light intensity giving half of the maximum response.
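As a minimal numeric sketch of equation (1), with purely illustrative parameter values (the Vm, σ and n below are not measured figures):

```python
def michaelis_menten(I, Vm=1.0, sigma=1.0, n=1.0):
    """Photoreceptor response V = Vm * I**n / (I**n + sigma**n), equation (1)."""
    return Vm * I**n / (I**n + sigma**n)

# By definition of sigma, the response at I = sigma is half the maximum,
# whatever the exponent n (since I**n == sigma**n there).
print(michaelis_menten(1.0, sigma=1.0))  # -> 0.5
```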

The first micro-electrode recordings of rod and cone responses were obtained on saltwater fish (Gerridae) by Svaetichin in 1953 [35]. In his pioneering study, Svaetichin discovered the S-potentials, as they were subsequently called by Oikawa et al. in [36], which stands for “slow potentials”, referring to the slow adaptation process which occurs in the photoreceptor potentials when they are exposed to light flashes against a steady background.

However, the first mathematical description of the cone response given by equation (1) was provided by Naka and Rushton in the case of the freshwater fish (Cyprinidae) [7]. Equation (1) with n = 1 is therefore also known as the Naka-Rushton law. The same model was subsequently validated and applied to turtles’ cones by Baylor et al. [30] and to monkeys’ cones by Boynton and Whitten [31], who introduced the exponent n < 1 for the first time. Many studies were then carried out on vertebrates and invertebrates, all confirming equation (1) with various values of n and sometimes with different interpretations of the adaptation parameter σ (in the salamander [9], gecko [32], frog [33], locust, fly and dragonfly [34], for instance, and in the human fovea [11]).

Figure 1 shows the responses of dark- and light-adapted red cone photoreceptors recorded intracellularly in the retina of the turtle (Pseudemys scripta elegans) by Normann and Perlman [10]. As can be seen from this figure, the function V(I) defined in (1) gives rise in the Log(I) domain to curves with a fairly smooth “S” shape (continuous curves), where the slope of the “S” is given by the value of n (n = 1 in that case) and the lateral shift by the value of σ.

Based on the S-shaped curves shown in Fig. 1, two main features of the light-adaptation behavior can be described by the incident-light model [37]:

  • as the background lighting changes, the entire S-shaped curve shifts along the light intensity axis, which corresponds to a change in the sensitivity of the photoreceptor in the neighborhood of the background light. In fact, after reaching a peak value caused by an increase/decrease in the intensity of the light (data points), the potential V gradually returns to a steady-state value, reflecting its adaptation to the background. This decrease/increase in V corresponds to a “slow” increase/decrease in the parameter σ (see equation (1));
  • as the background illumination increases, the operating point of the photoreceptor increases correspondingly (small horizontal lines in Fig. 1), which means that, because of the non-linearity of the curve, the response to a given increment/decrement in the stimulus becomes smaller/larger at higher background levels. This process, known as “response compression”, was first described by Boynton and Whitten in [31]. The slope of the curve around the operating point defines the contrast sensitivity.
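Both features above can be reproduced directly from equation (1); the following sketch uses illustrative values (n = 1 and arbitrary intensity units):

```python
def V(I, sigma, Vm=1.0):
    # Naka-Rushton response: equation (1) with n = 1
    return Vm * I / (I + sigma)

# (i) Adaptation: sigma tracks the background light, shifting the S-shaped
# curve laterally; the half-maximum response always occurs at I = sigma.
for sigma in (0.1, 1.0, 10.0):
    assert abs(V(sigma, sigma) - 0.5) < 1e-12

# (ii) Response compression: with sigma held fixed (a fast flash, before
# adaptation takes place), the same +10% increment produces a smaller
# response change at a higher operating point.
dV_low = V(0.11, 1.0) - V(0.10, 1.0)
dV_high = V(11.0, 1.0) - V(10.0, 1.0)
print(dV_low > dV_high)  # -> True
```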


Fig. 1 S-shaped curves corresponding to dark- and light-adapted response curves recorded in a red cone of the turtle. The peak of either the incremental or decremental response measured from the dark-adapted potential recorded before the background onset (dashed line) is plotted as a function of the log of the test pulse intensity which elicited each response. The steady hyperpolarization produced by each background lighting condition is given by the intersection between the intensity-response curve and the small horizontal line. The continuous curves were drawn from a single template which describes the function V = Vm · I/(I + σ). Adapted from [10].


3. Chip implementation

In this section, we present our 2-D photosensor array, featuring a silicon retina composed of 24 auto-adaptive pixels of two different kinds. A picture of the chip package, with a zoom on the retina, is presented in Fig. 2.


Fig. 2 (a) The silicon retina in its 9 × 9mm package; (b) Magnified view of the silicon retina composed of 12 Michaelis-Menten pixels presented in this study, and 12 additional Delbrück pixels; (c) Magnified view of 3 Michaelis-Menten pixels giving the photodiode’s dimensions and the inter-receptor distance.


The retina consists of a 2 × 2mm CMOS circuit designed using the 350nm XFAB standard CMOS process. To facilitate the integration of the chip into custom-made Printed Circuit Boards (PCBs), the circuit was encapsulated in a standard 9 × 9mm (LCC24) package with 24 pins. Four rows of auto-adaptive pixels with a 254 μm diameter N-well/P-substrate photodiode were implemented on the chip, as shown in Fig. 2(a). The photodiode is mostly sensitive to red light (λ ≈ 650nm) and its sensitivity Sph is equal to 1.1×10⁻⁸ A·m²/W. The retina is completely self-biased and can be read out by means of a digital serial read-out architecture implemented directly on the chip. The main functional blocks implemented on the chip can therefore be summarized as follows:

  • two rows of six Michaelis-Menten auto-adaptive pixels, which we called M2APix (see Section 4 for details);
  • a low-pass filtered current-averaging cell and the reference voltage, required by the Michaelis-Menten architecture (see Section 4);
  • a bias generator providing the polarization currents required for the circuit to operate properly;
  • a reference voltage for the analog-to-digital converter (ADC);
  • a digital serial interface, which includes the digitizing of the pixel output signals via a 10-bit ADC, and a direct serial communication bus (see details below).
In addition, two rows of six pixels of the Delbrück type (see [16] for more details) were implemented on the same chip.

The photodiodes were aligned on two horizontally staggered rows so that the hexagons fit together like a puzzle, recalling the shape and the arrangement of insects’ hexagonal ommatidia (Fig. 2). This pattern of alignment of the photodiodes, which is particularly suitable for detecting luminous contrasts in the main direction, also makes it possible to sense light variations in any other direction.

The analog signals originating from each pixel were digitized on-chip so that they could be directly processed by an external microprocessor or microcontroller, saving the power consumption and the computational resources of the latter for further data processing. Since the pixel type is selected by a digital input, only one set of pixels can be converted at a time. The twelve M2APix outputs are low-pass filtered with a cut-off frequency of 300Hz before being digitally converted, giving a minimum sampling frequency of 600Hz in order to prevent the occurrence of aliasing. An integrated DC reference voltage was connected to the ADC as an additional pixel output for testing and calibration purposes. To reduce the LSB voltage, and thus improve the resolution, the dynamic input range of the ADC can be reduced by using an external voltage source. A synchronous direct-connection protocol similar to that used in the artificial compound eye CurvACE [38] was adopted, as it provides a very simple, robust solution.
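The conversion figures quoted in the abstract (10-bit ADC, 2.35mV LSB) can be cross-checked with a one-line computation; note that the implied full-scale range of about 2.4V is our inference, not a value stated in the text:

```python
# Back-of-the-envelope check: a 10-bit ADC with a 2.35 mV LSB implies a
# reduced input range of about 2.4 V (the exact range is our inference).
n_bits = 10
lsb = 2.35e-3                  # volts per ADC code
adc_range = lsb * 2**n_bits    # implied full-scale input range
print(round(adc_range, 3))     # -> 2.406 (volts)
```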

A functional diagram of the interface protocol is given in Fig. 3. For the serial communications, an internal state machine similar to that adopted in the CurvACE sensor [38], working at a maximum frequency of 1MHz, was used to transfer the data to an external device.


Fig. 3 Block diagram of the retina’s serial synchronous read-out interface (see [39]).


4. M2APix: Michaelis-Menten auto-adaptive pixel

In this section, we present our novel auto-adaptive pixel implementing the Michaelis-Menten model in analog VLSI, as described in Section 2. The theoretical basis of the analog circuit is first presented, and an example of the auto-adaptive pixel’s response is then given to illustrate the behavior of the M2APix and show how the pixel is related to S-shaped curves such as those presented in Fig. 1.

4.1. Circuit description

The block scheme depicted in Fig. 4(a) gives an overview of the Michaelis-Menten pixel implementation. All the blocks in the area delimited by the dashed lines belong to a single pixel.


Fig. 4 (a) Block diagram of the M2APix: Michaelis-Menten Auto-adaptive Pixel. The blocks in the dashed-line area, which are replicated 12 times, belong to a single pixel, whereas the two blocks outside the dashed-line area are common to all twelve pixels. (b) Hardware implementation in VLSI of an elementary auto-adaptive pixel (photodiode and current normalizer), the output signal of which is noted Iouti. A switch S can be used to select either I0i as the mean current Imeani provided by the built-in averaging circuit, or an external current Iexti provided by an external circuit. (c) Hardware implementation in VLSI of the filtering and averaging circuit computing the mean current of the 12 mirrored photodiode currents (I′phi) produced by the 12 normalizer circuits.


These blocks are therefore replicated twelve times, whereas the two blocks outside the dashed-line area are common to all twelve pixels.

To implement the Michaelis-Menten function in (1), we adopted the current normalizer model presented in [40] (Chapter 6, pp. 148–150), with an arbitrary number of current inputs, and patented in [41] in the context of photosensing. In the present application, two current inputs are required: one for the photodiode current Iphi and one for the current I0i corresponding to the average illuminance. The functional scheme of the current normalizer is presented in Fig. 4(b). A switch S can be used to select either I0i as the mean current Imeani provided by the built-in averaging circuit, or an external current Iexti provided by an external circuit. In what follows, we take I0i = Imeani.

The scheme adopted here improves the functioning of the current normalizer by adjusting the Vref voltage to a different value from that of Vdd (3.3V). The auto-adaptive pixels we designed work efficiently in a wide range of luminosities corresponding to a photodiode current ranging from about 20 pA to 20 μA. The Vref voltage optimizes the functioning of the system at low currents by preventing the current source Mb transistor from saturating.

Operating principle of the normalizer

If the transistors M1,..., M4 have the same dimensions and are working in their sub-threshold region, the current output Iouti can be expressed as follows:

Iouti = Ib · Iphi/(Iphi + Imeani),     (2)
where Ib = 50nA and the index i = 1, ..., 12 identifies the pixel. For the sake of simplicity, we will drop the index in what follows.

To obtain the same auto-adaptation to light as that which occurs in animals’ eyes, Imean has to be a representative value of the background luminosity perceived by the artificial retina (see Sections 1 and 2). Accordingly, the computation of Imean must reflect only the low-frequency changes in the light perceived by all the photoreceptors in the retina. The current Imean is therefore the average value of copies of all 12 photodiode currents (I′phi) filtered with a first-order low-pass filter. As shown in Fig. 4(c), the low-pass filter is provided by a gm-C structure using an operational transconductance amplifier (OTA) Gm and an external capacitor Cm. In particular, if the external capacitance is set at 100nF, as was done in our tests, the cut-off frequency will be 150mHz.
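The two figures above can be tied together through the first-order gm-C relation f_c = Gm/(2π·Cm); the implied OTA transconductance is not quoted in the text, so the sketch below is only a consistency check under that standard relation:

```python
import math

# gm-C first-order low-pass: f_c = Gm / (2*pi*Cm). With the external
# Cm = 100 nF used in the tests and the stated 150 mHz cut-off, the
# implied OTA transconductance Gm is ~94 nS (our inference).
Cm = 100e-9            # external capacitor, farads
fc = 0.150             # stated cut-off frequency, hertz
Gm = 2 * math.pi * fc * Cm
print(f"{Gm:.3g}")     # -> 9.42e-08 (siemens)
```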

As shown in Fig. 4(a), the output current of the normalizer is converted into a voltage via a high gain transimpedance amplifier (TIA). This high-factor current-to-voltage conversion Rf is obtained using a low-transconductance OTA in the feedback loop of an operational amplifier stage. The output voltage Vout can therefore be expressed as follows:

Vout = Rf · Iout,     (3)
where Rf is set at 17.5MΩ via the OTA transconductance.

To prevent the occurrence of aliasing due to the sampling frequency of the digital conversion, a first-order low-pass filter with a gm-C structure is added to each pixel. A cut-off frequency of 300Hz is achieved by means of a low-transconductance OTA and an internal capacitance.

Lastly, a voltage follower helps the pixel to drive the input sampling capacitance of the ADC.

The output pixel voltage can then be written as follows:

Vout = Rf · Ib · Iph/(Iph + Imean) + VBG,     (4)
where VBG ≈ 2.3V denotes the band-gap voltage due to the intrinsic functioning of the various stages.
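As a numeric sanity check of equation (4), using the component values given above (Rf = 17.5MΩ, Ib = 50nA, VBG ≈ 2.3V):

```python
# Numeric model of the pixel transfer function in equation (4),
# using the component values given in the text.
Rf = 17.5e6   # feedback resistance, ohms
Ib = 50e-9    # bias current, amperes
VBG = 2.3     # approximate band-gap offset, volts

def v_out(i_ph, i_mean):
    return Rf * Ib * i_ph / (i_ph + i_mean) + VBG

# Fully adapted pixel (i_ph == i_mean): output sits at Rf*Ib/2 + VBG,
# whatever the luminosity.
print(round(v_out(1e-9, 1e-9), 4))    # -> 2.7375 (volts)
# A sudden +1-decade step (i_ph = 10 * i_mean, before Imean adapts):
print(round(v_out(10e-9, 1e-9), 3))   # -> 3.095 (volts)
```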

4.2. M2APix response

The fact that the term Imean in equation (4) corresponds to the average luminosity constitutes a key point in the adaptive behavior of the pixel.

Let us assume that in the absence of any optical lenses placed on the retina, all the pixels are exposed to the same light intensity. If no changes or only very slow changes in the luminosity occur, all the photodiode currents and their average will be identical. Therefore by substituting Imean = Iph into (4), the steady-state value of the pixel’s output can be obtained:

Vout0 = Rf · Ib/2 + VBG,     (5)
which is a constant value depending only on the operating current Ib and not on the photodiode currents.

This feature is the main difference with respect to the biological findings, which, on the contrary, show a logarithmic increase in the steady-state response with respect to the luminosity, giving rise to the so-called “response compression” (see Section 2). In fact, while the contrast sensitivity of the OPL varies depending on the average luminosity (see for instance the slope around the small horizontal line of the full-triangle curve in comparison with the empty-triangle one in Fig. 1), our silicon retina shows the same contrast sensitivity whatever the average luminosity. In other words, an object showing a contrast of 10% will generate the same signal amplitude at the output of the M2APix under both low and high luminosity levels.
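This invariance follows directly from equation (4) and can be checked numerically; the sketch assumes a fast step (Imean still ≈ Iph0, since the rise time is much shorter than the adaptation time constant):

```python
# Contrast-invariance sketch: a fast step of Michelson contrast c from a
# background i_ph0 yields the same output amplitude at any luminosity.
Rf, Ib, VBG = 17.5e6, 50e-9, 2.3

def v_out(i_ph, i_mean):
    return Rf * Ib * i_ph / (i_ph + i_mean) + VBG

def peak_amplitude(c, i_ph0):
    i_phi = (1 + c) / (1 - c) * i_ph0   # inverse Michelson formula
    return v_out(i_phi, i_ph0) - v_out(i_ph0, i_ph0)

# The same 10% contrast at a 1 nA and at a 1 uA background:
print(round(peak_amplitude(0.10, 1e-9) * 1e3, 2))  # -> 43.75 (mV)
print(round(peak_amplitude(0.10, 1e-6) * 1e3, 2))  # -> 43.75 (mV)
```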

In order to explain this behavior more clearly and show how it is related to S-shaped curves such as those presented in Fig. 1, an example of the pixel’s response was plotted as shown in Fig. 5(a).

At the time t0, all the photodiode currents and the pixel’s output are stabilized at Iph0 and Vout0, respectively. Then, after the time Ts, a step change in the luminosity occurs in front of the silicon retina, making all the photodiode currents “rapidly” change from Iph0 to Iphi (red line), while the average current changes much more slowly (magenta line). During the stabilization of the average current, the pixel’s output first increases from Vout0 to the peak value Vouti, and then decreases until it reaches its stable baseline value Vout0 (blue line) again. We take rise time (Tr) to denote the time taken by Vout to go from Vout0 to 90% of (Vouti − Vout0), and fall time (Tf) the time taken by the signal to go from Vouti to 90% of (Vout0 − Vouti). It can be shown that Tr and Tf mainly depend on the photodiode and transistor time constants and on the averaging block’s time constant (τm = Cm/Gm), respectively.

As Tr ≪ τm, it can be assumed that when Vout = Vouti, Imean has not yet changed (Imean ≈ Iph0), independently of Iphi. Therefore, the points defining the S-shaped curve correspond to the peak values Vouti reached for all the step values Iphi starting from the same initial value Iph0 (Fig. 5(b)). It is worth noting that as long as Iphi remains within ±1 decade of Iph0, the pixel shows a logarithmic sensitivity to changes in the lighting conditions (see the sloping part of the curve around the operating point), while the sensitivity decreases drastically in response to greater changes in the light.

Thanks to the intrinsic properties of the normalization, the peak values depend only on the ratio Iphi/Iph0, resulting in a horizontal shift of the S-shaped curve depending on Iphi, in the same way as the curves in Fig. 1.

In addition, as Iph is linearly proportional to the luminous intensity, we can assume the presence of a luminous contrast defined by the Michelson formula: ci = (Iphi − Iph0)/(Iphi + Iph0). Substituting the inverse of this formula into equation (4), i.e. Iphi = (1 + ci)/(1 − ci) · Iph0, we obtain:

Vouti = Rf · Ib · (ci + 1)/2 + VBG.     (6)

The pixel can therefore be said to give a linear contrast response, and the contrast resolution is given by the coefficient of ci in equation (6).
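With the component values given earlier, the contrast resolution coefficient can be evaluated directly; relating it to the ADC LSB quoted in the abstract (the last figure is our inference from those two numbers):

```python
# Contrast resolution implied by equation (6): the output is linear in
# the Michelson contrast c, with slope Rf*Ib/2.
Rf, Ib, VBG = 17.5e6, 50e-9, 2.3

def v_peak(c):
    return Rf * Ib * (c + 1) / 2 + VBG   # equation (6)

k = Rf * Ib / 2                          # output volts per unit contrast
print(round(k, 4))                       # -> 0.4375

# One 2.35 mV ADC code (the LSB from the abstract) then corresponds to a
# contrast step of roughly 0.54% (our inference, not a stated figure).
print(round(2.35e-3 / k * 100, 2))       # -> 0.54 (percent)
```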

Lastly, an AC noise simulation was performed with a white noise at the input (corresponding to the shot noise of the photodiode), to obtain the Root Mean Square (RMS) of the output noise and consequently the minimum detectable contrast for different values of the average luminosity, i.e. the DC photodiode current. The RMS values are obtained by integrating the output noise over the [10⁻⁵, 10⁸] Hz band. The minimum detectable contrast can be defined as the contrast that gives rise to a transient response of the output signal equal to ±6 times the RMS noise.

Figure 6 shows the RMS of the simulated output noise (blue) and the minimum detectable contrast (red) with respect to the photodiode current. The noise decreases as the DC photodiode current increases. At very low background luminosity, the noise is dominated by the input white noise of the photodiode, which is amplified by the gain of the circuit. At higher luminosity, this gain is smaller, so the simulated input noise becomes negligible and the output noise is nearly equal to the transistors’ noise. In any case, the RMS values obtained are always very low compared to the output variations (Vouti − Vout0) in the transient response, as shown in the simulated response in Fig. 5. Consequently, the minimum detectable contrast is very low over the entire operating range of the average luminosity, varying from ±1.1% at 1 Lux to ±0.4% at 10⁵ Lux.
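The definition of the minimum detectable contrast can be inverted into a closed form: from equation (6), the transient amplitude for a contrast c is Rf·Ib·c/2, so the contrast whose response equals 6 times the RMS noise is 6·RMS/(Rf·Ib/2). The RMS value used below is an illustrative back-calculation, not a measured figure:

```python
# Inverting the minimum-detectable-contrast definition:
# c_min = 6 * RMS / (Rf*Ib/2), since the transient amplitude for a
# contrast c is Rf*Ib*c/2 (from equation (6)).
Rf, Ib = 17.5e6, 50e-9
k = Rf * Ib / 2                     # output volts per unit contrast

def min_contrast(rms_noise):
    return 6 * rms_noise / k

# The reported +/-0.4% at 1e5 Lux would correspond to an RMS noise of
# roughly 0.29 mV (an illustrative back-calculation).
print(round(min_contrast(0.29e-3) * 100, 2))  # -> 0.4 (percent)
```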


Fig. 6 Root Mean Square (RMS) of the simulated output noise (blue) and corresponding minimum detectable contrast (red) with respect to the photodiode current. The RMS values are obtained by integrating over the [10⁻⁵, 10⁸] Hz band the output noise obtained with an AC noise simulation which takes into account all the transistor noises and a white noise for the input photodiode. The minimum detectable contrast can be defined as the contrast that gives rise to a transient response of the output signal equal to ±6 times the RMS noise.


5. Method of characterization

In the studies presented in Section 1, the light adaptation and contrast sensitivity of silicon retinas were often tested by applying a series of lighting steps (AC light) in addition to various background lights (DC light). As a result, the pixels’ responses have often been described in terms of the stimulus intensity, and direct comparisons can therefore be made with the biological findings (see Section 2). However, the method used to characterize pixels’ responses is sometimes not clear or has not even been described at all, which makes comparisons with other results very difficult.

A standard method is presented here for accurately characterizing pixels’ responses to luminous changes of up to ±3 decades in a 7-decade mean luminosity range using a single light source, which we have called the Lighting Box (Fig. 7).

The Lighting Box consists of a 50 × 25 × 25mm 3-D printed box with a 10 × 10mm aperture. The Lighting Box also includes a Printed Circuit Board (PCB) which accurately controls the light intensity of a red Light Emitting Diode (LED) (TLWR7600, Vishay Semiconductors) by means of a specific digital current driver (ADN8810, Analog Devices). The PCB with the LED and the PCB supporting the retina to be characterized are fixed to each side of the box so that both the LED and the silicon retina can fit into the aperture, facing each other inside the box (see Fig. 7(b)). The box also contains an optical filter support, which can be inserted between the LED and the retina in order to attenuate the LED’s intensity and thus characterize the pixel’s response in the illuminance range of interest.

The following main tools were used for this purpose:

  • The Lighting Box described above.
  • A NI Single-Board RIO-9683 Acquisition Device provided by National Instruments. The board features a 400MHz real-time processor with 128 MB DRAM and includes an integrated real-time controller and a 2-Million-Gate reconfigurable FPGA programmed using LabVIEW software including Real-Time and FPGA modules.
  • An Explorer 16 development board provided by Microchip, which includes a dsPIC 33FJ128GP804 micro-controller working at a sampling rate of 2kHz. This device was programmed using Matlab/Simulink with a toolbox specifically developed for use with Microchip dsPIC micro-controllers.
The block diagram in Fig. 8 gives an overview of the hardware setup and communication flow involved in the method of characterization.


Fig. 8 Block diagram of the hardware setup and communication flow involved in the pixel characterization procedure.


Characterization procedure

The 12 pixels’ output signals were acquired while they were being exposed to step changes in the luminous intensity, as described in Section 4.2. During the overall acquisition process, the FPGA acts as the master component handling and synchronizing the communications with the Lighting Box in order to drive the intensity of the LED, and with the Explorer 16 in order to acquire data from the chip. Based on Fig. 5 and Fig. 8, the i-th step in the procedure can be described as follows:

  1. The FPGA sends the Lighting Box a packet of four bytes (Led Data) containing the information about the initial value (ILED0) and the step value (ILEDi) of the LED intensity.
  2. As soon as the Lighting Box receives the packet, it sets the LED intensity at the initial value ILED0.
  3. After waiting for a time (Twait), which is usually the time required for the pixel output to reach its steady-state value, the FPGA sends Explorer 16 a trigger signal (Trigger 1) making it start sending the data acquired. The steady-state value of the pixels is thus acquired before the lighting change occurs.
  4. A short instant later (Ts), the FPGA sends the Lighting Box a second trigger signal (Trigger 2) making it switch the LED intensity from ILED0 to ILEDi (see Fig. 5).
  5. The Explorer 16 sends the FPGA the appropriate number of samples (Pixel Data), depending on the sampling frequency.
  6. The FPGA stores the acquired data on an FTP server and goes back to step 1 to deal with the next pair ILED0, ILEDi.
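The packet exchange in step 1 can be sketched in a few lines. The layout below is an assumption for illustration only: the paper states only that "Led Data" is a 4-byte packet carrying the initial and step LED values, so here the two values are packed as two big-endian 16-bit words holding 12-bit DAC codes (the ADN8810 is a 12-bit current-output DAC).

```python
import struct

def pack_led_data(code_0: int, code_i: int) -> bytes:
    """Build a 4-byte 'Led Data' packet carrying the initial (ILED0) and
    step (ILEDi) LED current codes. The two-big-endian-words layout is an
    assumption; the text only states that the packet is 4 bytes long."""
    for code in (code_0, code_i):
        if not 0 <= code < 4096:  # the ADN8810 driver accepts 12-bit codes
            raise ValueError("DAC code out of 12-bit range")
    return struct.pack(">HH", code_0, code_i)

def unpack_led_data(packet: bytes) -> tuple:
    """Recover (code_0, code_i) on the Lighting Box side."""
    return struct.unpack(">HH", packet)
```

On reception (step 2), the Lighting Box would unpack both words, program the driver with code_0 immediately, and hold code_i until Trigger 2 arrives (step 4).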
The current-irradiance characteristic of the LED was assessed by measuring the LED’s irradiance with a radiometer (ILT1700, International Light Technologies). To obtain a good idea of what the photodiodes perceive, the radiometer was placed in front of the LED at the same distance as the chip. Therefore, without any loss of generality, ILED can be taken to stand for the LED’s irradiance instead of the LED current. Since the photodiode current is linearly proportional to the irradiance via the sensitivity Sph, as defined in Section 3, equation (4) does not have to be changed even if we take Vout to be a function of ILED.

As we were interested in characterizing our auto-adaptive pixel over 7 decades of photodiode current (see Section 4.1) using a LED covering three decades, four neutral optical filters (NG filters, Schott) were used: the 1 and 2mm NG3 type for 1- and 2-decade attenuation, respectively, and the 2 and 3mm NG9 type for 3- and 4-decade attenuation, respectively. To obtain the full set of S-shaped curves within ±3 decades about Iph0 (see Fig. 5(b), for example), a complete pixel characterization was carried out by merging the data obtained with the various optical filters. In particular, as the filters provide contiguous 1-decade attenuation steps, the S-shaped curves were obtained by merging and averaging the peak values Vouti acquired at the same effective initial irradiance with several filters. For instance, the S-shaped curve centered on 0.1 W/m² (Fig. 5(b)) was obtained by merging the peak values obtained with ILED0 = 0.1mA without any filter, ILED0 = 1mA with a 1-decade attenuation filter, ILED0 = 10mA with a 2-decade attenuation filter, and ILED0 = 100mA with a 3-decade attenuation filter.
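The merging step above amounts to grouping peak values by *effective* irradiance, i.e. the LED drive attenuated by the filter's decades, and averaging within each group. A minimal illustrative helper (not the authors' code):

```python
from collections import defaultdict

def merge_s_curve(measurements):
    """Average peak values acquired at the same effective irradiance
    through different neutral-density filters.
    measurements: iterable of (led_drive, attenuation_decades, peak_volts);
    the effective drive is led_drive * 10**(-attenuation_decades).
    Returns {effective_drive: mean_peak_volts}, sorted by drive."""
    groups = defaultdict(list)
    for drive, decades, v in measurements:
        key = round(drive * 10 ** (-decades), 12)  # tame float-key jitter
        groups[key].append(v)
    return {k: sum(vs) / len(vs) for k, vs in sorted(groups.items())}
```

For example, drives of 0.1 (no filter), 1 (1-decade filter) and 10 (2-decade filter) all collapse onto the same 0.1 effective-irradiance point and their peaks are averaged.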

6. M2APix characterization results

In this section, we present and discuss the experimental data obtained when our auto-adaptive silicon retina was exposed to changes in the light, as described in Section 5.

To characterize the pixel’s response in the light-adapted condition, the LED light changes were triggered when the pixel’s output signal reached its steady-state value. In other words, by referring to the steps in the characterization procedure presented in Section 5, the waiting time Twait in Step 3 was equal to 5Tf, where Tf denotes the fall time, as described in Section 4.2.

6.1. M2APix transient response

As mentioned in Section 4.2, we were interested in measuring the peak values Vouti of the twelve pixels for each pair of photodiode currents Iph0, Iphi corresponding to a step change in the light intensity (Fig. 5). Pixel output signals were recorded with various pairs of LED irradiance values ILED0, ILEDi in the [10⁻⁵, 10²] W/m² range at a sampling frequency fs = 2kHz, in order to accurately determine the instant at which the peak value occurred, using the method described in Section 5. The set of all the mean values obtained when ILEDi = ILED0 corresponds to the steady-state response of the pixel and constitutes an important auto-adaptive characteristic of the M2APix, which will be discussed below.

Some examples of the pixel’s response with ILED0 = 1 W/m² are presented in Fig. 9.


Fig. 9 (a) Example of M2APix responses to step changes in the LED irradiance (ILEDi), all starting from the same initial irradiance (ILED0 = 1 W/m²). (b) Zoom on the temporal pixel responses shown in Fig. 9(a), between 0 and 50ms. To distinguish more clearly between the steady-state responses and the transient responses, the step changes were delayed by 10ms (Ts) after the start of the acquisition procedure. The black circles mark 90% of the peak values Vouti and give qualitative information about the rise time (Tr). The contrast values are given by the Michelson formula ci = (ILEDi − ILED0)/(ILEDi + ILED0).


Figure 9 shows the auto-adaptation of the pixel’s time response. In line with the simulated example presented in Section 4.2, the pixel’s output rapidly shifts from the steady value Vout0 to a peak value Vouti and then slowly returns to the steady value regardless of the contrast, which was obtained here by applying step changes in ILEDi. However, as shown in Fig. 9(b), the rise time (Tr), namely the time required to reach 90% of the peak value (black circles), is nearly constant for positive contrasts (ILEDi > ILED0), but depends on the contrast for negative ones (ILEDi < ILED0).

Figure 10 shows the mean rise time over the 12 pixels plotted with respect to the contrast. Each point corresponds to the time (Tr) required to reach 90% of the peak value Vouti, as depicted in Fig. 9(b), for every step change (ILED0, ILEDi).


Fig. 10 Average rise time with respect to luminous contrast. Each point corresponds to the time (Tr) required to reach 90% of the peak value Vouti, as depicted in Fig. 9(b). Each color refers to a different initial irradiance value ILED0: red about 0.001 W/m², pink about 0.01 W/m², dark blue about 0.1 W/m², light blue about 1 W/m², cyan about 10 W/m², green about 100 W/m².


Figure 10 shows that the rise time depends on both the contrast and the average luminosity. This coupling between contrast and rise time is a direct consequence of the current-mode operation of the system. A high negative contrast corresponds to a normalizer output current tending to zero, as can be seen by taking Iphi ≪ Imeani in equation (2). The time constants of the output transistors are therefore higher and the output signal is slower in this case. Furthermore, the time constants of the photodiode and the transistors increase at low currents, i.e. at low average luminosity, which explains the higher rise times of the pink and red data points. It is worth noting that the mean rise time for positive contrasts at medium-to-high average luminosity (blue and green data points) is about 1ms because of the anti-aliasing low-pass filter (Fc = 300Hz, see Section 4.1).

The rise time (Tr) and the fall time (Tf) depend on the bandwidth of the output signal, which is determined mainly by the time constants of the photodiode and transistors and by the time constant of the current-averaging block (τm). This bandwidth can be adjusted via the external capacitor Cm, since τm = Cm/Gm (see Section 4.1). As the fall time Tf determines how long the pixel’s output takes to reach the maximum of its contrast sensitivity, i.e. the sensitivity at steady state, it may be useful to make Tf as small as possible. If we consider an irradiance higher than 0.1 W/m² (about 100 lux), the fall time Tf can reasonably be reduced to 0.1s by changing the value of the external capacitor Cm to 5nF, because the rise time Tr is lower than 0.01s for any contrast in this irradiance range. In fact, Tf should be about one decade greater than Tr to guarantee the correct functioning of the pixel in this luminosity range.
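As a numeric illustration of the τm = Cm/Gm scaling, here is the computation with a placeholder transconductance; the Gm value below is an assumption for illustration only (the actual value belongs to the circuit of Section 4.1 and is not reproduced here):

```python
def averaging_time_constant(cm: float, gm: float) -> float:
    """tau_m = Cm / Gm for a Gm-C current-averaging block."""
    return cm / gm

# GM is a placeholder assumption, used only to show the scaling with Cm:
GM = 1e-8                                      # siemens (assumed)
tau_20n = averaging_time_constant(20e-9, GM)   # larger Cm -> slower adaptation
tau_5n = averaging_time_constant(5e-9, GM)     # Cm = 5 nF, as suggested above
```

Whatever the actual Gm, dividing Cm by four divides τm (and hence the fall time Tf it governs) by the same factor, which is the design lever discussed in the text.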

6.2. M2APix S-shaped and steady-state response

The peak values obtained in response to step changes (ILEDi) starting from several initial irradiances (ILED0) are presented in Fig. 11, with respect to (a) the LED irradiance and (b) the Michelson contrast ci, defined as ci = (ILEDi − ILED0)/(ILEDi + ILED0).
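The Michelson contrast is a one-liner; for positive irradiances it is confined to (−1, 1), which is why a 6-decade span of ILEDi maps onto a bounded abscissa when plotted against contrast. A trivial helper for reference:

```python
def michelson_contrast(i_step: float, i_initial: float) -> float:
    """Michelson contrast c_i = (ILEDi - ILED0) / (ILEDi + ILED0).
    Tends to +1 for large positive steps and to -1 for large negative
    ones, whatever the absolute irradiance level."""
    return (i_step - i_initial) / (i_step + i_initial)
```

For instance, a 3-fold increase in irradiance gives a contrast of 0.5, and a 3-fold decrease gives −0.5, independently of the starting level.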


Fig. 11 (a) S-shaped curves and steady-state responses of the 12 pixels to LED irradiance changes in a 7-decade range. Each color refers to a different initial irradiance value ILED0 (red about 0.001 W/m², pink about 0.01 W/m², dark blue about 0.1 W/m², light blue about 1 W/m², cyan about 10 W/m², green about 100 W/m²; same colors and markers as in Fig. 10), and the data points correspond to the average peak value Vouti reached by the 12 pixels in response to a step change in the irradiance ILEDi, as shown in Fig. 5. The steady-state response (black points) was obtained with ILEDi = ILED0 at several values of ILED0, whereas the initial values of the irradiance ILED0 are indicated by large black circles. The shaded areas were obtained by plotting the minimum to maximum values of the mean pixel output voltages. The average dispersion of each curve (σmean) ranged from 37.2mV, in the case of the green one, to 85.4mV, in that of the red one. (b) Average peak response of the 12 pixels versus the contrast. The various curves correspond to the S-shaped curves in Fig. 11(a) (same colors and markers). The contrast is given by the Michelson formula ci = (ILEDi − ILED0)/(ILEDi + ILED0).


By comparing the S-shaped curves in Fig. 11(a) with those in Fig. 1, it can first be seen that our silicon retina shows, in qualitative terms, the same adaptation process as that observed in the OPL. In particular, the pixels auto-adapt to the average light in a 7-decade range while keeping a sensitivity of nearly 600mV per decade of intensity in a range of about 2 decades (corresponding to the linear part of the curves on the log scale). Each S-shaped curve obtained was well defined within a range of 6 decades, which means that the circuit did not deviate, but remained consistent with the model, even when the light changed suddenly by anything up to ±3 decades, which would correspond, for instance, to shifting from a very dark overcast sky to direct sunlight (see the lower half of the dark-blue curve). In addition, in line with equation (5), the steady-state response (black points) was almost constant throughout the 7 decades, giving the same contrast sensitivity whatever the average luminosity. Lastly, upon setting the high reference of the ADC at 2.4V (see Section 3 for further details about the integrated ADC), we obtained a LSB voltage of VLSB = 2.4/2^10 = 2.35mV, which is half of the voltage we would have obtained by setting the high reference of the ADC at Vdd. As the pixel output signal was limited to [1, 2.4]V, we obtained an effective resolution of the analog-to-digital conversion of 1.4V/2.35mV ≈ 596 ≈ 2^9.22, i.e. about 9 bits.
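The resolution figures above follow directly from the ADC reference and the bounded output range; reproducing the arithmetic with the values from the text:

```python
import math

def adc_figures(v_ref: float, n_bits: int, v_min: float, v_max: float):
    """Return (LSB voltage, effective number of bits) when the signal
    only spans [v_min, v_max] of the converter's full range."""
    v_lsb = v_ref / 2 ** n_bits
    effective_bits = math.log2((v_max - v_min) / v_lsb)
    return v_lsb, effective_bits

# 10-bit ADC, 2.4 V high reference, pixel output confined to [1, 2.4] V:
v_lsb, enob = adc_figures(2.4, 10, 1.0, 2.4)
# v_lsb = 2.34 mV (rounded to 2.35 mV in the text), enob ≈ 9.2 bits
```

This is the benefit of the bounded Michaelis-Menten output range: the reference can be shrunk to the signal span, halving the LSB voltage at no cost.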

The fact that the left part of the pink and red S-shaped curves is lower than that of the other ones was due to the dark current of the photodiodes. The photodiode current can be divided into two components: the background current Ilbg, which depends on the irradiance, and the dark current Idark, which has a low, constant value that does not depend on the irradiance. Therefore, as long as Ilbg ≫ Idark, we can assume that Iph ≈ Ilbg and equation (4) still holds if we substitute ILED for Iph, whereas when Ilbg ≪ Idark, we have Iph ≈ Idark, and Vout takes a constant value regardless of ILED.

In addition, the dispersion of the S-shaped curves from one pixel to another can be seen in the shaded areas in Fig. 11(a). It is worth noting that this dispersion seems to be higher in the case of positive contrasts. This pattern is mainly due to the generation of the current Ib and not to the pixels themselves. As Ib is entirely conveyed to the normalizer’s output current Iout in the case of positive contrasts, its dispersion is directly transmitted as well. This behavior could be improved by reducing the dispersion of the Ib generation cell.

In Fig. 11(b), the data points in Fig. 11(a) have been plotted versus the contrast. It can be seen from this figure that the pixel’s response decreased almost linearly with the luminous contrast regardless of the average luminosity, as predicted by equation (6). The non-linearity observed at highly negative contrasts was due to the non-linearity of the VLSI current-to-voltage converter. The contrast resolution Kc can be defined as the coefficient of ci in (6) divided by 100, i.e. Kc = (Rf·Ib/2)/100 = 4.4mV/%. As the noise level is about 5mV (i.e. 2×LSB), contrasts as low as 2% can be detected. In addition, as the LSB voltage VLSB is about 2.35mV, it is worth noting that a 1% contrast gave rise to a change of about 2 LSB.
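The noise-limited contrast floor quoted above can be checked numerically; Kc and the noise level are taken from the text, while the helper itself is only illustrative:

```python
def noise_floor_contrast(kc_mv_per_percent: float, noise_mv: float) -> float:
    """Contrast (in %) whose voltage response just equals the noise
    level, given the contrast resolution Kc in mV per % of contrast."""
    return noise_mv / kc_mv_per_percent

# Kc = 4.4 mV/% and ~5 mV of noise (about 2 LSB):
c_floor = noise_floor_contrast(4.4, 5.0)  # ~1.1 %
```

The computed floor of roughly 1.1% is consistent with the text's conservative figure of 2% as the smallest reliably detectable contrast.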

6.3. Faithfulness of the M2APix characterization to the Michaelis-Menten model

The absolute value of the errors observed between the peak values Vouti in Fig. 11(a) and the theoretical values Vouti*, based on the model for the circuit defined in Section 4.1, is presented in Fig. 12. As post-layout simulations of the circuit showed that the current-to-voltage converter gave a non-linear response at low current values, the theoretical values Vouti* were computed by applying a look-up table of the current-to-voltage converter to equation (2), with Iph = ILED instead of using equation (4).
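Applying a look-up table of the converter amounts to piecewise-linear interpolation over simulated (current, voltage) points. A minimal sketch of such a lookup (the actual post-layout table is not given in the paper; the table passed in would come from simulation):

```python
from bisect import bisect_left

def lut_lookup(x: float, xs: list, ys: list) -> float:
    """Piecewise-linear interpolation through the (xs, ys) table.
    xs must be sorted ascending; values outside the table are clamped
    to the end points."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    j = bisect_left(xs, x)
    t = (x - xs[j - 1]) / (xs[j] - xs[j - 1])
    return ys[j - 1] + t * (ys[j] - ys[j - 1])
```

The theoretical Vouti* would then be lut_lookup applied to the normalizer output current of equation (2), with Iph = ILED, instead of the closed-form equation (4).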


Fig. 12 Absolute value of the error between the M2APix responses measured and those predicted by the model with respect to the LED irradiance. As post-layout simulations of the circuit showed that the current-to-voltage converter gave a non-linear response at low current values, the theoretical values Vouti* were computed by applying a look-up table of the current-to-voltage converter to (2), with Iph = ILED instead of using equation (4). (Same color and marker as previous figures)


It can be noted that on each S-shaped curve (each of which is presented in a different color), the difference between the pixel output signals and the model outputs ranges from 0.1% to 8% of the whole output range, showing a good match with the Michaelis-Menten function. The error decreased almost monotonically with the irradiance, possibly owing to a mismatch between the actual response of the current-to-voltage converter and the simulated post-layout response. In addition, the greater error in the case of the red curve (ILED0 = 0.001 W/m²) was due to the photodiode dark current (Idark), as explained above.

7. Comparison between M2APix and Delbrück pixels

In this section, we present a quantitative comparison between the M2APix and the Delbrück pixel [16] implemented in the same silicon retina (see Fig. 2), tested under the same conditions.

Table 1 lists all the values discussed in Section 6 for the M2APix and the corresponding values obtained for the Delbrück pixel.


Table 1. Specifications of the M2APix and Delbrück pixels. The main advantage of the M2APix is that it responds monotonically over a very wide illuminance range without any loss of sensitivity or contrast resolution.

The Delbrück pixel produced non-monotonic responses with respect to the LED intensity at medium-to-high luminosity, giving unreliable responses to changes larger than ±1 decade for ILED0 > 1 W/m². Furthermore, its steady-state response increased with the luminosity, resulting in a higher DC output variation and a lower effective output range. Finally, the adaptation time constant of the M2APix, which determines the bandwidth of the output signal, can potentially be modified externally.

To show how the two types of pixel respond to small contrasts while they are still adapting to the average luminosity, we applied the procedure presented in Section 5, but, unlike in the static case (see Section 6), the pixels were exposed to repeated sequences of 0.5s-long changes in the LED irradiance (corresponding to contrasts ranging from −10 to 10%) while the average irradiance was increased by 1 or 2 decades every 5s.
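Inverting the Michelson formula gives the irradiance step needed for a target contrast; this is how such a stimulus sequence could be generated. A sketch under the stated protocol, with timing omitted and all names hypothetical:

```python
def step_for_contrast(i_mean: float, c: float) -> float:
    """Solve (I - I0)/(I + I0) = c for I: the irradiance producing a
    Michelson contrast c relative to i_mean (requires -1 < c < 1)."""
    return i_mean * (1 + c) / (1 - c)

def stimulus(mean_irradiances: list, contrasts: list) -> list:
    """For each average irradiance (stepped up by decades every 5 s in
    the experiment), emit the (baseline, stepped) irradiance pairs;
    each pair would be held 0.5 s. Timing itself is omitted here."""
    return [(i0, step_for_contrast(i0, c))
            for i0 in mean_irradiances
            for c in contrasts]

# e.g. the +/-2 % and +/-4 % sequence over a 2-decade range of means:
steps = stimulus([0.1, 1.0, 10.0], [0.02, -0.02, 0.04, -0.04])
```

Note that a +2% contrast requires multiplying the irradiance by (1.02/0.98) ≈ 1.041, so small contrasts correspond to quite small relative irradiance steps.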

Figure 13 shows the time responses of one M2APix and one Delbrück pixel when exposed to two different step sequences. Both types of pixel responded quickly to small changes (small contrasts) while adapting to the average luminosity, regardless of the average luminosity and the changes in the luminosity (amounting to 1 or 2 decades in this example). However, the two pixels’ responses presented some remarkable differences:

  • The M2APix (Fig. 13(b)) consistently responded to any change in the light up to ±2 decades, while the Delbrück pixel (Fig. 13(c)) responded asymmetrically to positive and negative changes (see the blue line in the dotted circled area at 10s) and barely responded to a −2-decade change (see the red line in the dotted circled area at 15s).
  • The M2APix always reached the same steady-state value Vout0 independently of the average luminosity, while the Delbrück pixel reached different steady-state values for different average luminosities.
  • The contrast response produced was not the same under light-adapted and light-adapting conditions: it depended on the change in the light. For both types of pixel, even very small contrasts (e.g. a 2% contrast) were accurately detected when the pixel had adapted to the average luminosity (see the blue line in the left-hand zoomed part of Figs. 13(b) and 13(c)). Conversely, when adapting to a sudden large change in the light, the M2APix failed to detect the same 2% contrast (see the blue line in the right-hand zoomed part of Fig. 13(b)) because of the logarithmic compression imposed by the Michaelis-Menten function and the high fall time (Tf). During this fall time (Tf), the contrast sensitivity of the pixel increases, as the slope around the operating point increases when moving from Vouti to Vout0 (see the blue circles in Fig. 5). Under the same conditions, the Delbrück pixel shows a very different response: the first of the two consecutive 2% contrasts is detected with a much higher gain than in the light-adapted condition, while the second one is barely detected (see the blue line in the right-hand zoomed part of Fig. 13(c)).


Fig. 13 (a) Two sequences of small contrasts in a 2-decade irradiance range. Time responses of (b) M2APix and (c) Delbrück pixel [16], implemented in the same silicon retina (see Fig. 2), when exposed to the irradiance sequences in Fig. 13(a). The sequences were obtained by repeating: ±2% and ±4% contrasts (blue line), ±6% and ±12% contrasts (red line). The steps in the sequence were triggered every 0.5 s and the changes in the average irradiance every 5s. For the sake of clarity, the timing of the sequence has been slightly shifted. (The spurious peaks, such as that which occurred at 10.5 s, may have been due to some error in the data transmission)


It is worth noting that the M2APix responded appropriately to a 6% contrast just after a 1-decade change (see the red line in the right-hand zoomed part of Fig. 13(b)), which still corresponds to a good contrast sensitivity under light-adapting conditions. Moreover, such contrast sensitivity can be improved by decreasing the time constant of the current averaging block by reducing the value of the external capacitor Cm (see Section 4.1 for details). In this way, the fall time Tf would be lower and the pixel’s sensitivity would increase faster (see Section 4.2 for details).

8. Conclusion

The results presented here show that our Michaelis-Menten analog auto-adaptive pixel (called M2APix) can adapt automatically to any irradiance from 10⁻⁵ to 10² W/m², which would correspond, if the LED light were green, to illuminances of about 7 × 10⁻³ and 7 × 10⁴ lux respectively, that is, from half moon on a clear night to nearly direct sunlight. At the same time, the results obtained showed that the circuit does not deviate from the model, even when the light suddenly changes by up to ±3 decades, which would correspond, for instance, to shifting from a very dark overcast sky to direct sunlight.

In short, our auto-adaptive pixel can be said to have the following noteworthy features:

  • adaptation to light in a 7-decade range, while remaining sensitive to changes in the light of up to about 2 decades;
  • quasi-constant steady-state response in a 7-decade range: it produces the same contrast response whatever the average luminosity;
  • no circuit deviations from the model within a ±3-decade range of the operating current;
  • constant limited-range responses at any average luminosity, resulting in a lower LSB voltage and therefore in a higher contrast resolution;
  • minimum detectable contrast of 2% in the light-adapted condition and 6% in the light-adapting condition.

Some further improvements could be made to the conversion stage in a future version of the M2APix, by subtracting 1V from the pixel output signal before the conversion or by increasing the resistance Rf of the current-to-voltage converter, so as to extend the conversion range to [0, 2.4]V. This would increase the contrast resolution nearly two-fold, giving a change of about 4 LSB in response to a 1% contrast and a minimum detectable contrast of 1% in the light-adapted condition and 3% in the light-adapting condition.

Since the M2APix makes a satisfying compromise between a high sensitivity and a wide dynamic range, it should provide a useful tool for motion detection and optical flow processing in a very large range of lighting conditions, from half-moonlight to full daylight. Thus, our auto-adaptive silicon retina, or retinas composed of larger arrays of M2APix, could be employed in several fields, from event-based applications to bio-robotics and bio-medical applications.

In the future, we plan to test our silicon retina outdoors with a suitable optical lens both at night and in the daytime, by using it to measure the optic flow, for instance. In particular, we are willing to mount one or more M2APix-based sensors onboard mobile and aerial robots (as in [42]) as aids to navigation under various luminosity conditions, as well as on vehicles in the case of automotive applications.

Acknowledgments

We thank the referees for their fruitful comments. We are most grateful to R. Potheau, A. Calzas and P. Breugnon for their expertise in VLSI designing and testing, to R. Leitel for the packaging and to J. Diperi for his involvement in the mechanical design. We thank F. Roubieu, A. Manecy, F. Colonnier, and R. Goulard for their helpful suggestions and comments during this study, A. Servel and S. Allano for their assistance and J. Blanc for correcting and improving the English manuscript. This research was supported partly by the CNRS (Life Science; Information and Engineering Science and Technology), Aix-Marseille University, the French National Research Agency -ANR- (EVA project under ANR-ContInt grant number ANR-08-CORD-007-04 and IRIS project under ANR-INS grant number ANR-12-INSE-0009). This research was also funded by a PhD grant from ANRT (Association Nationale de la Recherche et de la Technologie) as well as by PSA Peugeot Citroën via the OpenLab agreement with Aix-Marseille University and CNRS entitled “Automotive Motion Lab”.

References and links

1. F. Expert and F. Ruffier, “Flying over uneven moving terrain based on optic-flow cues without any need for reference frames or accelerometers,” Bioinspir. Biomim. (to be published in February 2015).

2. A. Spivak, A. Belenky, A. Fish, and O. Yadid-Pecht, “Wide-Dynamic-Range CMOS Image Sensors - Comparative Performance Analysis,” IEEE Trans. Electron Devices 56, 2446–2461 (2009). [CrossRef]  

3. L. Liu, J. Wiinschmann, N. P. Aryan, A. Zohny, M. Fischer, S. Kibbel, and A. Rothermel, “An ambient light adaptive subretinal stimulator,” in Proc. ESSCIRC ’09 (IEEE, 2009), pp. 420–423. [CrossRef]  

4. D. Drazen, P. Lichtsteiner, P. Häfliger, T. Delbrück, and A. Jensen, “Toward real-time particle tracking using an event-based dynamic vision sensor,” Exp. Fluids 51, 1465–1469 (2011). [CrossRef]  

5. J. Carneiro, S.-H. Ieng, C. Posch, and R. Benosman, “Event-based 3D reconstruction from neuromorphic retinas,” Neural Networks 45, 27–38 (2013). [CrossRef]   [PubMed]  

6. C. Pacoret and S. Régnier, “A review of haptic optical tweezers for an interactive microworld exploration,” Rev. Sci. Instrum. 84, 081301 (2013). [CrossRef]  

7. K. I. Naka and W. A. H. Rushton, “S-potentials from luminosity units in the retina of fish (Cyprinidae),” J. Physiol. 185, 536–555 (1966). [CrossRef]   [PubMed]  

8. L. Michaelis and M. L. Menten, “The Kinetics of Invertase Action (Die Kinetik der Invertinwirkung),” Biochemische Zeitschrift 49, 333–369 (1913).

9. F. S. Werblin, “Adaptation in a vertebrate retina: intracellular recording in Necturus,” J. Neurophysiol. 34, 228–241 (1971). [PubMed]  

10. R. A. Normann and I. Perlman, “The effects of background illumination on the photoresponses of red and green cones,” J. Physiol. 286, 491–507 (1979). [CrossRef]   [PubMed]  

11. W. S. Geisler, “Effects of bleaching and backgrounds on the flash response of the cone system,” J. Physiol. 50, 413–434 (1981). [CrossRef]  

12. S. Laughlin and M. Weckstrom, “Fast and slow photoreceptors - a comparative study of the functional diversity of coding and conductances in the Diptera,” J. Comp. Physiol., A 172, 593–609 (1993). [CrossRef]  

13. C. A. Mead and M. Mahowald, “A silicon model of early visual processing,” Neural Networks 1, 91–97 (1988). [CrossRef]  

14. T. Delbrück and C. A. Mead, “An electronic photoreceptor sensitive to small changes in intensity,” in Adv. Neural Inf. Process. Syst. 1, D. S. Touretzky, ed. (Morgan Kaufmann, 1989), pp. 720–727.

15. K. A. Boahen and A. G. Andreou, “A Contrast Sensitive Silicon Retina with Reciprocal Synapses,” in Adv. Neural Inf. Process. Syst. 4, D. S. Touretzky, ed. (Morgan Kaufmann, 1991), pp. 764–772.

16. T. Delbrück and C. A. Mead, “Analog VLSI Adaptive, Logarithmic, Wide-Dynamic-Range Photoreceptor,” in IEEE Int. Symp. Circuits Syst. (ISCAS ’94), (IEEE, 1994), pp. 339–342. [CrossRef]  

17. K. A. Zaghloul and K. Boahen, “A silicon retina that reproduces signals in the optic nerve,” J. Neural Eng. 3, 257–267 (2006). [CrossRef]   [PubMed]  

18. P. Lichtsteiner, C. Posch, and T. Delbrück, “A 128×128 120 dB 15 μs Latency Asynchronous Temporal Contrast Vision Sensor,” IEEE J. Solid-State Circuits 43, 566–576 (2008). [CrossRef]  

19. K. Shimonomura, S. Kameda, A. Iwata, and T. Yagi, “Wide-dynamic-range APS-based silicon retina with brightness constancy,” IEEE Trans. Neural Networks 22, 1482–93 (2011). [CrossRef]  

20. E. Reinhard and K. Devlin, “Dynamic range reduction inspired by photoreceptor physiology,” IEEE Trans. Vis. Comput. Graphics 11, 13–24 (2005). [CrossRef]  

21. L. Meylan, D. Alleysson, and S. Süsstrunk, “Model of retinal local adaptation for the tone mapping of color filter array images,” J. Opt. Soc. Am. A 24, 2807–2816 (2007). [CrossRef]  

22. N.-S. Vu and A. Caplier, “Illumination-robust face recognition using retina modeling,” in IEEE Int. Conf. Image Process. (ICIP ’09), (IEEE, 2009), pp. 3289–3292.

23. S. Ferradans, M. Bertalmio, E. Provenzi, and V. Caselles, “An analysis of visual adaptation and contrast perception for tone mapping,” IEEE Trans. Pattern Anal. Mach. Intell. 33, 2002–2012 (2011). [CrossRef]  

24. E. Zrenner, “Will Retinal Implants Restore Vision?,” Science 295, 1022–1025 (2002). [CrossRef]   [PubMed]  

25. K. Stingl, K. U. Bartz-Schmidt, D. Besch, A. Braun, A. Bruckmann, F. Gekeler, U. Greppmaier, S. Hipp, G. Hörtdörfer, C. Kernstock, H. Koitschev, A. Kusnyerik, H. Sachs, A. Schatz, K. T. Stingl, T. Peters, B. Wilhelm, and E. Zrenner, “Artificial vision with wirelessly powered subretinal electronic implant alpha-IMS,” Proc. R. Soc. London, Ser. B 280, (2013). [CrossRef]  

26. F. S. Werblin, “Control of Retinal Sensitivity II: Lateral Interactions at the Outer Plexiform Layer,” J. Gen. Physiol. 63, 62–87 (1974). [CrossRef]   [PubMed]  

27. T. Delbrück, “Investigations of visual transduction and motion processing,” Ph.D. thesis, California Inst. Tech., Pasadena, California (1993).

28. F. Expert, S. Viollet, and F. Ruffier, “Outdoor field performances of insect-based visual motion sensors,” J. Field Robot. 28, 529–541 (2012). [CrossRef]  

29. G. Sicard, H. Abbas, H. Amhaz, H. Zimouche, R. Rolland, and D. Alleysson, “A CMOS HDR Imager with an Analog Local Adaptation,” in Int. Image Sensor Workshop (IISW ’13), (2013), pp. 1–4.

30. D. A. Baylor and M. G. F. Fuortes, “Electrical responses of single cones in the retina of the turtle,” J. Physiol. 207, 77–92 (1970). [CrossRef]   [PubMed]  

31. R. M. Boynton and D. N. Whitten, “Visual Adaptation in Monkey Cones: Recordings of Late Receptor Potentials,” Science 170, 1423–1426 (1970). [CrossRef]   [PubMed]  

32. J. Kleinschmidt and J. E. Dowling, “Intracellular Recordings from Gecko Photoreceptors during Light and Dark Adaptation,” J. Gen. Physiol. 66, 617–648 (1975). [CrossRef]   [PubMed]  

33. D. C. Hood and P. A. Hock, “Light adaptation of the receptors: Increment threshold functions for the frog’s rods and cones,” Vision Res. 15, 545–553 (1975). [CrossRef]   [PubMed]  

34. S. B. Laughlin and R. C. Hardie, “Common strategies for light adaptation in the peripheral visual systems of fly and dragonfly,” J. Comp. Physiol., A 128, 319–340 (1978). [CrossRef]  

35. G. Svaetichin, “The cone action potential,” Acta Physiol. 29, 565–600 (1953).

36. T. Oikawa, T. Ogawa, and K. Motokawa, “Origin of so-called cone action potential,” J. Neurophysiol. 22, 102– 111 (1959). [PubMed]  

37. J. M. Valeton, “Photoreceptor light adaptation models: an evaluation,” Vision Res. 23, 1549–1554 (1983). [CrossRef]   [PubMed]  

38. D. Floreano, R. Pericet-Camara, S. Viollet, F. Ruffier, A. Brückner, R. Leitel, W. Buss, M. Menouni, F. Expert, R. Juston, M. K. Dobrzynski, G. L’Eplattenier, F. Recktenwald, H. a. Mallot, and N. Franceschini, “Miniature curved artificial compound eyes,” Proc. Nat. Acad. Sci. U. S. A. 110, 9267–72 (2013). [CrossRef]  

39. S. Viollet, S. Godiot, R. Leitel, W. Buss, P. Breugnon, M. Menouni, R. Juston, F. Expert, F. Colonnier, G. L’Eplattenier, A. Brückner, F. Kraze, H. Mallot, N. Franceschini, R. Pericet-Camara, F. Ruffier, and D. Floreano D., “Hardware Architecture and Cutting-Edge Assembly Process of a Tiny Curved Compound Eye,” Sensors 110, 21702–21721 (2014). [CrossRef]  

40. S.-C. Liu, J. Kramer, G. Indiveri, T. Delbrück, and R. Douglas, Analog VLSI: Circuits and Principles (MIT Press, 2002).

41. P. Venier and X. Arreguit, “Réseau de cellules photosensibles et capteur d’images comportant un tel réseau,” French Patent, Patent No.: EP0792063A1 (1997).

42. G. Sabiron, P. Chavent, T. Raharijaona, P. Fabiani, and F. Ruffier, “Low-speed optic-flow sensor onboard an unmanned helicopter flying outside over fields,” in IEEE Int. Conf. Robot. Autom. (ICRA ’13), (IEEE, 2013), pp. 1742–1749. [CrossRef]  

    [Crossref]
  20. E. Reinhard and K. Devlin, “Dynamic range reduction inspired by photoreceptor physiology,” IEEE Trans. Vis. Comput. Graphics 11, 13–24 (2005).
    [Crossref]
  21. L. Meylan, D. Alleysson, and S. Süsstrunk, “Model of retinal local adaptation for the tone mapping of color filter array images,” J. Opt. Soc. Am. A 24, 2807–2816 (2007).
    [Crossref]
  22. N.-S. Vu and A. Caplier, “Illumination-robust face recognition using retina modeling,” in IEEE Int. Conf. Image Process. (ICIP ’09), (IEEE, 2009), pp. 3289–3292.
  23. S. Ferradans, M. Bertalmio, E. Provenzi, and V. Caselles, “An analysis of visual adaptation and contrast perception for tone mapping,” IEEE Trans. Pattern Anal. Mach. Intell. 33, 2002–2012 (2011).
    [Crossref]
  24. E. Zrenner, “Will Retinal Implants Restore Vision?,” Science 295, 1022–1025 (2002).
    [Crossref] [PubMed]
  25. K. Stingl, K. U. Bartz-Schmidt, D. Besch, A. Braun, A. Bruckmann, F. Gekeler, U. Greppmaier, S. Hipp, G. Hörtdörfer, C. Kernstock, H. Koitschev, A. Kusnyerik, H. Sachs, A. Schatz, K. T. Stingl, T. Peters, B. Wilhelm, and E. Zrenner, “Artificial vision with wirelessly powered subretinal electronic implant alpha-IMS,” Proc. R. Soc. London, Ser. B 280, (2013).
    [Crossref]
  26. F. S. Werblin, “Control of Retinal Sensitivity II: Lateral Interactions at the Outer Plexiform Layer,” J. Gen. Physiol. 63, 62–87 (1974).
    [Crossref] [PubMed]
  27. T. Delbrück, “Investigations of visual transduction and motion processing,” Ph.D. thesis, California Inst. Tech., Pasadena, California (1993).
  28. F. Expert, S. Viollet, and F. Ruffier, “Outdoor field performances of insect-based visual motion sensors,” J. Field Robot. 28, 529–541 (2012).
    [Crossref]
  29. G. Sicard, H. Abbas, H. Amhaz, H. Zimouche, R. Rolland, and D. Alleysson, “A CMOS HDR Imager with an Analog Local Adaptation,” in Int. Image Sensor Workshop (IISW ’13), (2013), pp. 1–4.
  30. D. A. Baylor and M. G. F. Fuortes, “Electrical responses of single cones in the retina of the turtle,” J. Physiol. 207, 77–92 (1970).
    [Crossref] [PubMed]
  31. R. M. Boynton and D. N. Whitten, “Visual Adaptation in Monkey Cones: Recordings of Late Receptor Potentials,” Science 170, 1423–1426 (1970).
    [Crossref] [PubMed]
  32. J. Kleinschmidt and J. E. Dowling, “Intracellular Recordings from Gecko Photoreceptors during Light and Dark Adaptation,” J. Gen. Physiol. 66, 617–648 (1975).
    [Crossref] [PubMed]
  33. D. C. Hood and P. A. Hock, “Light adaptation of the receptors: Increment threshold functions for the frog’s rods and cones,” Vision Res. 15, 545–553 (1975).
    [Crossref] [PubMed]
  34. S. B. Laughlin and R. C. Hardie, “Common strategies for light adaptation in the peripheral visual systems of fly and dragonfly,” J. Comp. Physiol., A 128, 319–340 (1978).
    [Crossref]
  35. G. Svaetichin, “The cone action potential,” Acta Physiol. 29, 565–600 (1953).
  36. T. Oikawa, T. Ogawa, and K. Motokawa, “Origin of so-called cone action potential,” J. Neurophysiol. 22, 102– 111 (1959).
    [PubMed]
  37. J. M. Valeton, “Photoreceptor light adaptation models: an evaluation,” Vision Res. 23, 1549–1554 (1983).
    [Crossref] [PubMed]
  38. D. Floreano, R. Pericet-Camara, S. Viollet, F. Ruffier, A. Brückner, R. Leitel, W. Buss, M. Menouni, F. Expert, R. Juston, M. K. Dobrzynski, G. L’Eplattenier, F. Recktenwald, H. a. Mallot, and N. Franceschini, “Miniature curved artificial compound eyes,” Proc. Nat. Acad. Sci. U. S. A. 110, 9267–72 (2013).
    [Crossref]
  39. S. Viollet, S. Godiot, R. Leitel, W. Buss, P. Breugnon, M. Menouni, R. Juston, F. Expert, F. Colonnier, G. L’Eplattenier, A. Brückner, F. Kraze, H. Mallot, N. Franceschini, R. Pericet-Camara, F. Ruffier, and D. Floreano D., “Hardware Architecture and Cutting-Edge Assembly Process of a Tiny Curved Compound Eye,” Sensors 110, 21702–21721 (2014).
    [Crossref]
  40. S.-C. Liu, J. Kramer, G. Indiveri, T. Delbrück, and R. Douglas, Analog VLSI: Circuits and Principles (MIT Press, 2002).
  41. P. Venier and X. Arreguit, “Réseau de cellules photosensibles et capteur d’images comportant un tel réseau,” French Patent, Patent No.: EP0792063A1 (1997).
  42. G. Sabiron, P. Chavent, T. Raharijaona, P. Fabiani, and F. Ruffier, “Low-speed optic-flow sensor onboard an unmanned helicopter flying outside over fields,” in IEEE Int. Conf. Robot. Autom. (ICRA ’13), (IEEE, 2013), pp. 1742–1749.
    [Crossref]

2014 (1)

S. Viollet, S. Godiot, R. Leitel, W. Buss, P. Breugnon, M. Menouni, R. Juston, F. Expert, F. Colonnier, G. L’Eplattenier, A. Brückner, F. Kraze, H. Mallot, N. Franceschini, R. Pericet-Camara, F. Ruffier, and D. Floreano D., “Hardware Architecture and Cutting-Edge Assembly Process of a Tiny Curved Compound Eye,” Sensors 110, 21702–21721 (2014).
[Crossref]

2013 (4)

D. Floreano, R. Pericet-Camara, S. Viollet, F. Ruffier, A. Brückner, R. Leitel, W. Buss, M. Menouni, F. Expert, R. Juston, M. K. Dobrzynski, G. L’Eplattenier, F. Recktenwald, H. a. Mallot, and N. Franceschini, “Miniature curved artificial compound eyes,” Proc. Nat. Acad. Sci. U. S. A. 110, 9267–72 (2013).
[Crossref]

K. Stingl, K. U. Bartz-Schmidt, D. Besch, A. Braun, A. Bruckmann, F. Gekeler, U. Greppmaier, S. Hipp, G. Hörtdörfer, C. Kernstock, H. Koitschev, A. Kusnyerik, H. Sachs, A. Schatz, K. T. Stingl, T. Peters, B. Wilhelm, and E. Zrenner, “Artificial vision with wirelessly powered subretinal electronic implant alpha-IMS,” Proc. R. Soc. London, Ser. B 280, (2013).
[Crossref]

J. Carneiro, S.-H. Ieng, C. Posch, and R. Benosman, “Event-based 3D reconstruction from neuromorphic retinas,” Neural Networks 45, 27–38 (2013).
[Crossref] [PubMed]

C. Pacoret and S. Régnier, “A review of haptic optical tweezers for an interactive microworld exploration,” Rev. Sci. Instrum. 84, 081301 (2013).
[Crossref]

2012 (1)

F. Expert, S. Viollet, and F. Ruffier, “Outdoor field performances of insect-based visual motion sensors,” J. Field Robot. 28, 529–541 (2012).
[Crossref]

2011 (3)

S. Ferradans, M. Bertalmio, E. Provenzi, and V. Caselles, “An analysis of visual adaptation and contrast perception for tone mapping,” IEEE Trans. Pattern Anal. Mach. Intell. 33, 2002–2012 (2011).
[Crossref]

D. Drazen, P. Lichtsteiner, P. Häfliger, T. Delbrück, and A. Jensen, “Toward real-time particle tracking using an event-based dynamic vision sensor,” Exp. Fluids 51, 1465–1469 (2011).
[Crossref]

K. Shimonomura, S. Kameda, A. Iwata, and T. Yagi, “Wide-dynamic-range APS-based silicon retina with brightness constancy,” IEEE Trans. Neural Networks 22, 1482–93 (2011).
[Crossref]

2009 (1)

A. Spivak, A. Belenky, A. Fish, and O. Yadid-Pecht, “Wide-Dynamic-Range CMOS Image Sensors - Comparative Performance Analysis,” IEEE Trans. Electron Devices 56, 2446–2461 (2009).
[Crossref]

2008 (1)

P. Lichtsteiner, C. Posch, and T. Delbrück, “A 128×128 120 dB 15 μs Latency Asynchronous Temporal Contrast Vision Sensor,” IEEE J. Solid-State Circuits 43, 566–576 (2008).
[Crossref]

2007 (1)

2006 (1)

K. A. Zaghloul and K. Boahen, “A silicon retina that reproduces signals in the optic nerve,” J. Neural Eng. 3, 257–267 (2006).
[Crossref] [PubMed]

2005 (1)

E. Reinhard and K. Devlin, “Dynamic range reduction inspired by photoreceptor physiology,” IEEE Trans. Vis. Comput. Graphics 11, 13–24 (2005).
[Crossref]

2002 (1)

E. Zrenner, “Will Retinal Implants Restore Vision?,” Science 295, 1022–1025 (2002).
[Crossref] [PubMed]

1997 (1)

P. Venier and X. Arreguit, “Réseau de cellules photosensibles et capteur d’images comportant un tel réseau,” French Patent, Patent No.: EP0792063A1 (1997).

1993 (1)

S. Laughlin and M. Weckstrom, “Fast and slow photoreceptors - a comparative study of the functional diversity of coding and conductances in the Diptera,” J. Comp. Physiol., A 172, 593–609 (1993).
[Crossref]

1988 (1)

C. A. Mead and M. Mahowald, “A silicon model of early visual processing,” Neural Networks 1, 91–97 (1988).
[Crossref]

1983 (1)

J. M. Valeton, “Photoreceptor light adaptation models: an evaluation,” Vision Res. 23, 1549–1554 (1983).
[Crossref] [PubMed]

1981 (1)

W. S. Geisler, “Effects of bleaching and backgrounds on the flash response of the cone system,” J. Physiol. 50, 413–434 (1981).
[Crossref]

1979 (1)

R. A. Normann and I. Perlman, “The effects of background illumination on the photoresponses of red and green cones,” J. Physiol. 286, 491–507 (1979).
[Crossref] [PubMed]

1978 (1)

S. B. Laughlin and R. C. Hardie, “Common strategies for light adaptation in the peripheral visual systems of fly and dragonfly,” J. Comp. Physiol., A 128, 319–340 (1978).
[Crossref]

1975 (2)

J. Kleinschmidt and J. E. Dowling, “Intracellular Recordings from Gecko Photoreceptors during Light and Dark Adaptation,” J. Gen. Physiol. 66, 617–648 (1975).
[Crossref] [PubMed]

D. C. Hood and P. A. Hock, “Light adaptation of the receptors: Increment threshold functions for the frog’s rods and cones,” Vision Res. 15, 545–553 (1975).
[Crossref] [PubMed]

1974 (1)

F. S. Werblin, “Control of Retinal Sensitivity II: Lateral Interactions at the Outer Plexiform Layer,” J. Gen. Physiol. 63, 62–87 (1974).
[Crossref] [PubMed]

1971 (1)

F. S. Werblin, “Adaptation in a vertebrate retina: intracellular recording in Necturus,” J. Neurophysiol. 34, 228–241 (1971).
[PubMed]

1970 (2)

D. A. Baylor and M. G. F. Fuortes, “Electrical responses of single cones in the retina of the turtle,” J. Physiol. 207, 77–92 (1970).
[Crossref] [PubMed]

R. M. Boynton and D. N. Whitten, “Visual Adaptation in Monkey Cones: Recordings of Late Receptor Potentials,” Science 170, 1423–1426 (1970).
[Crossref] [PubMed]

1966 (1)

K. I. Naka and W. A. H. Rushton, “S-potentials from luminosity units in the retina of fish (Cyprinidae),” J. Physiol. 185, 536–555 (1966).
[Crossref] [PubMed]

1959 (1)

T. Oikawa, T. Ogawa, and K. Motokawa, “Origin of so-called cone action potential,” J. Neurophysiol. 22, 102– 111 (1959).
[PubMed]

1953 (1)

G. Svaetichin, “The cone action potential,” Acta Physiol. 29, 565–600 (1953).

1913 (1)

L. Michaelis and M. L. Menten, “The Kinetics of Invertase Action (Die Kinetik der Invertinwirkung),” Biochemische Zeitschrift 49, 333–369 (1913).

Abbas, H.

G. Sicard, H. Abbas, H. Amhaz, H. Zimouche, R. Rolland, and D. Alleysson, “A CMOS HDR Imager with an Analog Local Adaptation,” in Int. Image Sensor Workshop (IISW ’13), (2013), pp. 1–4.

Alleysson, D.

L. Meylan, D. Alleysson, and S. Süsstrunk, “Model of retinal local adaptation for the tone mapping of color filter array images,” J. Opt. Soc. Am. A 24, 2807–2816 (2007).
[Crossref]

G. Sicard, H. Abbas, H. Amhaz, H. Zimouche, R. Rolland, and D. Alleysson, “A CMOS HDR Imager with an Analog Local Adaptation,” in Int. Image Sensor Workshop (IISW ’13), (2013), pp. 1–4.

Amhaz, H.

G. Sicard, H. Abbas, H. Amhaz, H. Zimouche, R. Rolland, and D. Alleysson, “A CMOS HDR Imager with an Analog Local Adaptation,” in Int. Image Sensor Workshop (IISW ’13), (2013), pp. 1–4.

Andreou, A. G.

K. A. Boahen and A. G. Andreou, “A Contrast Sensitive Silicon Retina with Reciprocal Synapses,” in Adv. Neural Inf. Process. Syst. 4, D. S. Touretzky, ed. (Morgan Kaufmann, 1991), pp. 764–772.

Arreguit, X.

P. Venier and X. Arreguit, “Réseau de cellules photosensibles et capteur d’images comportant un tel réseau,” French Patent, Patent No.: EP0792063A1 (1997).

Aryan, N. P.

L. Liu, J. Wiinschmann, N. P. Aryan, A. Zohny, M. Fischer, S. Kibbel, and A. Rothermel, “An ambient light adaptive subretinal stimulator,” in Proc. ESSCIRC ’09 (IEEE, 2009), pp. 420–423.
[Crossref]

Bartz-Schmidt, K. U.

K. Stingl, K. U. Bartz-Schmidt, D. Besch, A. Braun, A. Bruckmann, F. Gekeler, U. Greppmaier, S. Hipp, G. Hörtdörfer, C. Kernstock, H. Koitschev, A. Kusnyerik, H. Sachs, A. Schatz, K. T. Stingl, T. Peters, B. Wilhelm, and E. Zrenner, “Artificial vision with wirelessly powered subretinal electronic implant alpha-IMS,” Proc. R. Soc. London, Ser. B 280, (2013).
[Crossref]

Baylor, D. A.

D. A. Baylor and M. G. F. Fuortes, “Electrical responses of single cones in the retina of the turtle,” J. Physiol. 207, 77–92 (1970).
[Crossref] [PubMed]

Belenky, A.

A. Spivak, A. Belenky, A. Fish, and O. Yadid-Pecht, “Wide-Dynamic-Range CMOS Image Sensors - Comparative Performance Analysis,” IEEE Trans. Electron Devices 56, 2446–2461 (2009).
[Crossref]

Benosman, R.

J. Carneiro, S.-H. Ieng, C. Posch, and R. Benosman, “Event-based 3D reconstruction from neuromorphic retinas,” Neural Networks 45, 27–38 (2013).
[Crossref] [PubMed]

Bertalmio, M.

S. Ferradans, M. Bertalmio, E. Provenzi, and V. Caselles, “An analysis of visual adaptation and contrast perception for tone mapping,” IEEE Trans. Pattern Anal. Mach. Intell. 33, 2002–2012 (2011).
[Crossref]

Besch, D.

K. Stingl, K. U. Bartz-Schmidt, D. Besch, A. Braun, A. Bruckmann, F. Gekeler, U. Greppmaier, S. Hipp, G. Hörtdörfer, C. Kernstock, H. Koitschev, A. Kusnyerik, H. Sachs, A. Schatz, K. T. Stingl, T. Peters, B. Wilhelm, and E. Zrenner, “Artificial vision with wirelessly powered subretinal electronic implant alpha-IMS,” Proc. R. Soc. London, Ser. B 280, (2013).
[Crossref]

Boahen, K.

K. A. Zaghloul and K. Boahen, “A silicon retina that reproduces signals in the optic nerve,” J. Neural Eng. 3, 257–267 (2006).
[Crossref] [PubMed]

Boahen, K. A.

K. A. Boahen and A. G. Andreou, “A Contrast Sensitive Silicon Retina with Reciprocal Synapses,” in Adv. Neural Inf. Process. Syst. 4, D. S. Touretzky, ed. (Morgan Kaufmann, 1991), pp. 764–772.

Boynton, R. M.

R. M. Boynton and D. N. Whitten, “Visual Adaptation in Monkey Cones: Recordings of Late Receptor Potentials,” Science 170, 1423–1426 (1970).
[Crossref] [PubMed]

Braun, A.

K. Stingl, K. U. Bartz-Schmidt, D. Besch, A. Braun, A. Bruckmann, F. Gekeler, U. Greppmaier, S. Hipp, G. Hörtdörfer, C. Kernstock, H. Koitschev, A. Kusnyerik, H. Sachs, A. Schatz, K. T. Stingl, T. Peters, B. Wilhelm, and E. Zrenner, “Artificial vision with wirelessly powered subretinal electronic implant alpha-IMS,” Proc. R. Soc. London, Ser. B 280, (2013).
[Crossref]

Breugnon, P.

S. Viollet, S. Godiot, R. Leitel, W. Buss, P. Breugnon, M. Menouni, R. Juston, F. Expert, F. Colonnier, G. L’Eplattenier, A. Brückner, F. Kraze, H. Mallot, N. Franceschini, R. Pericet-Camara, F. Ruffier, and D. Floreano D., “Hardware Architecture and Cutting-Edge Assembly Process of a Tiny Curved Compound Eye,” Sensors 110, 21702–21721 (2014).
[Crossref]

Bruckmann, A.

K. Stingl, K. U. Bartz-Schmidt, D. Besch, A. Braun, A. Bruckmann, F. Gekeler, U. Greppmaier, S. Hipp, G. Hörtdörfer, C. Kernstock, H. Koitschev, A. Kusnyerik, H. Sachs, A. Schatz, K. T. Stingl, T. Peters, B. Wilhelm, and E. Zrenner, “Artificial vision with wirelessly powered subretinal electronic implant alpha-IMS,” Proc. R. Soc. London, Ser. B 280, (2013).
[Crossref]

Brückner, A.

S. Viollet, S. Godiot, R. Leitel, W. Buss, P. Breugnon, M. Menouni, R. Juston, F. Expert, F. Colonnier, G. L’Eplattenier, A. Brückner, F. Kraze, H. Mallot, N. Franceschini, R. Pericet-Camara, F. Ruffier, and D. Floreano D., “Hardware Architecture and Cutting-Edge Assembly Process of a Tiny Curved Compound Eye,” Sensors 110, 21702–21721 (2014).
[Crossref]

D. Floreano, R. Pericet-Camara, S. Viollet, F. Ruffier, A. Brückner, R. Leitel, W. Buss, M. Menouni, F. Expert, R. Juston, M. K. Dobrzynski, G. L’Eplattenier, F. Recktenwald, H. a. Mallot, and N. Franceschini, “Miniature curved artificial compound eyes,” Proc. Nat. Acad. Sci. U. S. A. 110, 9267–72 (2013).
[Crossref]

Buss, W.

S. Viollet, S. Godiot, R. Leitel, W. Buss, P. Breugnon, M. Menouni, R. Juston, F. Expert, F. Colonnier, G. L’Eplattenier, A. Brückner, F. Kraze, H. Mallot, N. Franceschini, R. Pericet-Camara, F. Ruffier, and D. Floreano D., “Hardware Architecture and Cutting-Edge Assembly Process of a Tiny Curved Compound Eye,” Sensors 110, 21702–21721 (2014).
[Crossref]

D. Floreano, R. Pericet-Camara, S. Viollet, F. Ruffier, A. Brückner, R. Leitel, W. Buss, M. Menouni, F. Expert, R. Juston, M. K. Dobrzynski, G. L’Eplattenier, F. Recktenwald, H. a. Mallot, and N. Franceschini, “Miniature curved artificial compound eyes,” Proc. Nat. Acad. Sci. U. S. A. 110, 9267–72 (2013).
[Crossref]

Caplier, A.

N.-S. Vu and A. Caplier, “Illumination-robust face recognition using retina modeling,” in IEEE Int. Conf. Image Process. (ICIP ’09), (IEEE, 2009), pp. 3289–3292.

Carneiro, J.

J. Carneiro, S.-H. Ieng, C. Posch, and R. Benosman, “Event-based 3D reconstruction from neuromorphic retinas,” Neural Networks 45, 27–38 (2013).
[Crossref] [PubMed]

Caselles, V.

S. Ferradans, M. Bertalmio, E. Provenzi, and V. Caselles, “An analysis of visual adaptation and contrast perception for tone mapping,” IEEE Trans. Pattern Anal. Mach. Intell. 33, 2002–2012 (2011).
[Crossref]

Chavent, P.

G. Sabiron, P. Chavent, T. Raharijaona, P. Fabiani, and F. Ruffier, “Low-speed optic-flow sensor onboard an unmanned helicopter flying outside over fields,” in IEEE Int. Conf. Robot. Autom. (ICRA ’13), (IEEE, 2013), pp. 1742–1749.
[Crossref]

Colonnier, F.

S. Viollet, S. Godiot, R. Leitel, W. Buss, P. Breugnon, M. Menouni, R. Juston, F. Expert, F. Colonnier, G. L’Eplattenier, A. Brückner, F. Kraze, H. Mallot, N. Franceschini, R. Pericet-Camara, F. Ruffier, and D. Floreano D., “Hardware Architecture and Cutting-Edge Assembly Process of a Tiny Curved Compound Eye,” Sensors 110, 21702–21721 (2014).
[Crossref]

Delbrück, T.

D. Drazen, P. Lichtsteiner, P. Häfliger, T. Delbrück, and A. Jensen, “Toward real-time particle tracking using an event-based dynamic vision sensor,” Exp. Fluids 51, 1465–1469 (2011).
[Crossref]

P. Lichtsteiner, C. Posch, and T. Delbrück, “A 128×128 120 dB 15 μs Latency Asynchronous Temporal Contrast Vision Sensor,” IEEE J. Solid-State Circuits 43, 566–576 (2008).
[Crossref]

T. Delbrück and C. A. Mead, “Analog VLSI Adaptive, Logarithmic, Wide-Dynamic-Range Photoreceptor,” in IEEE Int. Symp. Circuits Syst. (ISCAS ’94), (IEEE, 1994), pp. 339–342.
[Crossref]

T. Delbrück and C. A. Mead, “An electronic photoreceptor sensitive to small changes in intensity,” in Adv. Neural Inf. Process. Syst. 1, D. S. Touretzky, ed. (Morgan Kaufmann, 1989), pp. 720–727.

S.-C. Liu, J. Kramer, G. Indiveri, T. Delbrück, and R. Douglas, Analog VLSI: Circuits and Principles (MIT Press, 2002).

T. Delbrück, “Investigations of visual transduction and motion processing,” Ph.D. thesis, California Inst. Tech., Pasadena, California (1993).

Devlin, K.

E. Reinhard and K. Devlin, “Dynamic range reduction inspired by photoreceptor physiology,” IEEE Trans. Vis. Comput. Graphics 11, 13–24 (2005).
[Crossref]

Dobrzynski, M. K.

D. Floreano, R. Pericet-Camara, S. Viollet, F. Ruffier, A. Brückner, R. Leitel, W. Buss, M. Menouni, F. Expert, R. Juston, M. K. Dobrzynski, G. L’Eplattenier, F. Recktenwald, H. a. Mallot, and N. Franceschini, “Miniature curved artificial compound eyes,” Proc. Nat. Acad. Sci. U. S. A. 110, 9267–72 (2013).
[Crossref]

Douglas, R.

S.-C. Liu, J. Kramer, G. Indiveri, T. Delbrück, and R. Douglas, Analog VLSI: Circuits and Principles (MIT Press, 2002).

Dowling, J. E.

J. Kleinschmidt and J. E. Dowling, “Intracellular Recordings from Gecko Photoreceptors during Light and Dark Adaptation,” J. Gen. Physiol. 66, 617–648 (1975).
[Crossref] [PubMed]

Drazen, D.

D. Drazen, P. Lichtsteiner, P. Häfliger, T. Delbrück, and A. Jensen, “Toward real-time particle tracking using an event-based dynamic vision sensor,” Exp. Fluids 51, 1465–1469 (2011).
[Crossref]

Expert, F.

S. Viollet, S. Godiot, R. Leitel, W. Buss, P. Breugnon, M. Menouni, R. Juston, F. Expert, F. Colonnier, G. L’Eplattenier, A. Brückner, F. Kraze, H. Mallot, N. Franceschini, R. Pericet-Camara, F. Ruffier, and D. Floreano D., “Hardware Architecture and Cutting-Edge Assembly Process of a Tiny Curved Compound Eye,” Sensors 110, 21702–21721 (2014).
[Crossref]

D. Floreano, R. Pericet-Camara, S. Viollet, F. Ruffier, A. Brückner, R. Leitel, W. Buss, M. Menouni, F. Expert, R. Juston, M. K. Dobrzynski, G. L’Eplattenier, F. Recktenwald, H. a. Mallot, and N. Franceschini, “Miniature curved artificial compound eyes,” Proc. Nat. Acad. Sci. U. S. A. 110, 9267–72 (2013).
[Crossref]

F. Expert, S. Viollet, and F. Ruffier, “Outdoor field performances of insect-based visual motion sensors,” J. Field Robot. 28, 529–541 (2012).
[Crossref]

F. Expert and F. Ruffier, “Flying over uneven moving terrain based on optic-flow cues without any need for reference frames or accelerometers,” Bioinspir. Biomim. (to be published in February 2015).

Fabiani, P.

G. Sabiron, P. Chavent, T. Raharijaona, P. Fabiani, and F. Ruffier, “Low-speed optic-flow sensor onboard an unmanned helicopter flying outside over fields,” in IEEE Int. Conf. Robot. Autom. (ICRA ’13), (IEEE, 2013), pp. 1742–1749.
[Crossref]

Ferradans, S.

S. Ferradans, M. Bertalmio, E. Provenzi, and V. Caselles, “An analysis of visual adaptation and contrast perception for tone mapping,” IEEE Trans. Pattern Anal. Mach. Intell. 33, 2002–2012 (2011).
[Crossref]

Fischer, M.

L. Liu, J. Wiinschmann, N. P. Aryan, A. Zohny, M. Fischer, S. Kibbel, and A. Rothermel, “An ambient light adaptive subretinal stimulator,” in Proc. ESSCIRC ’09 (IEEE, 2009), pp. 420–423.
[Crossref]

Fish, A.

A. Spivak, A. Belenky, A. Fish, and O. Yadid-Pecht, “Wide-Dynamic-Range CMOS Image Sensors - Comparative Performance Analysis,” IEEE Trans. Electron Devices 56, 2446–2461 (2009).
[Crossref]

Floreano, D.

D. Floreano, R. Pericet-Camara, S. Viollet, F. Ruffier, A. Brückner, R. Leitel, W. Buss, M. Menouni, F. Expert, R. Juston, M. K. Dobrzynski, G. L’Eplattenier, F. Recktenwald, H. a. Mallot, and N. Franceschini, “Miniature curved artificial compound eyes,” Proc. Nat. Acad. Sci. U. S. A. 110, 9267–72 (2013).
[Crossref]

Floreano D., D.

S. Viollet, S. Godiot, R. Leitel, W. Buss, P. Breugnon, M. Menouni, R. Juston, F. Expert, F. Colonnier, G. L’Eplattenier, A. Brückner, F. Kraze, H. Mallot, N. Franceschini, R. Pericet-Camara, F. Ruffier, and D. Floreano D., “Hardware Architecture and Cutting-Edge Assembly Process of a Tiny Curved Compound Eye,” Sensors 110, 21702–21721 (2014).
[Crossref]

Franceschini, N.

S. Viollet, S. Godiot, R. Leitel, W. Buss, P. Breugnon, M. Menouni, R. Juston, F. Expert, F. Colonnier, G. L’Eplattenier, A. Brückner, F. Kraze, H. Mallot, N. Franceschini, R. Pericet-Camara, F. Ruffier, and D. Floreano D., “Hardware Architecture and Cutting-Edge Assembly Process of a Tiny Curved Compound Eye,” Sensors 110, 21702–21721 (2014).
[Crossref]

D. Floreano, R. Pericet-Camara, S. Viollet, F. Ruffier, A. Brückner, R. Leitel, W. Buss, M. Menouni, F. Expert, R. Juston, M. K. Dobrzynski, G. L’Eplattenier, F. Recktenwald, H. a. Mallot, and N. Franceschini, “Miniature curved artificial compound eyes,” Proc. Nat. Acad. Sci. U. S. A. 110, 9267–72 (2013).
[Crossref]

Fuortes, M. G. F.

D. A. Baylor and M. G. F. Fuortes, “Electrical responses of single cones in the retina of the turtle,” J. Physiol. 207, 77–92 (1970).
[Crossref] [PubMed]

Geisler, W. S.

W. S. Geisler, “Effects of bleaching and backgrounds on the flash response of the cone system,” J. Physiol. 50, 413–434 (1981).
[Crossref]

Gekeler, F.

K. Stingl, K. U. Bartz-Schmidt, D. Besch, A. Braun, A. Bruckmann, F. Gekeler, U. Greppmaier, S. Hipp, G. Hörtdörfer, C. Kernstock, H. Koitschev, A. Kusnyerik, H. Sachs, A. Schatz, K. T. Stingl, T. Peters, B. Wilhelm, and E. Zrenner, “Artificial vision with wirelessly powered subretinal electronic implant alpha-IMS,” Proc. R. Soc. London, Ser. B 280, (2013).
[Crossref]

Godiot, S.

S. Viollet, S. Godiot, R. Leitel, W. Buss, P. Breugnon, M. Menouni, R. Juston, F. Expert, F. Colonnier, G. L’Eplattenier, A. Brückner, F. Kraze, H. Mallot, N. Franceschini, R. Pericet-Camara, F. Ruffier, and D. Floreano D., “Hardware Architecture and Cutting-Edge Assembly Process of a Tiny Curved Compound Eye,” Sensors 110, 21702–21721 (2014).
[Crossref]

Greppmaier, U.

K. Stingl, K. U. Bartz-Schmidt, D. Besch, A. Braun, A. Bruckmann, F. Gekeler, U. Greppmaier, S. Hipp, G. Hörtdörfer, C. Kernstock, H. Koitschev, A. Kusnyerik, H. Sachs, A. Schatz, K. T. Stingl, T. Peters, B. Wilhelm, and E. Zrenner, “Artificial vision with wirelessly powered subretinal electronic implant alpha-IMS,” Proc. R. Soc. London, Ser. B 280, (2013).
[Crossref]

Häfliger, P.

D. Drazen, P. Lichtsteiner, P. Häfliger, T. Delbrück, and A. Jensen, “Toward real-time particle tracking using an event-based dynamic vision sensor,” Exp. Fluids 51, 1465–1469 (2011).
[Crossref]

Hardie, R. C.

S. B. Laughlin and R. C. Hardie, “Common strategies for light adaptation in the peripheral visual systems of fly and dragonfly,” J. Comp. Physiol., A 128, 319–340 (1978).
[Crossref]

Hipp, S.

K. Stingl, K. U. Bartz-Schmidt, D. Besch, A. Braun, A. Bruckmann, F. Gekeler, U. Greppmaier, S. Hipp, G. Hörtdörfer, C. Kernstock, H. Koitschev, A. Kusnyerik, H. Sachs, A. Schatz, K. T. Stingl, T. Peters, B. Wilhelm, and E. Zrenner, “Artificial vision with wirelessly powered subretinal electronic implant alpha-IMS,” Proc. R. Soc. London, Ser. B 280, (2013).
[Crossref]

Hock, P. A.

D. C. Hood and P. A. Hock, “Light adaptation of the receptors: Increment threshold functions for the frog’s rods and cones,” Vision Res. 15, 545–553 (1975).
[Crossref] [PubMed]

Hood, D. C.

D. C. Hood and P. A. Hock, “Light adaptation of the receptors: Increment threshold functions for the frog’s rods and cones,” Vision Res. 15, 545–553 (1975).
[Crossref] [PubMed]

Hörtdörfer, G.

K. Stingl, K. U. Bartz-Schmidt, D. Besch, A. Braun, A. Bruckmann, F. Gekeler, U. Greppmaier, S. Hipp, G. Hörtdörfer, C. Kernstock, H. Koitschev, A. Kusnyerik, H. Sachs, A. Schatz, K. T. Stingl, T. Peters, B. Wilhelm, and E. Zrenner, “Artificial vision with wirelessly powered subretinal electronic implant alpha-IMS,” Proc. R. Soc. London, Ser. B 280, (2013).
[Crossref]

Ieng, S.-H.

J. Carneiro, S.-H. Ieng, C. Posch, and R. Benosman, “Event-based 3D reconstruction from neuromorphic retinas,” Neural Networks 45, 27–38 (2013).
[Crossref] [PubMed]

Indiveri, G.

S.-C. Liu, J. Kramer, G. Indiveri, T. Delbrück, and R. Douglas, Analog VLSI: Circuits and Principles (MIT Press, 2002).

Iwata, A.

K. Shimonomura, S. Kameda, A. Iwata, and T. Yagi, “Wide-dynamic-range APS-based silicon retina with brightness constancy,” IEEE Trans. Neural Networks 22, 1482–93 (2011).
[Crossref]

Jensen, A.

D. Drazen, P. Lichtsteiner, P. Häfliger, T. Delbrück, and A. Jensen, “Toward real-time particle tracking using an event-based dynamic vision sensor,” Exp. Fluids 51, 1465–1469 (2011).
[Crossref]

Juston, R.

S. Viollet, S. Godiot, R. Leitel, W. Buss, P. Breugnon, M. Menouni, R. Juston, F. Expert, F. Colonnier, G. L’Eplattenier, A. Brückner, F. Kraze, H. Mallot, N. Franceschini, R. Pericet-Camara, F. Ruffier, and D. Floreano D., “Hardware Architecture and Cutting-Edge Assembly Process of a Tiny Curved Compound Eye,” Sensors 110, 21702–21721 (2014).
[Crossref]

D. Floreano, R. Pericet-Camara, S. Viollet, F. Ruffier, A. Brückner, R. Leitel, W. Buss, M. Menouni, F. Expert, R. Juston, M. K. Dobrzynski, G. L’Eplattenier, F. Recktenwald, H. a. Mallot, and N. Franceschini, “Miniature curved artificial compound eyes,” Proc. Nat. Acad. Sci. U. S. A. 110, 9267–72 (2013).
[Crossref]

Kameda, S.

K. Shimonomura, S. Kameda, A. Iwata, and T. Yagi, “Wide-dynamic-range APS-based silicon retina with brightness constancy,” IEEE Trans. Neural Networks 22, 1482–93 (2011).
[Crossref]

Kernstock, C.

K. Stingl, K. U. Bartz-Schmidt, D. Besch, A. Braun, A. Bruckmann, F. Gekeler, U. Greppmaier, S. Hipp, G. Hörtdörfer, C. Kernstock, H. Koitschev, A. Kusnyerik, H. Sachs, A. Schatz, K. T. Stingl, T. Peters, B. Wilhelm, and E. Zrenner, “Artificial vision with wirelessly powered subretinal electronic implant alpha-IMS,” Proc. R. Soc. London, Ser. B 280, (2013).
[Crossref]

Kibbel, S.

L. Liu, J. Wiinschmann, N. P. Aryan, A. Zohny, M. Fischer, S. Kibbel, and A. Rothermel, “An ambient light adaptive subretinal stimulator,” in Proc. ESSCIRC ’09 (IEEE, 2009), pp. 420–423.
[Crossref]

Kleinschmidt, J.

Acta Physiol. (1)

G. Svaetichin, “The cone action potential,” Acta Physiol. 29, 565–600 (1953).

Biochemische Zeitschrift (1)

L. Michaelis and M. L. Menten, “The Kinetics of Invertase Action (Die Kinetik der Invertinwirkung),” Biochemische Zeitschrift 49, 333–369 (1913).

Bioinspir. Biomim. (1)

F. Expert and F. Ruffier, “Flying over uneven moving terrain based on optic-flow cues without any need for reference frames or accelerometers,” Bioinspir. Biomim. (to be published in February 2015).

Exp. Fluids (1)

D. Drazen, P. Lichtsteiner, P. Häfliger, T. Delbrück, and A. Jensen, “Toward real-time particle tracking using an event-based dynamic vision sensor,” Exp. Fluids 51, 1465–1469 (2011).
[Crossref]

IEEE J. Solid-State Circuits (1)

P. Lichtsteiner, C. Posch, and T. Delbrück, “A 128×128 120 dB 15 μs Latency Asynchronous Temporal Contrast Vision Sensor,” IEEE J. Solid-State Circuits 43, 566–576 (2008).
[Crossref]

IEEE Trans. Electron Devices (1)

A. Spivak, A. Belenky, A. Fish, and O. Yadid-Pecht, “Wide-Dynamic-Range CMOS Image Sensors - Comparative Performance Analysis,” IEEE Trans. Electron Devices 56, 2446–2461 (2009).
[Crossref]

IEEE Trans. Neural Networks (1)

K. Shimonomura, S. Kameda, A. Iwata, and T. Yagi, “Wide-dynamic-range APS-based silicon retina with brightness constancy,” IEEE Trans. Neural Networks 22, 1482–1493 (2011).
[Crossref]

IEEE Trans. Pattern Anal. Mach. Intell. (1)

S. Ferradans, M. Bertalmio, E. Provenzi, and V. Caselles, “An analysis of visual adaptation and contrast perception for tone mapping,” IEEE Trans. Pattern Anal. Mach. Intell. 33, 2002–2012 (2011).
[Crossref]

IEEE Trans. Vis. Comput. Graphics (1)

E. Reinhard and K. Devlin, “Dynamic range reduction inspired by photoreceptor physiology,” IEEE Trans. Vis. Comput. Graphics 11, 13–24 (2005).
[Crossref]

J. Comp. Physiol., A (2)

S. Laughlin and M. Weckstrom, “Fast and slow photoreceptors - a comparative study of the functional diversity of coding and conductances in the Diptera,” J. Comp. Physiol., A 172, 593–609 (1993).
[Crossref]

S. B. Laughlin and R. C. Hardie, “Common strategies for light adaptation in the peripheral visual systems of fly and dragonfly,” J. Comp. Physiol., A 128, 319–340 (1978).
[Crossref]

J. Field Robot. (1)

F. Expert, S. Viollet, and F. Ruffier, “Outdoor field performances of insect-based visual motion sensors,” J. Field Robot. 28, 529–541 (2012).
[Crossref]

J. Gen. Physiol. (2)

F. S. Werblin, “Control of Retinal Sensitivity II: Lateral Interactions at the Outer Plexiform Layer,” J. Gen. Physiol. 63, 62–87 (1974).
[Crossref] [PubMed]

J. Kleinschmidt and J. E. Dowling, “Intracellular Recordings from Gecko Photoreceptors during Light and Dark Adaptation,” J. Gen. Physiol. 66, 617–648 (1975).
[Crossref] [PubMed]

J. Neural Eng. (1)

K. A. Zaghloul and K. Boahen, “A silicon retina that reproduces signals in the optic nerve,” J. Neural Eng. 3, 257–267 (2006).
[Crossref] [PubMed]

J. Neurophysiol. (2)

T. Oikawa, T. Ogawa, and K. Motokawa, “Origin of so-called cone action potential,” J. Neurophysiol. 22, 102–111 (1959).
[PubMed]

F. S. Werblin, “Adaptation in a vertebrate retina: intracellular recording in Necturus,” J. Neurophysiol. 34, 228–241 (1971).
[PubMed]

J. Physiol. (4)

R. A. Normann and I. Perlman, “The effects of background illumination on the photoresponses of red and green cones,” J. Physiol. 286, 491–507 (1979).
[Crossref] [PubMed]

W. S. Geisler, “Effects of bleaching and backgrounds on the flash response of the cone system,” J. Physiol. 312, 413–434 (1981).
[Crossref]

D. A. Baylor and M. G. F. Fuortes, “Electrical responses of single cones in the retina of the turtle,” J. Physiol. 207, 77–92 (1970).
[Crossref] [PubMed]

K. I. Naka and W. A. H. Rushton, “S-potentials from luminosity units in the retina of fish (Cyprinidae),” J. Physiol. 185, 536–555 (1966).
[Crossref] [PubMed]

Neural Networks (2)

J. Carneiro, S.-H. Ieng, C. Posch, and R. Benosman, “Event-based 3D reconstruction from neuromorphic retinas,” Neural Networks 45, 27–38 (2013).
[Crossref] [PubMed]

C. A. Mead and M. Mahowald, “A silicon model of early visual processing,” Neural Networks 1, 91–97 (1988).
[Crossref]

Proc. Nat. Acad. Sci. U. S. A. (1)

D. Floreano, R. Pericet-Camara, S. Viollet, F. Ruffier, A. Brückner, R. Leitel, W. Buss, M. Menouni, F. Expert, R. Juston, M. K. Dobrzynski, G. L’Eplattenier, F. Recktenwald, H. A. Mallot, and N. Franceschini, “Miniature curved artificial compound eyes,” Proc. Nat. Acad. Sci. U. S. A. 110, 9267–9272 (2013).
[Crossref]

Proc. R. Soc. London, Ser. B (1)

K. Stingl, K. U. Bartz-Schmidt, D. Besch, A. Braun, A. Bruckmann, F. Gekeler, U. Greppmaier, S. Hipp, G. Hörtdörfer, C. Kernstock, H. Koitschev, A. Kusnyerik, H. Sachs, A. Schatz, K. T. Stingl, T. Peters, B. Wilhelm, and E. Zrenner, “Artificial vision with wirelessly powered subretinal electronic implant alpha-IMS,” Proc. R. Soc. London, Ser. B 280 (2013).
[Crossref]

Rev. Sci. Instrum. (1)

C. Pacoret and S. Régnier, “A review of haptic optical tweezers for an interactive microworld exploration,” Rev. Sci. Instrum. 84, 081301 (2013).
[Crossref]

Science (2)

R. M. Boynton and D. N. Whitten, “Visual Adaptation in Monkey Cones: Recordings of Late Receptor Potentials,” Science 170, 1423–1426 (1970).
[Crossref] [PubMed]

E. Zrenner, “Will Retinal Implants Restore Vision?,” Science 295, 1022–1025 (2002).
[Crossref] [PubMed]

Sensors (1)

S. Viollet, S. Godiot, R. Leitel, W. Buss, P. Breugnon, M. Menouni, R. Juston, F. Expert, F. Colonnier, G. L’Eplattenier, A. Brückner, F. Kraze, H. Mallot, N. Franceschini, R. Pericet-Camara, F. Ruffier, and D. Floreano, “Hardware Architecture and Cutting-Edge Assembly Process of a Tiny Curved Compound Eye,” Sensors 14, 21702–21721 (2014).
[Crossref]

Vision Res. (2)

D. C. Hood and P. A. Hock, “Light adaptation of the receptors: Increment threshold functions for the frog’s rods and cones,” Vision Res. 15, 545–553 (1975).
[Crossref] [PubMed]

J. M. Valeton, “Photoreceptor light adaptation models: an evaluation,” Vision Res. 23, 1549–1554 (1983).
[Crossref] [PubMed]

Other (10)

T. Delbrück, “Investigations of visual transduction and motion processing,” Ph.D. thesis, California Inst. Tech., Pasadena, California (1993).

G. Sicard, H. Abbas, H. Amhaz, H. Zimouche, R. Rolland, and D. Alleysson, “A CMOS HDR Imager with an Analog Local Adaptation,” in Int. Image Sensor Workshop (IISW ’13), (2013), pp. 1–4.

L. Liu, J. Wünschmann, N. P. Aryan, A. Zohny, M. Fischer, S. Kibbel, and A. Rothermel, “An ambient light adaptive subretinal stimulator,” in Proc. ESSCIRC ’09 (IEEE, 2009), pp. 420–423.
[Crossref]

T. Delbrück and C. A. Mead, “An electronic photoreceptor sensitive to small changes in intensity,” in Adv. Neural Inf. Process. Syst. 1, D. S. Touretzky, ed. (Morgan Kaufmann, 1989), pp. 720–727.

K. A. Boahen and A. G. Andreou, “A Contrast Sensitive Silicon Retina with Reciprocal Synapses,” in Adv. Neural Inf. Process. Syst. 4, D. S. Touretzky, ed. (Morgan Kaufmann, 1991), pp. 764–772.

T. Delbrück and C. A. Mead, “Analog VLSI Adaptive, Logarithmic, Wide-Dynamic-Range Photoreceptor,” in IEEE Int. Symp. Circuits Syst. (ISCAS ’94), (IEEE, 1994), pp. 339–342.
[Crossref]

N.-S. Vu and A. Caplier, “Illumination-robust face recognition using retina modeling,” in IEEE Int. Conf. Image Process. (ICIP ’09), (IEEE, 2009), pp. 3289–3292.

S.-C. Liu, J. Kramer, G. Indiveri, T. Delbrück, and R. Douglas, Analog VLSI: Circuits and Principles (MIT Press, 2002).

P. Venier and X. Arreguit, “Réseau de cellules photosensibles et capteur d’images comportant un tel réseau,” French Patent, Patent No.: EP0792063A1 (1997).

G. Sabiron, P. Chavent, T. Raharijaona, P. Fabiani, and F. Ruffier, “Low-speed optic-flow sensor onboard an unmanned helicopter flying outside over fields,” in IEEE Int. Conf. Robot. Autom. (ICRA ’13), (IEEE, 2013), pp. 1742–1749.
[Crossref]


Figures (13)

Fig. 5
Fig. 5 (a) Simulated example of the adaptive pixel’s response (blue line) to a step change in the light intensity (light to dark gray stripe). The red and magenta lines stand for the photodiode current and the average current, respectively. (b) Theoretical S-shaped curve based on equation (4), giving the peak values of the pixel’s response $V_i$ to step changes in the photodiode current from $I_{ph_0}$ to $I_{ph_i}$.
Fig. 7
Fig. 7 (a) Pictures and (b) exploded view of the Lighting Box, composed of a PCB with a red LED (λ ≈ 618 nm) and an optical filter support. Direct control of the LED current varies the illuminance over a 3-decade range. Additional neutral-density filters were used to extend the mean luminosity range from 3 to 7 decades.
Fig. 1
Fig. 1 S-shaped curves corresponding to dark- and light-adapted response curves recorded in a red cone of the turtle. The peak of either the incremental or decremental response, measured from the dark-adapted potential recorded before the background onset (dashed line), is plotted as a function of the log of the test-pulse intensity which elicited each response. The steady hyperpolarization produced by each background lighting condition is given by the intersection between the intensity-response curve and the small horizontal line. The continuous curves were drawn from a single template described by the function $V = V_m \frac{I}{I+\sigma}$. Adapted from [10].
Fig. 2
Fig. 2 (a) The silicon retina in its 9 × 9mm package; (b) Magnified view of the silicon retina composed of 12 Michaelis-Menten pixels presented in this study, and 12 additional Delbrück pixels; (c) Magnified view of 3 Michaelis-Menten pixels giving the photodiode’s dimensions and the inter-receptor distance.
Fig. 3
Fig. 3 Block diagram of the retina’s serial synchronous read-out interface (see [39]).
Fig. 4
Fig. 4 (a) Block diagram of the M2APix (Michaelis-Menten Auto-adaptive Pixel). The blocks in the dashed-line area, which are replicated 12 times, belong to a single pixel, whereas the two blocks outside the dashed-line area are common to all twelve pixels. (b) Hardware implementation in VLSI of an elementary auto-adaptive pixel (photodiode and current normalizer), whose output signal is denoted $I_{out_i}$. A switch S selects the mean current $I_{mean_i}$: either $I_{0_i}$, provided by the built-in averaging circuit, or an external current $I_{ext_i}$, provided by an external circuit. (c) Hardware implementation in VLSI of the filtering and averaging circuit computing the mean of the 12 mirrored photodiode currents ($I'_{ph_i}$) produced by the 12 normalizer circuits.
Fig. 6
Fig. 6 Root Mean Square (RMS) value of the simulated output noise (blue) and corresponding minimum detectable contrast (red) versus the photodiode current. The RMS values were obtained by integrating over $[10^{-5}, 10^{8}]$ Hz the output noise given by an AC noise simulation that takes into account all the transistor noise sources and a white-noise input photodiode. The minimum detectable contrast is defined here as the contrast giving rise to a transient response of the output signal equal to ±6 times the RMS noise.
Fig. 8
Fig. 8 Block diagram of the hardware setup and communication flow involved in the pixel characterization procedure.
Fig. 9
Fig. 9 (a) Example of M2APix responses to step changes in the LED irradiance ($I_{LED_i}$), all starting from the same initial irradiance ($I_{LED_0} = 1\,\mathrm{W/m^2}$). (b) Zoom on the temporal pixel responses shown in Fig. 9(a) between 0 and 50 ms. To distinguish more clearly between the steady-state and transient responses, the step changes were delayed by 10 ms ($T_s$) after the start of the acquisition procedure. The black circles mark 90% of the peak values $V_{out_i}$ and give qualitative information about the rise time ($T_r$). The contrast values are given by the Michelson formula $c_i = \frac{I_{LED_i} - I_{LED_0}}{I_{LED_i} + I_{LED_0}}$.
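The Michelson contrast used in this caption is straightforward to compute; a minimal sketch follows (the function name and the irradiance values are illustrative only):

```python
def michelson_contrast(i_led_i, i_led_0):
    """Michelson contrast c_i between a step irradiance I_LEDi and
    the initial irradiance I_LED0 (both strictly positive)."""
    return (i_led_i - i_led_0) / (i_led_i + i_led_0)

print(michelson_contrast(3.0, 1.0))     # 0.5 (step up)
print(michelson_contrast(1.0, 3.0))     # -0.5 (step down)
print(michelson_contrast(1000.0, 1.0))  # ~0.998 (always within (-1, 1))
```

Because the Michelson contrast is bounded within (−1, 1) whatever the size of the irradiance step, it is a convenient abscissa for comparing responses across decades of initial irradiance.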
Fig. 10
Fig. 10 Average rise time versus luminous contrast. Each point corresponds to the time ($T_r$) required to reach 90% of the peak value $V_{out_i}$, as depicted in Fig. 9(b). Each color refers to a different initial irradiance value $I_{LED_0}$: red ≈ 0.001 W/m², pink ≈ 0.01 W/m², dark blue ≈ 0.1 W/m², light blue ≈ 1 W/m², cyan ≈ 10 W/m², green ≈ 100 W/m².
Fig. 11
Fig. 11 (a) S-shaped curves and steady-state responses of the 12 pixels to LED irradiance changes in a 7-decade range. Each color refers to a different initial irradiance value $I_{LED_0}$ (red ≈ 0.001 W/m², pink ≈ 0.01 W/m², dark blue ≈ 0.1 W/m², light blue ≈ 1 W/m², cyan ≈ 10 W/m², green ≈ 100 W/m²; same colors and markers as in Fig. 10), and the data points correspond to the average peak value $V_{out_i}$ reached by the 12 pixels in response to a step change in the irradiance $I_{LED_i}$, as shown in Fig. 5. The steady-state response (black points) was obtained with $I_{LED_i} = I_{LED_0}$ at several values of $I_{LED_0}$, and the initial irradiance values $I_{LED_0}$ are indicated by large black circles. The shaded areas span the minimum to maximum values of the mean pixel output voltages. The average dispersion of each curve ($\sigma_{mean}$) ranged from 37.2 mV (green curve) to 85.4 mV (red curve). (b) Average peak response of the 12 pixels versus the contrast. The various curves correspond to the S-shaped curves in Fig. 11(a) (same colors and markers). The contrast is given by the Michelson formula $c_i = \frac{I_{LED_i} - I_{LED_0}}{I_{LED_i} + I_{LED_0}}$.
Fig. 12
Fig. 12 Absolute value of the error between the measured M2APix responses and those predicted by the model, versus the LED irradiance. As post-layout simulations of the circuit showed that the current-to-voltage converter gave a non-linear response at low current values, the theoretical values $V^{*}_{out_i}$ were computed by applying a look-up table of the current-to-voltage converter to equation (2), with $I_{ph} = I_{LED}$, instead of using equation (4). (Same colors and markers as in the previous figures.)
Fig. 13
Fig. 13 (a) Two sequences of small contrasts in a 2-decade irradiance range. Time responses of (b) the M2APix and (c) the Delbrück pixel [16], implemented in the same silicon retina (see Fig. 2), when exposed to the irradiance sequences in Fig. 13(a). The sequences were obtained by repeating ±2% and ±4% contrasts (blue line), and ±6% and ±12% contrasts (red line). The steps in each sequence were triggered every 0.5 s and the changes in the average irradiance every 5 s. For the sake of clarity, the timing of the sequence has been slightly shifted. (Spurious peaks, such as the one at 10.5 s, may be due to errors in the data transmission.)

Tables (1)


Table 1 Specifications of the M2APix and Delbrück pixels. The main advantage of the M2APix is that it responds monotonically over a very wide illuminance range without any loss of sensitivity or contrast resolution.

Equations (6)


$$V = V_m \frac{I^n}{I^n + \sigma^n} \tag{1}$$
$$I_{out_i} = I_b \frac{I_{ph_i}}{I_{ph_i} + I_{mean_i}} \tag{2}$$
$$V_{out} = R_f\, I_{out} \tag{3}$$
$$V_{out} = R_f I_b \frac{I_{ph}}{I_{ph} + I_{mean}} + V_{BG} \tag{4}$$
$$V_{out_0} = \frac{R_f I_b}{2} + V_{BG} \tag{5}$$
$$V_{out_i} = R_f I_b\, \frac{c_i + 1}{2} + V_{BG} \tag{6}$$
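A minimal numerical sketch of equations (2)–(6), assuming illustrative values for $R_f$, $I_b$ and $V_{BG}$ (the actual circuit values are not restated in this excerpt):

```python
def pixel_output(i_ph, i_mean, r_f=1e8, i_b=1e-8, v_bg=0.5):
    """Pixel output per Eq. (4): V_out = R_f*I_b*I_ph/(I_ph + I_mean) + V_BG.
    r_f (ohm), i_b (A) and v_bg (V) are illustrative values only."""
    return r_f * i_b * i_ph / (i_ph + i_mean) + v_bg

# Steady state (I_ph == I_mean), Eq. (5): V_out0 = R_f*I_b/2 + V_BG,
# independent of the absolute photocurrent level (auto-adaptation).
print(pixel_output(1e-12, 1e-12))  # ~1.0 V
print(pixel_output(1e-6, 1e-6))    # ~1.0 V

# Peak response to a step of Michelson contrast c_i, Eq. (6):
# while I_mean still holds I_ph0, I_phi/(I_phi + I_ph0) = (c_i + 1)/2.
i_ph0, c = 1e-9, 0.5
i_phi = i_ph0 * (1 + c) / (1 - c)      # photocurrent step realizing contrast c
print(pixel_output(i_phi, i_ph0))      # ~1.25 V
print(1e8 * 1e-8 * (c + 1) / 2 + 0.5)  # ~1.25 V, Eq. (6) directly
```

This also illustrates why the output always stays within a fixed voltage range: since $I_{ph}/(I_{ph}+I_{mean})$ lies in (0, 1), $V_{out}$ is confined between $V_{BG}$ and $R_f I_b + V_{BG}$ whatever the photocurrent.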
