Abstract

GaN-based micro-LEDs are emerging display and communication devices that can also work as photodetectors, enabling possible applications in machine vision. In this work, we experimentally measured the characteristics of a micro-LED based photodetector and, for the first time, proposed a feasible simulation of a novel artificial neural network (ANN) device based on a micro-LED based photodetector array, providing ultrafast imaging (∼133 million bins per second) and a high image recognition rate. The array itself constitutes a neural network in which the synaptic weights are tunable by the bias voltage. It has the potential to be integrated into novel machine vision and reconfigurable computing applications, acting as an accelerator and enabling related functionality expansion. Moreover, the multi-functionality of micro-LEDs broadens the potential for combining ANNs with display and communication.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

In the past ten years, Artificial Intelligence (AI) technology has developed rapidly, led by the current advances in deep learning [1]. Applications of AI technology cover many fields and show extraordinary advantages [2–8], enabling efficient automatic discrimination, recognition, control, etc. Among these, since images and videos carry abundant information, machine vision with artificial neural networks (ANNs) is the biggest beneficiary and has gradually become a basic integrated part of various intelligent systems. However, this prospect is limited by the increasing model size of ANNs, where higher accuracy requires more training parameters [9–13]. For example, GPipe raised the state-of-the-art ImageNet top-1 validation accuracy to 84.3% using 557 million parameters, but the model is so large that it can only be trained with a specialized pipeline-parallelism library and is difficult to deploy on existing devices [12], which means advanced ANNs cannot be easily implemented in a traditional von Neumann computing system [14,15]. Besides, handling such large-scale information and data imposes critical requirements on time and power consumption. Previous studies were mainly based on existing devices for preprocessing visual data on-chip without ANNs [16–19]. Driven by increasing demands, researchers have begun to focus on efficient neuromorphic processing of optical and/or electrical signals in the critical paths of ANNs and attempt to emulate ANN structures in hardware [20–23]. These optoelectronic devices have demonstrated notable advantages in various aspects, achieving higher recognition ability with lower power and latency. Recently, L. Mennel et al. reported a reconfigurable 2-D semiconductor (WSe2) photodiode ANN array as an ultrafast image sensor for imaging and image recognition, with an accuracy of 100% and a throughput of 20 million bins per second under ideal conditions [24].
Much progress has been made in related works, which means that novel optoelectronic ANN hardware can provide better performance and energy efficiency in machine vision than traditional computer architectures, and such devices are potential approaches to overcoming current bottlenecks in sensing, real-time computation, and power consumption.

Obviously, in the above-mentioned optoelectronic ANNs, high-performance optoelectronic devices are the key to achieving advanced systems. As a new type of solid-state light source, the micro light-emitting diode (micro-LED), a miniaturized LED with a typical size of 1-100 μm, has the unique advantages of small size, high brightness, high resolution, high electrical-to-optical modulation bandwidth, and high reliability, and has been widely used in display, visible light communication (VLC), optogenetics, spatial positioning, and other fields [25–32]. In addition to its solid-state light source characteristics, the micro-LED can also function as a photodetector, which has been used to combine light emission and imaging on a single chip [33]. Our previous research demonstrated that micro-LED based photodetectors have excellent photodetection characteristics, including high responsivity, high specific detectivity, high linear dynamic range, self-powered operation, and ultrafast response (tens of nanoseconds) [34,35]. Moreover, the responsivity of such a device can easily be adjusted by the bias voltage. This variable responsivity exactly meets the tunability requirement of synaptic weights in ANNs. Therefore, with a reasonable micro-LED array and circuit design, high-speed and high-density image sensors are expected to be realized. Compared with current micro-LED based imaging systems [33,36], in this work machine learning is applied to a micro-LED array for the first time to combine projected-image detection with information processing, which makes it possible to reduce the complexity and computational cost of the whole system. It can also reduce system delay and improve imaging speed.
In addition to the advantages of optoelectronic ANN devices, the light-emission capability of the micro-LED allows the expansion of multifunctional applications compared with ANNs composed of single-purpose photodetectors. Such an integrated chip will have huge application potential in full-color display, image recognition, lighting, and VLC. For instance, a high-resolution micro-LED based display could communicate and display in the emitting mode, as well as assist or realize image recognition in the detecting mode with low latency.

In our study, for the first time, an ultrafast imaging sensor based on a GaN-based micro-LED array is detailed in simulation; the array itself constitutes an ANN that can sense and recognize projected images. Using a small-scale array, the influence of noise is studied, and the accuracy reaches 100% for all noise levels. We also employed the MNIST database of handwritten digits to explore the potential of a large-scale array. Given the pulse response, the ideal throughput could be 133 million bins per second. Besides, the micro-LED is a multi-functional device that can be used in display and communication, which has the potential to be combined with ANNs to implement more interesting applications.

2. Micro-LED based photodetector characteristics

In this experiment, a green InGaN/GaN multiple quantum well (MQW) micro-LED array was used to verify the application potential in a high-performance optoelectronic ANN. First, the epitaxial wafer was grown on a sapphire substrate by metal-organic chemical vapor deposition (MOCVD); it mainly included a buffer layer, an n-type GaN layer, an InGaN/GaN MQW layer, and a p-type GaN layer. On this epitaxial wafer, combined with ultraviolet lithography, the deposited indium tin oxide (ITO) current-spreading layer and the p-GaN layer were etched by buffered oxide etchant (BOE) and inductively coupled plasma (ICP) etching, respectively, to form the mesa. Rapid thermal annealing (RTA) was used to achieve ohmic contacts. Then, Ti/Au was deposited on surface positions defined by photolithography to form the n-pad and p-pad. The final device structure is shown in Fig. 1(a). Such a conventional micro-LED structure preserves its performance as a solid-state light source. The optoelectronic ANN research carried out on this basis will help to realize an effective combination of light emission and image sensing.


Fig. 1. Characteristics of the micro-LED based photodetector. (a). The structure of the green micro-LED based photodetector. (b). Photocurrent versus bias voltage curves under different optical power densities. (c). Photocurrent versus power density curves at different bias voltages, inset: the portion of the curves from 0 W cm⁻² to 0.3 W cm⁻². (d). The pulse response of the micro-LED based photodetector at different bias voltages.


In order to test the photodetection characteristics of the micro-LED based photodetector, a 405 nm laser diode was used as the light source. All tests were based on a high-precision source meter (Keithley 2614B) and a high-speed probe platform. Figure 1(b) shows the typical photocurrent-voltage curves of the micro-LED based photodetector with optical power density ranging from 0.84 W cm⁻² to 1.24 W cm⁻². When the applied voltage (from p-pad to n-pad) is less than +2.14 V, the device presents photodetector characteristics, and the photocurrent varies obviously with the bias voltage. However, when the voltage exceeds +2.14 V, the current of the device becomes positive and rises rapidly, showing the typical I-V characteristics of an LED, which means the device only needs a simple bias-voltage adjustment to switch effectively between the photodetection and light-emission functions.

The linearity of the micro-LED based photodetector has also been studied in detail. Taking the device without bias voltage (0 V) as an example, under illumination of 0.84 W cm⁻², 1.04 W cm⁻², and 1.24 W cm⁻², the responsivities ($R = I_{ph}/P$, where $I_{ph}$ is the photocurrent and P is the received optical power) are 0.210 A W⁻¹, 0.209 A W⁻¹, and 0.209 A W⁻¹, respectively, showing excellent consistency. Moreover, the quantum efficiency ($\eta = Rh\nu/e$, where h, $\nu$, and e are the Planck constant, the frequency of the incident photon, and the elementary charge, respectively) can be used to characterize the photoelectric conversion performance of the device [37]. Under a power density of 1.04 W cm⁻², the $\eta$ of the micro photodetector at -5 V, 0 V, and +1.8 V is 89.1%, 64.2%, and 33.3%, respectively. The difference in quantum efficiency reflects the tunability by the bias voltage, which is mainly realized by changing the built-in electric field strength and thus the collection efficiency of photo-generated carriers. It is worth noting that even at +1.8 V, the $\eta$ of the device is still around 33%. Such a high quantum efficiency will effectively support the image detection of the optoelectronic ANN.
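The quantum-efficiency relation above can be checked numerically. The sketch below uses the measured 0 V responsivity of 0.210 A W⁻¹ and standard physical constants; the function name is ours, not from the paper.

```python
# Sketch: quantum efficiency eta = R * h * nu / e from the measured
# responsivity (numbers from the text; constants are CODATA values).
h = 6.626e-34       # Planck constant, J s
c = 2.998e8         # speed of light, m s^-1
e = 1.602e-19       # elementary charge, C

wavelength = 405e-9          # 405 nm incident laser
nu = c / wavelength          # photon frequency, Hz

def quantum_efficiency(responsivity_A_per_W: float) -> float:
    """Quantum efficiency from responsivity at the given photon energy."""
    return responsivity_A_per_W * h * nu / e

# At 0 V bias the measured responsivity is ~0.210 A/W.
eta_0V = quantum_efficiency(0.210)
print(f"eta at 0 V: {eta_0V:.1%}")   # ~64%, consistent with the reported 64.2%
```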

Furthermore, we measured the photocurrent versus optical power density at bias voltages of -5 V, 0 V, and +1.8 V to characterize the linearity and its tunability, as shown in Fig. 1(c). In the range from 0 W cm⁻² to 24.4 W cm⁻², the photoresponse at all three bias voltages shows good linearity, and photocurrent saturation has not yet appeared, which means the device can adapt to different light environments. The linearity of the photoresponse can be fitted by a power function ($I_{ph} = AP^{\alpha}$) [34]. The corresponding $\alpha$ values, which represent the linearity of the device, at -5 V, 0 V, and +1.8 V are 0.99, 1.02, and 1.02, respectively. Owing to this excellent linearity, the responsivity used as a synaptic weight in the algorithm remains unchanged. It is thus convenient to predict the photocurrent output of such a photodetector under any optical illumination, which helps to improve the system's image recognition accuracy in the subsequent design of ANN devices.
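The power-law fit $I_{ph} = AP^{\alpha}$ reduces to a straight-line fit in log-log space. The sketch below illustrates this with synthetic data standing in for the measured photocurrents (the coefficient values are illustrative):

```python
import numpy as np

# Sketch: recover alpha from I_ph = A * P**alpha by a linear fit of
# log(I_ph) against log(P); alpha near 1 indicates a linear photoresponse.
P = np.linspace(0.5, 24.4, 50)       # optical power density, W cm^-2
A_true, alpha_true = 0.21, 1.02      # illustrative coefficients
I_ph = A_true * P**alpha_true        # idealized (noise-free) photocurrent

alpha_fit, logA_fit = np.polyfit(np.log(P), np.log(I_ph), 1)
print(f"fitted alpha = {alpha_fit:.2f}")   # recovers ~1.02
```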

Also, the pulse response at different bias voltages was tested to characterize the response time of the micro photodetector. The pulse signal was generated by the pulse pattern generator of a signal quality analyzer (Anritsu MP1800, 0.1-14 Gb/s), with a pulse width of 1 ns and a cycle of 200 ns. The 405 nm laser diode and the micro-LED based photodetector were again used as the transmitter and the receiver, respectively. The pulsed photocurrent signal was converted into a voltage signal by a high-bandwidth trans-impedance amplifier (Femto DHPCA-100, 200 MHz) and received by a high-speed oscilloscope (Agilent DSA-X 96204Q, 63 GHz). The corresponding waveforms are shown in Fig. 1(d). The full width at half maximum (FWHM) of the device at -5 V reverse voltage and at zero bias was 4.47 ns and 7.50 ns, respectively. This ns-level response time means that the micro-LED based photodetector can detect high-speed (∼GHz) signals. In an optoelectronic ANN, this excellent response speed is the basis for realizing ultrafast image recognition. Theoretically, for the micro-LED based ANN photodetector in our experiment, an image processing rate of about 133 million bins per second could be achieved.
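The quoted rate follows directly from the pulse response: one bin per response time, taking the slower zero-bias FWHM of 7.50 ns as the limit. A back-of-envelope check:

```python
# Sketch: theoretical frame rate limited by the zero-bias pulse response.
fwhm_s = 7.50e-9                     # measured FWHM at 0 V, seconds
bins_per_second = 1.0 / fwhm_s
print(f"{bins_per_second / 1e6:.0f} million bins per second")  # ~133
```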

3. Micro-LED based ANN devices

Figure 2 illustrates the schematic diagram of ANN devices based on a 2D micro-LED array. Each micro-LED represents a subpixel, a pixel consists of M subpixels, and N such pixels are arranged as a 2D array (Fig. 2(a)). Under optical illumination, each subpixel delivers a photocurrent tunable by its bias voltage. Summing the output photocurrents of the subpixels of the same color performs the matrix-vector multiplication of the ANN algorithm, and the sums represent the classification result. The architecture of the classifier, which is implemented on-chip, operates as a single-layer perceptron (see Fig. 2(b)), which has proven ability to fit models and realize recognition [38]. To simplify the simulation, unless otherwise stated, all measurements and simulations were performed under illumination from a laser with a single wavelength of 405 nm; without noise, the standard dark power density is 0 W cm⁻² and the standard bright power density is 1.0 W cm⁻². The photocurrent under a given optical power density or bias voltage is precisely predicted from the experimental data by mathematical calculation. In the subsequent simulation with a small-scale device array, the micro-LED based ANN device was employed to recognize 3×3 (N=9) projected images, with four detectors (M=4) forming a pixel; such an image size is sufficient for pattern representation. To further explore potential applications, we used the MNIST database of handwritten digits in a simulation based on a large-scale array with N=784 and M=10. For the small-scale array, the accuracy reaches 100% under different levels of noise. For the large-scale array, the accuracy reaches 91% without damaged micro-LEDs or inconsistency among the micro-LED based photodetectors. These simulation results are sufficient for a proof-of-principle demonstration of image processing and recognition. How to further improve and develop such ANN devices remains a technological task for the future.
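The parallel-wiring scheme described above can be sketched as a plain matrix-vector product; the values below are illustrative stand-ins, not measured data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch: each of the N pixels receives optical power P[n]; each of its M
# subpixels has a bias-tunable responsivity R[n, m]; wiring same-colored
# subpixels in parallel sums their photocurrents into M classifier outputs.
N, M = 9, 4                               # 3x3 image, 4 output classes
P = rng.uniform(0.0, 1.0, size=N)         # projected power per pixel (a.u.)
R = rng.uniform(0.0, 0.21, size=(N, M))   # responsivities = synaptic weights

I_out = P @ R                             # summed photocurrents, shape (M,)
predicted_class = int(np.argmax(I_out))   # highest photocurrent wins
```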


Fig. 2. Schematic diagram of the imaging ANN micro-LED array. (a). Illustration of the micro-LED based photodetector array. Each micro-LED works as a subpixel, and subpixels of the same color are connected in parallel to provide the photocurrent output for recognition; (b). The architecture of the classifier, where the input layer and the fully-connected layer are implemented on-chip in the imaging sensor. The softmax activation function is calculated off-chip. (c). Four letters ‘N’, ‘T’, ‘V’ and ‘X’ in projected images with different Gaussian noise levels: no noise and crosstalk factor σ=0.1, 0.2, and 0.3, respectively.


The projected images are stylized into the four letters ‘N’, ‘T’, ‘V’, and ‘X’, as shown in Fig. 2(c). Each letter has the same number of bright and dark pixels and represents a different pattern. Two kinds of noise are taken into consideration [39]: Gaussian noise (with a standard deviation of 0.2) added to the optical power density of the incident light, and Gaussian noise (with a standard deviation of σ = 0.1, 0.2, or 0.3) induced by optical crosstalk between pixels. It is worth noting that, for crosstalk noise with a standard deviation of 0.3, the maximum power density of a dark pixel reaches up to 90% of that of a bright pixel.
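One plausible reading of this noise model can be sketched as follows; the way the two Gaussian terms combine is our own assumption, not necessarily the authors' exact implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch (our assumptions): Gaussian intensity noise (std 0.2) perturbs the
# bright pixels, and crosstalk leaks light into dark pixels with std sigma.
bright, sigma = 1.0, 0.3                      # W cm^-2 and crosstalk level
pattern = np.array([1, 0, 1,
                    0, 1, 0,
                    1, 0, 1], dtype=float)    # 'X'-like 3x3 bright/dark mask

intensity_noise = rng.normal(0.0, 0.2, size=9) * pattern
crosstalk = np.abs(rng.normal(0.0, sigma, size=9)) * (1.0 - pattern)
P = np.clip(bright * pattern + intensity_noise + crosstalk, 0.0, None)
```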

During each training epoch in the simulation with the 3×3 array, a set of S=40 randomly chosen letters (10 images per letter) was optically projected. Given the photodetector characteristics and for simulation simplicity, the weights were constrained to be non-negative. For the supervised learning feedback, one-hot encoding was employed, meaning that each kind of letter activates one of the M output neurons. The softmax function, commonly combined with one-hot encoding, is chosen as the activation:

$$O_n(\mathbf{I}) = \frac{e^{\alpha I_n}}{\sum_{k=1}^{M} e^{\alpha I_k}},$$
where $\alpha = 10^5\ \mathrm{A^{-1}}$ is the scaling factor that lets the activation function normalize the classification probabilities $O_n(\mathbf{I})$ so that their overall sum is 1. The loss function we used in the simulation is the cross-entropy function:
$$Loss = \frac{1}{S}\sum_{n=1}^{S} L_n = -\frac{1}{S}\sum_{n=1}^{S}\sum_{i=1}^{M} y_{ni}\log[O_{ni}(\mathbf{I})],$$
where $L_n$ is the loss of each sample and $y_{ni}$ is the binary label (0 or 1). The initial weights were random photocurrents around a bias voltage of 0 V. The weights after each training epoch were updated by backpropagation with stochastic gradient descent (SGD) at a learning rate of 0.1 [40].
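The training step described above (softmax over scaled photocurrents, cross-entropy loss against a one-hot label, SGD with the non-negative weight constraint) can be sketched as follows; the initialization scale and input pattern are illustrative, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(2)

# Sketch of one SGD step for the single-layer classifier. alpha and the
# learning rate follow the text; weights/inputs are illustrative.
N, M = 9, 4
alpha, lr = 1e5, 0.1                 # scaling factor (A^-1), learning rate

def softmax(I):
    z = alpha * I
    z = z - z.max()                  # subtract max for numerical stability
    ez = np.exp(z)
    return ez / ez.sum()

def sgd_step(W, x, label):
    O = softmax(x @ W)               # classification probabilities
    y = np.eye(M)[label]             # one-hot target
    grad = np.outer(x, alpha * (O - y))   # d(cross-entropy)/dW
    return np.clip(W - lr * grad, 0.0, None)  # non-negative constraint

W = rng.uniform(0.0, 1e-6, size=(N, M))  # ~zero-bias photocurrent scale
x = rng.uniform(0.0, 1.0, size=N)        # one noisy projected pattern
W = sgd_step(W, x, label=2)
```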

The accuracy and loss curves from initialization to epoch 80 are shown in Fig. 3(a) and (b). The accuracy for all noise levels eventually reaches 100%; lower noise makes convergence faster and more stable. For σ=0.1 and 0.2, 100% accuracy is reached around epoch 10 and remains stable. At the high noise level (σ=0.3), an almost stable 100% (lasting 7 epochs) is reached around epoch 20, and the accuracy no longer changes after epoch 50. The loss converges quickly in the first 30 epochs and decreases slowly after epoch 60. At higher noise levels, the final losses oscillate more, settling around higher values of 0.25, 0.17, and 0.09 for σ = 0.3, 0.2, and 0.1, respectively. Since the weights are tuned by the bias voltages, plotting the bias voltage distribution is equivalent to plotting the synaptic weight distribution and makes the data visualization more straightforward. As the accuracy stays at 100% from epoch 18 to epoch 23, the distribution at epoch 20 is chosen to observe the changes between it and the final distribution. In Fig. 3(c), as training proceeds, the distribution tends to shift so as to increase the difference between photocurrent outputs. For all noise levels, the numbers of negative and positive bias voltages are 20 and 16, respectively; the detailed distribution differs slightly in individual intervals.


Fig. 3. Results of the micro-LED based ANN device during the training process in the simulation with the 3×3 array. (a). Accuracy of image recognition for varying random noise levels; (b). Loss under different noise levels during training; (c). Bias voltage distributions of the device at noise levels of 0.1 and 0.3. The three subgraphs from top to bottom show the number of subpixels at initialization, epoch 20, and the end of training, respectively.


Moreover, Fig. 4 demonstrates the functionality of the micro-LED based ANN device in the detecting mode more directly and can be used to analyze the behavior seen in Fig. 3. Figure 4 plots the average output photocurrents when each letter is projected at a noise level of 0.3. The four curves in each subgraph correspond to the four letters, and the highest photocurrent determines which letter category the projected image is assigned to. For example, the upper-left subgraph shows the average photocurrent output at each epoch when the letter ‘N’ was projected. The difference between photocurrents can be understood as the robustness of the classifier against errors, which increases with the epochs. The letter ‘T’, with its horizontal and vertical patterns, is recognized most easily and quickly (after epoch 7). The letters ‘N’ and ‘V’ show the same output changes and can be identified after epoch 17 because of their similar diagonal and vertical patterns. The letter ‘X’ suffers the most, since its purely diagonal pattern is most affected by crosstalk noise. In the bottom-right subgraph, the output photocurrent for the letter ‘X’ is slightly separated around epoch 20 but not enough for recognition; the correct level is reached after epoch 50, and the ‘robustness’ barely decreases from then on, which explains the accuracy variation in Fig. 3(a).


Fig. 4. Average photocurrent output at each epoch with a noise level of 0.3. The four graphs correspond to the letters ‘N’, ‘T’, ‘V’ and ‘X’, respectively. The recognition result is determined by the maximum output photocurrent.


In the simulation based on the small-scale array, damaged micro-LEDs and device inconsistency were not taken into consideration; given the size of the small-scale array, such effects may occur but are inconspicuous. The impact of failed subpixels and device inconsistency is studied in subsequent simulations with the MNIST database of handwritten digits. To match the size of the MNIST images, we used micro-LED based ANN devices with N = 784 (28×28 = 784) and M = 10 (ten kinds of handwritten digits). The number of subpixels is 7840, and we assume that such a large-scale array normally has a certain number of damaged micro-LEDs, called failed subpixels. There is also inconsistency between different GaN-based micro-LED photodetectors. In order to focus on the impact of damaged micro-LEDs and device inconsistency in the large-scale array, we converted the MNIST images to a 1-level (binary) grayscale; this adjustment barely changes the patterns of the handwritten digits. The training process is similar to that described above, except that the size of each epoch is 400 (40 images per digit) and the number of epochs is about 200. In this simulation, some subpixels are randomly chosen to fail, meaning their photocurrent outputs stay at zero. To model the inconsistency, the photocurrent of each subpixel is randomly drawn from a Gaussian probability distribution with a given standard deviation. In Fig. 5(a), the accuracy at failed-subpixel proportions of 0%, 5%, 10%, 15%, and 20% is 90.5%, 85.5%, 83.8%, 82.4%, and 81.7%, respectively; the corresponding loss is 0.35, 0.46, 0.58, 0.71, and 0.78. It is noticeable that the decline in accuracy is up to about 8%. However, a failed-subpixel proportion of 20% is very high for common fabrication processes, which means the ANN devices are robust against on-chip damaged micro-LEDs. As shown in Fig. 5(b), the influence of the inconsistency is smaller than that of the failed subpixels: the accuracy decreases from 90.5% to 88.7% as the standard deviation increases from 0 to 0.5. Since 0.5 is a large standard deviation, the overall accuracy decline is only about 2%, and the loss is nearly unchanged. To maintain high accuracy, the number of failed subpixels in a large-scale array should be kept as small as possible; relevant algorithms may also be improved to reduce these effects.
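The failed-subpixel model can be sketched as a random mask on the weight matrix; the array dimensions and the 20% proportion follow the text, while the weights and input are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Sketch: a random fraction of the N x M subpixels is forced to zero
# photocurrent, equivalent to zeroing those weight entries at inference.
N, M = 784, 10                      # 28x28 MNIST input, 10 digit classes
fail_fraction = 0.20

alive = (rng.random((N, M)) >= fail_fraction).astype(float)  # 0 = failed
W = rng.uniform(0.0, 1.0, size=(N, M))   # illustrative trained weights
x = rng.random(N)                        # stand-in for a binarized image

I_out = x @ (W * alive)                  # failed subpixels contribute nothing
```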


Fig. 5. Results of the micro-LED based ANN device in the simulation with the 28×28 array. (a). Accuracy versus the proportion of failed subpixels; inset: loss versus the proportion of failed subpixels; (b). Accuracy versus the standard deviation; inset: loss versus the standard deviation. The inconsistency of the micro-LED based photodetectors is assumed to follow a Gaussian distribution with the given standard deviation.


The algorithm employed here is simple but sufficient to prove the feasibility of such a device in theory. However, the algorithm is essentially borrowed from traditional software and is not optimized for the dedicated micro-LED based ANN hardware; for instance, power consumption could be included in the loss function to trade off accuracy against energy efficiency, and algorithms could be optimized against the influence of failed subpixels. Moreover, the implementation requires a considerable number of high-precision voltage controllers, and the neural network can easily be made wider but not deeper without other optoelectronic devices (such as memristors, optoelectronic resistive random access memory (ORRAM), and other devices). Nevertheless, the low latency, reconfigurable recognition, and other multifunctional applications of such a micro-LED based ANN device are worthy of continued exploration in future research.

4. Conclusions

In summary, we proposed a novel ultrafast optoelectronic ANN device based on GaN-based micro-LEDs. The photoelectric effect of the micro-LED based photodetector has been studied. Over incident optical power densities from 0 W cm⁻² to 24.4 W cm⁻², the device showed excellent linearity and high quantum efficiency at different bias voltages, which supports the design of a micro-LED based optoelectronic ANN device. Moreover, the response speed of the micro-LED based photodetector was tested at -5 V and 0 V, with FWHM values of 4.47 ns and 7.50 ns, respectively; the corresponding theoretical image recognition rate can reach 133 million bins per second. In the simulation with a 3×3 array and four stylized letters, we tested the functionality of the micro-LED based ANN device for image recognition, which eventually reaches an accuracy of 100% even at a high noise level. In simulations with a 28×28 array and the MNIST database of handwritten digits, we studied the influence of failed subpixels and inconsistency in detail. The accuracy decreased from 90.5% to 81.7% as the proportion of failed subpixels increased from 0% to 20%, an overall decline of about 8%; due to inconsistency, the accuracy decreased from 90.5% to 88.7%, with the loss barely changed. The classifier can sense and recognize letters with different image patterns simultaneously using the photocurrent outputs. This shows the potential of various training algorithms for ultrafast machine vision with micro-LED based ANN devices.

Funding

National Key Research and Development Program of China (2021YFE0105300); National Natural Science Foundation of China (61974031); Science and Technology Commission of Shanghai Municipality (21511101303); Fudan University-CIOMP Joint Fund (FC2020-001).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521(7553), 436–444 (2015). [CrossRef]  

2. A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” Commun. ACM 60(6), 84–90 (2017). [CrossRef]  

3. T. Mikolov, A. Deoras, D. Povey, L. Burget, and J. Černocky, “Strategies for training large scale neural network language models,” in Proceedings of IEEE Workshop on Automatic Speech Recognition and Understanding (IEEE, 2011), pp.196–201.

4. G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, and B. Kingsbury, “Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups,” IEEE Signal Process. Mag. 29(6), 82–97 (2012). [CrossRef]  

5. J. Ma, R. P. Sheridan, A. Liaw, G. E. Dahl, and V. Svetnik, “Deep neural nets as a method for quantitative structure–activity relationships,” J. Chem. Inf. Model. 55(2), 263–274 (2015). [CrossRef]  

6. M. Helmstaedter, K. L. Briggman, S. C. Turaga, V. Jain, H. S. Seung, and W. Denk, “Connectomic reconstruction of the inner plexiform layer in the mouse retina,” Nature 500(7461), 168–174 (2013). [CrossRef]  

7. M. K. K. Leung, H. Y. Xiong, L. J. Lee, and B. J. Frey, “Deep learning of the tissue-regulated splicing code,” Bioinformatics 30(12), i121–i129 (2014). [CrossRef]  

8. H. Y. Xiong, B. Alipanahi, L. J. Lee, H. Bretschneider, D. Merico, R. K. C. Yuen, Y. Hua, S. Gueroussov, H. S. Najafabadi, T. R. Hughes, Q. Morris, Y. Barash, A. R. Krainer, N. Jojic, S. W. Scherer, B. J. Blencowe, and B. J. Frey, “The human splicing code reveals new insights into the genetic determinants of disease,” Science 347(6218), 1254806 (2015). [CrossRef]  

9. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2016), pp. 770–778.

10. S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He, “Aggregated residual transformations for deep neural networks,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2017), pp. 1492–1500.

11. J. Hu, L. Shen, and G. Sun, “Squeeze-and-excitation networks,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2018), pp. 7132–7141.

12. Y. Huang, Y. Cheng, A. Bapna, O. Firat, D. Chen, M. Chen, H. Lee, J. Ngiam, Q. V. Le, Y. Wu, and Z. Chen, “Gpipe: Efficient training of giant neural networks using pipeline parallelism,” Advances in Neural Information Processing Systems 32, 103–112 (2019).

13. M. Tan and Q. Le, “Efficientnet: Rethinking model scaling for convolutional neural networks,” in Proceedings of the 36th International Conference on Machine Learning (PMLR, 2019), pp. 6105–6114.

14. O. Mutlu, S. Ghose, J. Gómez-Luna, and R. Ausavarungnirun, “Processing data where it makes sense: enabling in-memory computation,” Microprocessors and Microsystems 67, 28–41 (2019). [CrossRef]  

15. A. Sebastian, M. L. Gallo, R. Khaddam-Aljameh, and E. Eleftheriou, “Memory devices and applications for in-memory computing,” Nat. Nanotechnol. 15(7), 529–544 (2020). [CrossRef]  

16. C. A. Mead and M. A. Mahowald, “A silicon model of early visual processing,” Neural Networks 1(1), 91–97 (1988). [CrossRef]  

17. K. Kyuma, E. Lange, J. Ohta, A. Hermanns, B. Banish, and M. Oita, “Artificial retinas—fast, versatile image processors,” Nature 372(6502), 197–198 (1994). [CrossRef]  

18. N. Cottini, M. Gottardi, N. Massari, R. Passerone, and Z. Smilansky, “A 33 μw 64×64 pixel vision sensor embedding robust dynamic background subtraction for event detection and scene interpretation,” IEEE J. Solid-State Circuits 48(3), 850–863 (2013). [CrossRef]  

19. C. Posch, T. Serrano-Gotarredona, B. Linares-Barranco, and T. Delbruck, “Retinomorphic event-based vision sensors: bioinspired cameras with spiking output,” Proc. IEEE 102(10), 1470–1484 (2014). [CrossRef]  

20. K. Roy, A. Jaiswal, and P. Panda, “Towards spike-based machine intelligence with neuromorphic computing,” Nature 575(7784), 607–617 (2019). [CrossRef]  

21. X. Lin, Y. Rivenson, N. T. Yardimci, M. Veli, Y. Luo, M. Jarrahi, and A. Ozcan, “All-optical machine learning using diffractive deep neural networks,” Science 361(6406), 1004–1008 (2018). [CrossRef]  

22. Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljačić, “Deep learning with coherent nanophotonic circuits,” Nat. Photonics 11(7), 441–446 (2017). [CrossRef]  

23. F. Zhou, Z. Zhou, J. Chen, T. H. Choy, J. Wang, N. Zhang, Z. Lin, S. Yu, J. Kang, H.-S. P. Wong, and Y. Chai, “Optoelectronic resistive random access memory for neuromorphic vision sensors,” Nat. Nanotechnol. 14(8), 776–782 (2019). [CrossRef]  

24. L. Mennel, J. Symonowicz, S. Wachter, D. K. Polyushkin, A. J. Molina-Mendoza, and T. Mueller, “Ultrafast machine vision with 2D material neural network image sensors,” Nature 579(7797), 62–66 (2020). [CrossRef]  

25. X. Zhou, P. Tian, C.-W. Sher, J. Wu, H. Liu, R. Liu, and H.-C. Kuo, “Growth, transfer printing and colour conversion techniques towards full-colour micro-LED display,” Prog. Quantum Electron. 71, 100263 (2020). [CrossRef]  

26. T. Wu, C.-W. Sher, Y. Lin, C.-F. Lee, S. Liang, Y. Lu, S.-W. H. Chen, W. Guo, H.-C. Kuo, and Z. Chen, “Mini-LED and micro-LED: Promising candidates for the next generation display technology,” Appl. Sci. 8(9), 1557 (2018). [CrossRef]  

27. X. Liu, P. Tian, Z. Wei, S. Yi, Y. Huang, X. Zhou, Z.-J. Qiu, L. Hu, Z. Fang, C. Cong, L. Zheng, and R. Liu, “Gbps long-distance real-time visible light communications using a high-bandwidth GaN-based micro-LED,” IEEE Photonics J. 9(6), 7204909 (2017). [CrossRef]  

28. N. McAlinden, D. Massoubre, E. Richardson, E. Gu, S. Sakata, M. D. Dawson, and K. Mathieson, “Thermal and optical characterization of micro-LED probes for in vivo optogenetic neural stimulation,” Opt. Lett. 38(6), 992–994 (2013). [CrossRef]  

29. E. Xie, R. Bian, X. He, M. S. Islim, C. Chen, J. J. McKendry, E. Gu, H. Haas, and M. D. Dawson, “Over 10 Gbps VLC for long-distance applications using a GaN-based series-biased micro-LED array,” IEEE Photonics Technol. Lett. 32(9), 499–502 (2020). [CrossRef]  

30. Y. Huang, E.-L. Hsiang, M.-Y. Deng, and S.-T. Wu, “Mini-LED, micro-LED and OLED displays: present status and future perspectives,” Light: Sci. Appl. 9(1), 105 (2020). [CrossRef]  

31. N. McAlinden, E. Gu, M. D. Dawson, S. Sakata, and K. Mathieson, “Optogenetic activation of neocortical neurons in vivo with a sapphire-based micro-scale LED probe,” Front. Neural Circuits 9, 25 (2015). [CrossRef]  

32. J. Herrnsdorf, M. J. Strain, E. Gu, R. K. Henderson, and M. D. Dawson, “Positioning and space-division multiple access enabled by structured illumination with light-emitting diodes,” J. Lightwave Technol. 35(12), 2339–2345 (2017). [CrossRef]  

33. Y. Wang, X. Gao, K. Fu, F. Qin, H. Zhu, Y. Liu, and H. Amano, “Single-chip imaging system that simultaneously transmits light,” Appl. Phys. Express 13(10), 101002 (2020). [CrossRef]  

34. X. Liu, R. Lin, H. Chen, S. Zhang, Z. Qian, G. Zhou, X. Chen, X. Zhou, L. Zheng, R. Liu, and P. Tian, “High-bandwidth InGaN self-powered detector arrays toward MIMO visible light communication based on micro-LED arrays,” ACS Photonics 6(12), 3186–3195 (2019). [CrossRef]  

35. R. Lin, X. Liu, G. Zhou, Z. Qian, X. Cui, and P. Tian, “InGaN micro-LED array enabled advanced underwater wireless optical communication and underwater charging,” Adv. Opt. Mater. 9(12), 2002211 (2021). [CrossRef]  

36. K. Fu, X. Gao, F. Qin, Y. Song, L. Wang, Z. Ye, Y. Su, H. Zhu, X. Ji, and Y. Wang, “Simultaneous illumination-imaging,” Adv. Mater. Technol. 6(8), 2100227 (2021). [CrossRef]  

37. J. M. Wu and W. E. Chang, “Ultrahigh responsivity and external quantum efficiency of an ultraviolet-light photodetector based on a single VO2 microwire,” ACS Appl. Mater. Interfaces 6(16), 14286–14292 (2014). [CrossRef]  

38. K. Hornik, “Approximation capabilities of multilayer feedforward networks,” Neural Networks 4(2), 251–257 (1991). [CrossRef]  

39. C. M. Bishop, “Training with noise is equivalent to Tikhonov regularization,” Neural Computation 7(1), 108–116 (1995). [CrossRef]  

40. D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature 323(6088), 533–536 (1986). [CrossRef]  

[Crossref]

J. Herrnsdorf, M. J. Strain, E. Gu, R. K. Henderson, and M. D. Dawson, “Positioning and space-division multiple access enabled by structured illumination with light-emitting diodes,” J. Lightwave Technol. 35(12), 2339–2345 (2017).
[Crossref]

N. McAlinden, E. Gu, M. D. Dawson, S. Sakata, and K. Mathieson, “Optogenetic activation of neocortical neurons in vivo with a sapphire-based micro-scale LED probe,” Front. Neural Circuits 9, 25 (2015).
[Crossref]

N. McAlinden, D. Massoubre, E. Richardson, E. Gu, S. Sakata, M. D. Dawson, and K. Mathieson, “Thermal and optical characterization of micro-LED probes for in vivo optogenetic neural stimulation,” Opt. Lett. 38(6), 992–994 (2013).
[Crossref]

Gueroussov, S.

H. Y. Xiong, B. Alipanahi, L. J. Lee, H. Bretschneider, D. Merico, R. K. C. Yuen, Y. Hua, S. Gueroussov, H. S. Najafabadi, T. R. Hughes, Q. Morris, Y. Barash, A. R. Krainer, N. Jojic, S. W. Scherer, B. J. Blencowe, and B. J. Frey, “The human splicing code reveals new insights into the genetic determinants of disease,” Science 347(6218), 1254806 (2015).
[Crossref]

Guo, W.

T. Wu, C.-W. Sher, Y. Lin, C.-F. Lee, S. Liang, Y. Lu, S.-W. H. Chen, W. Guo, H.-C. Kuo, and Z. Chen, “Mini-LED and micro-LED: Promising candidates for the next generation display technology,” Appl. Sci. 8(9), 1557 (2018).
[Crossref]

Haas, H.

E. Xie, R. Bian, X. He, M. S. Islim, C. Chen, J. J. McKendry, E. Gu, H. Haas, and M. D. Dawson, “Over 10 Gbps VLC for long-distance applications using a GaN-based series-biased micro-LED array,” IEEE Photonics Technol. Lett. 32(9), 499–502 (2020).
[Crossref]

Harris, N. C.

Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljačić, “Deep learning with coherent nanophotonic circuits,” Nat. Photonics 11(7), 441–446 (2017).
[Crossref]

He, K.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2016), pp. 770–778.

S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He, “Aggregated residual transformations for deep neural networks,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2017), pp. 1492–1500.

He, X.

E. Xie, R. Bian, X. He, M. S. Islim, C. Chen, J. J. McKendry, E. Gu, H. Haas, and M. D. Dawson, “Over 10 Gbps VLC for long-distance applications using a GaN-based series-biased micro-LED array,” IEEE Photonics Technol. Lett. 32(9), 499–502 (2020).
[Crossref]

Helmstaedter, M.

M. Helmstaedter, K. L. Briggman, S. C. Turaga, V. Jain, H. S. Seung, and W. Denk, “Connectomic reconstruction of the inner plexiform layer in the mouse retina,” Nature 500(7461), 168–174 (2013).
[Crossref]

Henderson, R. K.

Hermanns, A.

K. Kyuma, E. Lange, J. Ohta, A. Hermanns, B. Banish, and M. Oita, “Artificial retinas—fast, versatile image processors,” Nature 372(6502), 197–198 (1994).
[Crossref]

Herrnsdorf, J.

Hinton, G.

Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521(7553), 436–444 (2015).
[Crossref]

G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, and B. Kingsbury, “Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups,” IEEE Signal Process. Mag. 29(6), 82–97 (2012).
[Crossref]

Hinton, G. E.

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” Commun. ACM 60(6), 84–90 (2017).
[Crossref]

D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature 323(6088), 533–536 (1986).
[Crossref]

Hochberg, M.

Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljačić, “Deep learning with coherent nanophotonic circuits,” Nat. Photonics 11(7), 441–446 (2017).
[Crossref]

Hornik, K.

K. Hornik, “Approximation capabilities of multilayer feedforward networks,” Neural Networks 4(2), 251–257 (1991).
[Crossref]

Hsiang, E.-L.

Y. Huang, E.-L. Hsiang, M.-Y. Deng, and S.-T. Wu, “Mini-LED, micro-LED and OLED displays: present status and future perspectives,” Light: Sci. Appl. 9(1), 105 (2020).
[Crossref]

Hu, J.

J. Hu, L. Shen, and G. Sun, “Squeeze-and-excitation networks,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2018), pp. 7132–7141.

Hu, L.

X. Liu, P. Tian, Z. Wei, S. Yi, Y. Huang, X. Zhou, Z.-J. Qiu, L. Hu, Z. Fang, C. Cong, L. Zheng, and R. Liu, “Gbps long-distance real-time visible light communications using a high-bandwidth GaN-based micro-LED,” IEEE Photonics J. 9(6), 7204909 (2017).
[Crossref]

Hua, Y.

H. Y. Xiong, B. Alipanahi, L. J. Lee, H. Bretschneider, D. Merico, R. K. C. Yuen, Y. Hua, S. Gueroussov, H. S. Najafabadi, T. R. Hughes, Q. Morris, Y. Barash, A. R. Krainer, N. Jojic, S. W. Scherer, B. J. Blencowe, and B. J. Frey, “The human splicing code reveals new insights into the genetic determinants of disease,” Science 347(6218), 1254806 (2015).
[Crossref]

Huang, Y.

Y. Huang, E.-L. Hsiang, M.-Y. Deng, and S.-T. Wu, “Mini-LED, micro-LED and OLED displays: present status and future perspectives,” Light: Sci. Appl. 9(1), 105 (2020).
[Crossref]

Y. Huang, Y. Cheng, A. Bapna, O. Firat, D. Chen, M. Chen, H. Lee, J. Ngiam, Q. V. Le, Y. Wu, and Z. Chen, “Gpipe: Efficient training of giant neural networks using pipeline parallelism,” Advances in Neural Information Processing Systems 32, 103–112 (2019).

X. Liu, P. Tian, Z. Wei, S. Yi, Y. Huang, X. Zhou, Z.-J. Qiu, L. Hu, Z. Fang, C. Cong, L. Zheng, and R. Liu, “Gbps long-distance real-time visible light communications using a high-bandwidth GaN-based micro-LED,” IEEE Photonics J. 9(6), 7204909 (2017).
[Crossref]

Hughes, T. R.

H. Y. Xiong, B. Alipanahi, L. J. Lee, H. Bretschneider, D. Merico, R. K. C. Yuen, Y. Hua, S. Gueroussov, H. S. Najafabadi, T. R. Hughes, Q. Morris, Y. Barash, A. R. Krainer, N. Jojic, S. W. Scherer, B. J. Blencowe, and B. J. Frey, “The human splicing code reveals new insights into the genetic determinants of disease,” Science 347(6218), 1254806 (2015).
[Crossref]

Islim, M. S.

E. Xie, R. Bian, X. He, M. S. Islim, C. Chen, J. J. McKendry, E. Gu, H. Haas, and M. D. Dawson, “Over 10 Gbps VLC for long-distance applications using a GaN-based series-biased micro-LED array,” IEEE Photonics Technol. Lett. 32(9), 499–502 (2020).
[Crossref]

Jain, V.

M. Helmstaedter, K. L. Briggman, S. C. Turaga, V. Jain, H. S. Seung, and W. Denk, “Connectomic reconstruction of the inner plexiform layer in the mouse retina,” Nature 500(7461), 168–174 (2013).
[Crossref]

Jaiswal, A.

K. Roy, A. Jaiswal, and P. Panda, “Towards spike-based machine intelligence with neuromorphic computing,” Nature 575(7784), 607–617 (2019).
[Crossref]

Jaitly, N.

G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, and B. Kingsbury, “Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups,” IEEE Signal Process. Mag. 29(6), 82–97 (2012).
[Crossref]

Jarrahi, M.

X. Lin, Y. Rivenson, N. T. Yardimci, M. Veli, Y. Luo, M. Jarrahi, and A. Ozcan, “All-optical machine learning using diffractive deep neural networks,” Science 361(6406), 1004–1008 (2018).
[Crossref]

Ji, X.

K. Fu, X. Gao, F. Qin, Y. Song, L. Wang, Z. Ye, Y. Su, H. Zhu, X. Ji, and Y. Wang, “Simultaneous illumination-imaging,” Adv. Mater. Technol. 6(8), 2100227 (2021).
[Crossref]

Jojic, N.

H. Y. Xiong, B. Alipanahi, L. J. Lee, H. Bretschneider, D. Merico, R. K. C. Yuen, Y. Hua, S. Gueroussov, H. S. Najafabadi, T. R. Hughes, Q. Morris, Y. Barash, A. R. Krainer, N. Jojic, S. W. Scherer, B. J. Blencowe, and B. J. Frey, “The human splicing code reveals new insights into the genetic determinants of disease,” Science 347(6218), 1254806 (2015).
[Crossref]

Kang, J.

F. Zhou, Z. Zhou, J. Chen, T. H. Choy, J. Wang, N. Zhang, Z. Lin, S. Yu, J. Kang, H.-S. P. Wong, and Y. Chai, “Optoelectronic resistive random access memory for neuromorphic vision sensors,” Nat. Nanotechnol. 14(8), 776–782 (2019).
[Crossref]

Khaddam-Aljameh, R.

A. Sebastian, M. L. Gallo, R. Khaddam-Aljameh, and E. Eleftheriou, “Memory devices and applications for in-memory computing,” Nat. Nanotechnol. 15(7), 529–544 (2020).
[Crossref]

Kingsbury, B.

G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, and B. Kingsbury, “Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups,” IEEE Signal Process. Mag. 29(6), 82–97 (2012).
[Crossref]

Krainer, A. R.

H. Y. Xiong, B. Alipanahi, L. J. Lee, H. Bretschneider, D. Merico, R. K. C. Yuen, Y. Hua, S. Gueroussov, H. S. Najafabadi, T. R. Hughes, Q. Morris, Y. Barash, A. R. Krainer, N. Jojic, S. W. Scherer, B. J. Blencowe, and B. J. Frey, “The human splicing code reveals new insights into the genetic determinants of disease,” Science 347(6218), 1254806 (2015).
[Crossref]

Krizhevsky, A.

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” Commun. ACM 60(6), 84–90 (2017).
[Crossref]

Kuo, H.-C.

X. Zhou, P. Tian, C.-W. Sher, J. Wu, H. Liu, R. Liu, and H.-C. Kuo, “Growth, transfer printing and colour conversion techniques towards full-colour micro-LED display,” Prog. Quantum Electron. 71, 100263 (2020).
[Crossref]

T. Wu, C.-W. Sher, Y. Lin, C.-F. Lee, S. Liang, Y. Lu, S.-W. H. Chen, W. Guo, H.-C. Kuo, and Z. Chen, “Mini-LED and micro-LED: Promising candidates for the next generation display technology,” Appl. Sci. 8(9), 1557 (2018).
[Crossref]

Kyuma, K.

K. Kyuma, E. Lange, J. Ohta, A. Hermanns, B. Banish, and M. Oita, “Artificial retinas—fast, versatile image processors,” Nature 372(6502), 197–198 (1994).
[Crossref]

Lange, E.

K. Kyuma, E. Lange, J. Ohta, A. Hermanns, B. Banish, and M. Oita, “Artificial retinas—fast, versatile image processors,” Nature 372(6502), 197–198 (1994).
[Crossref]

Larochelle, H.

Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljačić, “Deep learning with coherent nanophotonic circuits,” Nat. Photonics 11(7), 441–446 (2017).
[Crossref]

Le, Q.

M. Tan and Q. Le, “Efficientnet: Rethinking model scaling for convolutional neural networks,” in Proceedings of the 36th International Conference on Machine Learning (PMLR, 2019), pp. 6105–6114.

Le, Q. V.

Y. Huang, Y. Cheng, A. Bapna, O. Firat, D. Chen, M. Chen, H. Lee, J. Ngiam, Q. V. Le, Y. Wu, and Z. Chen, “Gpipe: Efficient training of giant neural networks using pipeline parallelism,” Advances in Neural Information Processing Systems 32, 103–112 (2019).

LeCun, Y.

Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521(7553), 436–444 (2015).
[Crossref]

Lee, C.-F.

T. Wu, C.-W. Sher, Y. Lin, C.-F. Lee, S. Liang, Y. Lu, S.-W. H. Chen, W. Guo, H.-C. Kuo, and Z. Chen, “Mini-LED and micro-LED: Promising candidates for the next generation display technology,” Appl. Sci. 8(9), 1557 (2018).
[Crossref]

Lee, H.

Y. Huang, Y. Cheng, A. Bapna, O. Firat, D. Chen, M. Chen, H. Lee, J. Ngiam, Q. V. Le, Y. Wu, and Z. Chen, “Gpipe: Efficient training of giant neural networks using pipeline parallelism,” Advances in Neural Information Processing Systems 32, 103–112 (2019).

Lee, L. J.

H. Y. Xiong, B. Alipanahi, L. J. Lee, H. Bretschneider, D. Merico, R. K. C. Yuen, Y. Hua, S. Gueroussov, H. S. Najafabadi, T. R. Hughes, Q. Morris, Y. Barash, A. R. Krainer, N. Jojic, S. W. Scherer, B. J. Blencowe, and B. J. Frey, “The human splicing code reveals new insights into the genetic determinants of disease,” Science 347(6218), 1254806 (2015).
[Crossref]

M. K. K. Leung, H. Y. Xiong, L. J. Lee, and B. J. Frey, “Deep learning of the tissue-regulated splicing code,” Bioinformatics 30(12), i121–i129 (2014).
[Crossref]

Leung, M. K. K.

M. K. K. Leung, H. Y. Xiong, L. J. Lee, and B. J. Frey, “Deep learning of the tissue-regulated splicing code,” Bioinformatics 30(12), i121–i129 (2014).
[Crossref]

Liang, S.

T. Wu, C.-W. Sher, Y. Lin, C.-F. Lee, S. Liang, Y. Lu, S.-W. H. Chen, W. Guo, H.-C. Kuo, and Z. Chen, “Mini-LED and micro-LED: Promising candidates for the next generation display technology,” Appl. Sci. 8(9), 1557 (2018).
[Crossref]

Liaw, A.

J. Ma, R. P. Sheridan, A. Liaw, G. E. Dahl, and V. Svetnik, “Deep neural nets as a method for quantitative structure–activity relationships,” J. Chem. Inf. Model. 55(2), 263–274 (2015).
[Crossref]

Lin, R.

R. Lin, X. Liu, G. Zhou, Z. Qian, X. Cui, and P. Tian, “InGaN micro-LED array enabled advanced underwater wireless optical communication and underwater charging,” Adv. Opt. Mater. 9(12), 2002211 (2021).
[Crossref]

X. Liu, R. Lin, H. Chen, S. Zhang, Z. Qian, G. Zhou, X. Chen, X. Zhou, L. Zheng, R. Liu, and P. Tian, “High-bandwidth InGaN self-powered detector arrays toward MIMO visible light communication based on micro-LED arrays,” ACS Photonics 6(12), 3186–3195 (2019).
[Crossref]

Lin, X.

X. Lin, Y. Rivenson, N. T. Yardimci, M. Veli, Y. Luo, M. Jarrahi, and A. Ozcan, “All-optical machine learning using diffractive deep neural networks,” Science 361(6406), 1004–1008 (2018).
[Crossref]

Lin, Y.

T. Wu, C.-W. Sher, Y. Lin, C.-F. Lee, S. Liang, Y. Lu, S.-W. H. Chen, W. Guo, H.-C. Kuo, and Z. Chen, “Mini-LED and micro-LED: Promising candidates for the next generation display technology,” Appl. Sci. 8(9), 1557 (2018).
[Crossref]

Lin, Z.

F. Zhou, Z. Zhou, J. Chen, T. H. Choy, J. Wang, N. Zhang, Z. Lin, S. Yu, J. Kang, H.-S. P. Wong, and Y. Chai, “Optoelectronic resistive random access memory for neuromorphic vision sensors,” Nat. Nanotechnol. 14(8), 776–782 (2019).
[Crossref]

Linares-Barranco, B.

C. Posch, T. Serrano-Gotarredona, B. Linares-Barranco, and T. Delbruck, “Retinomorphic event-based vision sensors: bioinspired cameras with spiking output,” Proc. IEEE 102(10), 1470–1484 (2014).
[Crossref]

Liu, H.

X. Zhou, P. Tian, C.-W. Sher, J. Wu, H. Liu, R. Liu, and H.-C. Kuo, “Growth, transfer printing and colour conversion techniques towards full-colour micro-LED display,” Prog. Quantum Electron. 71, 100263 (2020).
[Crossref]

Liu, R.

X. Zhou, P. Tian, C.-W. Sher, J. Wu, H. Liu, R. Liu, and H.-C. Kuo, “Growth, transfer printing and colour conversion techniques towards full-colour micro-LED display,” Prog. Quantum Electron. 71, 100263 (2020).
[Crossref]

X. Liu, R. Lin, H. Chen, S. Zhang, Z. Qian, G. Zhou, X. Chen, X. Zhou, L. Zheng, R. Liu, and P. Tian, “High-bandwidth InGaN self-powered detector arrays toward MIMO visible light communication based on micro-LED arrays,” ACS Photonics 6(12), 3186–3195 (2019).
[Crossref]

X. Liu, P. Tian, Z. Wei, S. Yi, Y. Huang, X. Zhou, Z.-J. Qiu, L. Hu, Z. Fang, C. Cong, L. Zheng, and R. Liu, “Gbps long-distance real-time visible light communications using a high-bandwidth GaN-based micro-LED,” IEEE Photonics J. 9(6), 7204909 (2017).
[Crossref]

Liu, X.

R. Lin, X. Liu, G. Zhou, Z. Qian, X. Cui, and P. Tian, “InGaN micro-LED array enabled advanced underwater wireless optical communication and underwater charging,” Adv. Opt. Mater. 9(12), 2002211 (2021).
[Crossref]

X. Liu, R. Lin, H. Chen, S. Zhang, Z. Qian, G. Zhou, X. Chen, X. Zhou, L. Zheng, R. Liu, and P. Tian, “High-bandwidth InGaN self-powered detector arrays toward MIMO visible light communication based on micro-LED arrays,” ACS Photonics 6(12), 3186–3195 (2019).
[Crossref]

X. Liu, P. Tian, Z. Wei, S. Yi, Y. Huang, X. Zhou, Z.-J. Qiu, L. Hu, Z. Fang, C. Cong, L. Zheng, and R. Liu, “Gbps long-distance real-time visible light communications using a high-bandwidth GaN-based micro-LED,” IEEE Photonics J. 9(6), 7204909 (2017).
[Crossref]

Liu, Y.

Y. Wang, X. Gao, K. Fu, F. Qin, H. Zhu, Y. Liu, and H. Amano, “Single-chip imaging system that simultaneously transmits light,” Appl. Phys. Express 13(10), 101002 (2020).
[Crossref]

Lu, Y.

T. Wu, C.-W. Sher, Y. Lin, C.-F. Lee, S. Liang, Y. Lu, S.-W. H. Chen, W. Guo, H.-C. Kuo, and Z. Chen, “Mini-LED and micro-LED: Promising candidates for the next generation display technology,” Appl. Sci. 8(9), 1557 (2018).
[Crossref]

Luo, Y.

X. Lin, Y. Rivenson, N. T. Yardimci, M. Veli, Y. Luo, M. Jarrahi, and A. Ozcan, “All-optical machine learning using diffractive deep neural networks,” Science 361(6406), 1004–1008 (2018).
[Crossref]

Ma, J.

J. Ma, R. P. Sheridan, A. Liaw, G. E. Dahl, and V. Svetnik, “Deep neural nets as a method for quantitative structure–activity relationships,” J. Chem. Inf. Model. 55(2), 263–274 (2015).
[Crossref]

Mahowald, M. A.

C. A. Mead and M. A. Mahowald, “A silicon model of early visual processing,” Neural Networks 1(1), 91–97 (1988).
[Crossref]

Massari, N.

N. Cottini, M. Gottardi, N. Massari, R. Passerone, and Z. Smilansky, “A 33 μw 64×64 pixel vision sensor embedding robust dynamic background subtraction for event detection and scene interpretation,” IEEE J. Solid-State Circuits 48(3), 850–863 (2013).
[Crossref]

Massoubre, D.

Mathieson, K.

N. McAlinden, E. Gu, M. D. Dawson, S. Sakata, and K. Mathieson, “Optogenetic activation of neocortical neurons in vivo with a sapphire-based micro-scale LED probe,” Front. Neural Circuits 9, 25 (2015).
[Crossref]

N. McAlinden, D. Massoubre, E. Richardson, E. Gu, S. Sakata, M. D. Dawson, and K. Mathieson, “Thermal and optical characterization of micro-LED probes for in vivo optogenetic neural stimulation,” Opt. Lett. 38(6), 992–994 (2013).
[Crossref]

McAlinden, N.

N. McAlinden, E. Gu, M. D. Dawson, S. Sakata, and K. Mathieson, “Optogenetic activation of neocortical neurons in vivo with a sapphire-based micro-scale LED probe,” Front. Neural Circuits 9, 25 (2015).
[Crossref]

N. McAlinden, D. Massoubre, E. Richardson, E. Gu, S. Sakata, M. D. Dawson, and K. Mathieson, “Thermal and optical characterization of micro-LED probes for in vivo optogenetic neural stimulation,” Opt. Lett. 38(6), 992–994 (2013).
[Crossref]

McKendry, J. J.

E. Xie, R. Bian, X. He, M. S. Islim, C. Chen, J. J. McKendry, E. Gu, H. Haas, and M. D. Dawson, “Over 10 Gbps VLC for long-distance applications using a GaN-based series-biased micro-LED array,” IEEE Photonics Technol. Lett. 32(9), 499–502 (2020).
[Crossref]

Mead, C. A.

C. A. Mead and M. A. Mahowald, “A silicon model of early visual processing,” Neural Networks 1(1), 91–97 (1988).
[Crossref]

Mennel, L.

L. Mennel, J. Symonowicz, S. Wachter, D. K. Polyushkin, A. J. Molina-Mendoza, and T. Mueller, “Ultrafast machine vision with 2D material neural network image sensors,” Nature 579(7797), 62–66 (2020).
[Crossref]

Merico, D.

H. Y. Xiong, B. Alipanahi, L. J. Lee, H. Bretschneider, D. Merico, R. K. C. Yuen, Y. Hua, S. Gueroussov, H. S. Najafabadi, T. R. Hughes, Q. Morris, Y. Barash, A. R. Krainer, N. Jojic, S. W. Scherer, B. J. Blencowe, and B. J. Frey, “The human splicing code reveals new insights into the genetic determinants of disease,” Science 347(6218), 1254806 (2015).
[Crossref]

Mikolov, T.

T. Mikolov, A. Deoras, D. Povey, L. Burget, and J. Černocky, “Strategies for training large scale neural network language models,” in Proceedings of IEEE Workshop on Automatic Speech Recognition and Understanding (IEEE, 2011), pp.196–201.

Mohamed, A.-r.

G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, and B. Kingsbury, “Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups,” IEEE Signal Process. Mag. 29(6), 82–97 (2012).
[Crossref]

Molina-Mendoza, A. J.

L. Mennel, J. Symonowicz, S. Wachter, D. K. Polyushkin, A. J. Molina-Mendoza, and T. Mueller, “Ultrafast machine vision with 2D material neural network image sensors,” Nature 579(7797), 62–66 (2020).
[Crossref]

Morris, Q.

H. Y. Xiong, B. Alipanahi, L. J. Lee, H. Bretschneider, D. Merico, R. K. C. Yuen, Y. Hua, S. Gueroussov, H. S. Najafabadi, T. R. Hughes, Q. Morris, Y. Barash, A. R. Krainer, N. Jojic, S. W. Scherer, B. J. Blencowe, and B. J. Frey, “The human splicing code reveals new insights into the genetic determinants of disease,” Science 347(6218), 1254806 (2015).
[Crossref]

Mueller, T.

L. Mennel, J. Symonowicz, S. Wachter, D. K. Polyushkin, A. J. Molina-Mendoza, and T. Mueller, “Ultrafast machine vision with 2D material neural network image sensors,” Nature 579(7797), 62–66 (2020).
[Crossref]

Mutlu, O.

O. Mutlu, S. Ghose, J. Gómez-Luna, and R. Ausavarungnirun, “Processing data where it makes sense: enabling in-memory computation,” Microprocessors and Microsystems 67, 28–41 (2019).
[Crossref]

Najafabadi, H. S.

H. Y. Xiong, B. Alipanahi, L. J. Lee, H. Bretschneider, D. Merico, R. K. C. Yuen, Y. Hua, S. Gueroussov, H. S. Najafabadi, T. R. Hughes, Q. Morris, Y. Barash, A. R. Krainer, N. Jojic, S. W. Scherer, B. J. Blencowe, and B. J. Frey, “The human splicing code reveals new insights into the genetic determinants of disease,” Science 347(6218), 1254806 (2015).
[Crossref]

Ngiam, J.

Y. Huang, Y. Cheng, A. Bapna, O. Firat, D. Chen, M. Chen, H. Lee, J. Ngiam, Q. V. Le, Y. Wu, and Z. Chen, “Gpipe: Efficient training of giant neural networks using pipeline parallelism,” Advances in Neural Information Processing Systems 32, 103–112 (2019).

Nguyen, P.

G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, and B. Kingsbury, “Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups,” IEEE Signal Process. Mag. 29(6), 82–97 (2012).
[Crossref]

Ohta, J.

K. Kyuma, E. Lange, J. Ohta, A. Hermanns, B. Banish, and M. Oita, “Artificial retinas—fast, versatile image processors,” Nature 372(6502), 197–198 (1994).
[Crossref]

Oita, M.

K. Kyuma, E. Lange, J. Ohta, A. Hermanns, B. Banish, and M. Oita, “Artificial retinas—fast, versatile image processors,” Nature 372(6502), 197–198 (1994).
[Crossref]

Ozcan, A.

X. Lin, Y. Rivenson, N. T. Yardimci, M. Veli, Y. Luo, M. Jarrahi, and A. Ozcan, “All-optical machine learning using diffractive deep neural networks,” Science 361(6406), 1004–1008 (2018).
[Crossref]

Panda, P.

K. Roy, A. Jaiswal, and P. Panda, “Towards spike-based machine intelligence with neuromorphic computing,” Nature 575(7784), 607–617 (2019).
[Crossref]

Passerone, R.

N. Cottini, M. Gottardi, N. Massari, R. Passerone, and Z. Smilansky, “A 33 μw 64×64 pixel vision sensor embedding robust dynamic background subtraction for event detection and scene interpretation,” IEEE J. Solid-State Circuits 48(3), 850–863 (2013).
[Crossref]

Polyushkin, D. K.

L. Mennel, J. Symonowicz, S. Wachter, D. K. Polyushkin, A. J. Molina-Mendoza, and T. Mueller, “Ultrafast machine vision with 2D material neural network image sensors,” Nature 579(7797), 62–66 (2020).
[Crossref]

Posch, C.

C. Posch, T. Serrano-Gotarredona, B. Linares-Barranco, and T. Delbruck, “Retinomorphic event-based vision sensors: bioinspired cameras with spiking output,” Proc. IEEE 102(10), 1470–1484 (2014).
[Crossref]

Povey, D.

T. Mikolov, A. Deoras, D. Povey, L. Burget, and J. Černocky, “Strategies for training large scale neural network language models,” in Proceedings of IEEE Workshop on Automatic Speech Recognition and Understanding (IEEE, 2011), pp.196–201.

Prabhu, M.

Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljačić, “Deep learning with coherent nanophotonic circuits,” Nat. Photonics 11(7), 441–446 (2017).
[Crossref]

Qian, Z.

R. Lin, X. Liu, G. Zhou, Z. Qian, X. Cui, and P. Tian, “InGaN micro-LED array enabled advanced underwater wireless optical communication and underwater charging,” Adv. Opt. Mater. 9(12), 2002211 (2021).
[Crossref]

X. Liu, R. Lin, H. Chen, S. Zhang, Z. Qian, G. Zhou, X. Chen, X. Zhou, L. Zheng, R. Liu, and P. Tian, “High-bandwidth InGaN self-powered detector arrays toward MIMO visible light communication based on micro-LED arrays,” ACS Photonics 6(12), 3186–3195 (2019).
[Crossref]

Qin, F.

K. Fu, X. Gao, F. Qin, Y. Song, L. Wang, Z. Ye, Y. Su, H. Zhu, X. Ji, and Y. Wang, “Simultaneous illumination-imaging,” Adv. Mater. Technol. 6(8), 2100227 (2021).
[Crossref]

Y. Wang, X. Gao, K. Fu, F. Qin, H. Zhu, Y. Liu, and H. Amano, “Single-chip imaging system that simultaneously transmits light,” Appl. Phys. Express 13(10), 101002 (2020).
[Crossref]

Qiu, Z.-J.

X. Liu, P. Tian, Z. Wei, S. Yi, Y. Huang, X. Zhou, Z.-J. Qiu, L. Hu, Z. Fang, C. Cong, L. Zheng, and R. Liu, “Gbps long-distance real-time visible light communications using a high-bandwidth GaN-based micro-LED,” IEEE Photonics J. 9(6), 7204909 (2017).
[Crossref]

Ren, S.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2016), pp. 770–778.

Richardson, E.

Rivenson, Y.

X. Lin, Y. Rivenson, N. T. Yardimci, M. Veli, Y. Luo, M. Jarrahi, and A. Ozcan, “All-optical machine learning using diffractive deep neural networks,” Science 361(6406), 1004–1008 (2018).
[Crossref]

Roy, K.

K. Roy, A. Jaiswal, and P. Panda, “Towards spike-based machine intelligence with neuromorphic computing,” Nature 575(7784), 607–617 (2019).
[Crossref]

Rumelhart, D. E.

D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature 323(6088), 533–536 (1986).
[Crossref]

Sainath, T. N.

G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, and B. Kingsbury, “Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups,” IEEE Signal Process. Mag. 29(6), 82–97 (2012).
[Crossref]

Sakata, S.

N. McAlinden, E. Gu, M. D. Dawson, S. Sakata, and K. Mathieson, “Optogenetic activation of neocortical neurons in vivo with a sapphire-based micro-scale LED probe,” Front. Neural Circuits 9, 25 (2015).
[Crossref]

N. McAlinden, D. Massoubre, E. Richardson, E. Gu, S. Sakata, M. D. Dawson, and K. Mathieson, “Thermal and optical characterization of micro-LED probes for in vivo optogenetic neural stimulation,” Opt. Lett. 38(6), 992–994 (2013).
[Crossref]

Scherer, S. W.

H. Y. Xiong, B. Alipanahi, L. J. Lee, H. Bretschneider, D. Merico, R. K. C. Yuen, Y. Hua, S. Gueroussov, H. S. Najafabadi, T. R. Hughes, Q. Morris, Y. Barash, A. R. Krainer, N. Jojic, S. W. Scherer, B. J. Blencowe, and B. J. Frey, “The human splicing code reveals new insights into the genetic determinants of disease,” Science 347(6218), 1254806 (2015).
[Crossref]

Sebastian, A.

A. Sebastian, M. L. Gallo, R. Khaddam-Aljameh, and E. Eleftheriou, “Memory devices and applications for in-memory computing,” Nat. Nanotechnol. 15(7), 529–544 (2020).
[Crossref]

Senior, A.

G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, and B. Kingsbury, “Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups,” IEEE Signal Process. Mag. 29(6), 82–97 (2012).
[Crossref]

Serrano-Gotarredona, T.

C. Posch, T. Serrano-Gotarredona, B. Linares-Barranco, and T. Delbruck, “Retinomorphic event-based vision sensors: bioinspired cameras with spiking output,” Proc. IEEE 102(10), 1470–1484 (2014).
[Crossref]

Seung, H. S.

M. Helmstaedter, K. L. Briggman, S. C. Turaga, V. Jain, H. S. Seung, and W. Denk, “Connectomic reconstruction of the inner plexiform layer in the mouse retina,” Nature 500(7461), 168–174 (2013).
[Crossref]

Shen, L.

J. Hu, L. Shen, and G. Sun, “Squeeze-and-excitation networks,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2018), pp. 7132–7141.

Shen, Y.

ACS Appl. Mater. Interfaces (1)

J. M. Wu and W. E. Chang, “Ultrahigh responsivity and external quantum efficiency of an ultraviolet-light photodetector based on a single VO2 microwire,” ACS Appl. Mater. Interfaces 6(16), 14286–14292 (2014).
[Crossref]

ACS Photonics (1)

X. Liu, R. Lin, H. Chen, S. Zhang, Z. Qian, G. Zhou, X. Chen, X. Zhou, L. Zheng, R. Liu, and P. Tian, “High-bandwidth InGaN self-powered detector arrays toward MIMO visible light communication based on micro-LED arrays,” ACS Photonics 6(12), 3186–3195 (2019).
[Crossref]

Adv. Mater. Technol. (1)

K. Fu, X. Gao, F. Qin, Y. Song, L. Wang, Z. Ye, Y. Su, H. Zhu, X. Ji, and Y. Wang, “Simultaneous illumination-imaging,” Adv. Mater. Technol. 6(8), 2100227 (2021).
[Crossref]

Adv. Opt. Mater. (1)

R. Lin, X. Liu, G. Zhou, Z. Qian, X. Cui, and P. Tian, “InGaN micro-LED array enabled advanced underwater wireless optical communication and underwater charging,” Adv. Opt. Mater. 9(12), 2002211 (2021).
[Crossref]

Advances in Neural Information Processing Systems (1)

Y. Huang, Y. Cheng, A. Bapna, O. Firat, D. Chen, M. Chen, H. Lee, J. Ngiam, Q. V. Le, Y. Wu, and Z. Chen, “Gpipe: Efficient training of giant neural networks using pipeline parallelism,” Advances in Neural Information Processing Systems 32, 103–112 (2019).

Appl. Phys. Express (1)

Y. Wang, X. Gao, K. Fu, F. Qin, H. Zhu, Y. Liu, and H. Amano, “Single-chip imaging system that simultaneously transmits light,” Appl. Phys. Express 13(10), 101002 (2020).
[Crossref]

Appl. Sci. (1)

T. Wu, C.-W. Sher, Y. Lin, C.-F. Lee, S. Liang, Y. Lu, S.-W. H. Chen, W. Guo, H.-C. Kuo, and Z. Chen, “Mini-LED and micro-LED: Promising candidates for the next generation display technology,” Appl. Sci. 8(9), 1557 (2018).
[Crossref]

Bioinformatics (1)

M. K. K. Leung, H. Y. Xiong, L. J. Lee, and B. J. Frey, “Deep learning of the tissue-regulated splicing code,” Bioinformatics 30(12), i121–i129 (2014).
[Crossref]

Commun. ACM (1)

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” Commun. ACM 60(6), 84–90 (2017).
[Crossref]

Front. Neural Circuits (1)

N. McAlinden, E. Gu, M. D. Dawson, S. Sakata, and K. Mathieson, “Optogenetic activation of neocortical neurons in vivo with a sapphire-based micro-scale LED probe,” Front. Neural Circuits 9, 25 (2015).
[Crossref]

IEEE J. Solid-State Circuits (1)

N. Cottini, M. Gottardi, N. Massari, R. Passerone, and Z. Smilansky, “A 33 μw 64×64 pixel vision sensor embedding robust dynamic background subtraction for event detection and scene interpretation,” IEEE J. Solid-State Circuits 48(3), 850–863 (2013).
[Crossref]

IEEE Photonics J. (1)

X. Liu, P. Tian, Z. Wei, S. Yi, Y. Huang, X. Zhou, Z.-J. Qiu, L. Hu, Z. Fang, C. Cong, L. Zheng, and R. Liu, “Gbps long-distance real-time visible light communications using a high-bandwidth GaN-based micro-LED,” IEEE Photonics J. 9(6), 7204909 (2017).
[Crossref]

IEEE Photonics Technol. Lett. (1)

E. Xie, R. Bian, X. He, M. S. Islim, C. Chen, J. J. McKendry, E. Gu, H. Haas, and M. D. Dawson, “Over 10 Gbps VLC for long-distance applications using a GaN-based series-biased micro-LED array,” IEEE Photonics Technol. Lett. 32(9), 499–502 (2020).
[Crossref]

IEEE Signal Process. Mag. (1)

G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, and B. Kingsbury, “Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups,” IEEE Signal Process. Mag. 29(6), 82–97 (2012).
[Crossref]

J. Chem. Inf. Model. (1)

J. Ma, R. P. Sheridan, A. Liaw, G. E. Dahl, and V. Svetnik, “Deep neural nets as a method for quantitative structure–activity relationships,” J. Chem. Inf. Model. 55(2), 263–274 (2015).
[Crossref]

J. Lightwave Technol. (1)

Light: Sci. Appl. (1)

Y. Huang, E.-L. Hsiang, M.-Y. Deng, and S.-T. Wu, “Mini-LED, micro-LED and OLED displays: present status and future perspectives,” Light: Sci. Appl. 9(1), 105 (2020).
[Crossref]

Microprocessors and Microsystems (1)

O. Mutlu, S. Ghose, J. Gómez-Luna, and R. Ausavarungnirun, “Processing data where it makes sense: enabling in-memory computation,” Microprocessors and Microsystems 67, 28–41 (2019).
[Crossref]

Nat. Nanotechnol. (2)

A. Sebastian, M. L. Gallo, R. Khaddam-Aljameh, and E. Eleftheriou, “Memory devices and applications for in-memory computing,” Nat. Nanotechnol. 15(7), 529–544 (2020).
[Crossref]

F. Zhou, Z. Zhou, J. Chen, T. H. Choy, J. Wang, N. Zhang, Z. Lin, S. Yu, J. Kang, H.-S. P. Wong, and Y. Chai, “Optoelectronic resistive random access memory for neuromorphic vision sensors,” Nat. Nanotechnol. 14(8), 776–782 (2019).
[Crossref]

Nat. Photonics (1)

Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljačić, “Deep learning with coherent nanophotonic circuits,” Nat. Photonics 11(7), 441–446 (2017).
[Crossref]

Nature (6)

L. Mennel, J. Symonowicz, S. Wachter, D. K. Polyushkin, A. J. Molina-Mendoza, and T. Mueller, “Ultrafast machine vision with 2D material neural network image sensors,” Nature 579(7797), 62–66 (2020).
[Crossref]

K. Roy, A. Jaiswal, and P. Panda, “Towards spike-based machine intelligence with neuromorphic computing,” Nature 575(7784), 607–617 (2019).
[Crossref]

Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521(7553), 436–444 (2015).
[Crossref]

K. Kyuma, E. Lange, J. Ohta, A. Hermanns, B. Banish, and M. Oita, “Artificial retinas—fast, versatile image processors,” Nature 372(6502), 197–198 (1994).
[Crossref]

M. Helmstaedter, K. L. Briggman, S. C. Turaga, V. Jain, H. S. Seung, and W. Denk, “Connectomic reconstruction of the inner plexiform layer in the mouse retina,” Nature 500(7461), 168–174 (2013).
[Crossref]

D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature 323(6088), 533–536 (1986).
[Crossref]

Neural Computation (1)

C. M. Bishop, “Training with noise is equivalent to Tikhonov regularization,” Neural Computation 7(1), 108–116 (1995).
[Crossref]

Neural Networks (2)

C. A. Mead and M. A. Mahowald, “A silicon model of early visual processing,” Neural Networks 1(1), 91–97 (1988).
[Crossref]

K. Hornik, “Approximation capabilities of multilayer feedforward networks,” Neural Networks 4(2), 251–257 (1991).
[Crossref]

Opt. Lett. (1)

Proc. IEEE (1)

C. Posch, T. Serrano-Gotarredona, B. Linares-Barranco, and T. Delbruck, “Retinomorphic event-based vision sensors: bioinspired cameras with spiking output,” Proc. IEEE 102(10), 1470–1484 (2014).
[Crossref]

Prog. Quantum Electron. (1)

X. Zhou, P. Tian, C.-W. Sher, J. Wu, H. Liu, R. Liu, and H.-C. Kuo, “Growth, transfer printing and colour conversion techniques towards full-colour micro-LED display,” Prog. Quantum Electron. 71, 100263 (2020).
[Crossref]

Science (2)

X. Lin, Y. Rivenson, N. T. Yardimci, M. Veli, Y. Luo, M. Jarrahi, and A. Ozcan, “All-optical machine learning using diffractive deep neural networks,” Science 361(6406), 1004–1008 (2018).
[Crossref]

H. Y. Xiong, B. Alipanahi, L. J. Lee, H. Bretschneider, D. Merico, R. K. C. Yuen, Y. Hua, S. Gueroussov, H. S. Najafabadi, T. R. Hughes, Q. Morris, Y. Barash, A. R. Krainer, N. Jojic, S. W. Scherer, B. J. Blencowe, and B. J. Frey, “The human splicing code reveals new insights into the genetic determinants of disease,” Science 347(6218), 1254806 (2015).
[Crossref]

Other (5)

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2016), pp. 770–778.

S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He, “Aggregated residual transformations for deep neural networks,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2017), pp. 1492–1500.

J. Hu, L. Shen, and G. Sun, “Squeeze-and-excitation networks,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2018), pp. 7132–7141.

T. Mikolov, A. Deoras, D. Povey, L. Burget, and J. Černocky, “Strategies for training large scale neural network language models,” in Proceedings of IEEE Workshop on Automatic Speech Recognition and Understanding (IEEE, 2011), pp.196–201.

M. Tan and Q. Le, “Efficientnet: Rethinking model scaling for convolutional neural networks,” in Proceedings of the 36th International Conference on Machine Learning (PMLR, 2019), pp. 6105–6114.

Supplementary Material (1)

Supplement 1

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.



Figures (5)

Fig. 1. Characteristics of the micro-LED based photodetector. (a). The structure of the green micro-LED based photodetector. (b). Photocurrent versus bias voltage under different optical power densities. (c). Photocurrent versus power density at different bias voltages; inset: the portion of the curves from 0 to 0.3 W cm⁻². (d). The pulse response of the micro-LED based photodetector at different bias voltages.
Fig. 2. Schematic diagram of the imaging ANN micro-LED array. (a). Illustration of the micro-LED based photodetector array. Each micro-LED works as a subpixel, and subpixels of the same color are connected in parallel to provide the photocurrent output for recognition. (b). The architecture of the classifier, in which the input layer and fully connected layer are implemented on-chip in the imaging sensor; the softmax activation is computed off-chip. (c). The four letters ‘N’, ‘T’, ‘V’, and ‘X’ in projected images with different Gaussian noise levels: no noise and crosstalk factors σ = 0.1, 0.2, and 0.3, respectively.
Fig. 3. Results of the micro-LED based ANN device during training, simulated with the 3×3 array. (a). Image recognition accuracy for varying random noise levels. (b). Loss under different noise levels during training. (c). Bias voltage distributions of the device at noise levels of 0.1 and 0.3. The three subgraphs, from top to bottom, show the number of subpixels at initialization, at epoch 20, and at the final epoch, respectively.
Fig. 4. Average photocurrent output for each epoch at a noise level of 0.3. The four graphs correspond to the letters ‘N’, ‘T’, ‘V’, and ‘X’, respectively. The recognition result is determined by the maximum output photocurrent.
Fig. 5. Results of the micro-LED based ANN device simulated with the 28×28 array. (a). Accuracy versus the proportion of failed subpixels; inset: the corresponding loss. (b). Accuracy versus the standard deviation; inset: the corresponding loss. The inconsistency among the micro-LED based photodetectors is assumed to follow a Gaussian distribution with the given standard deviation.
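The parallel photocurrent summation described in the figure captions, where each subpixel acts as a synaptic weight tuned by its bias voltage and same-class subpixels sum their currents in parallel, can be sketched in a few lines of NumPy. Note this is an illustrative simulation only: the linear bias-to-weight model, the array size, and all names here are assumptions for the sketch, not the authors' device parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

M = 4        # number of classes ('N', 'T', 'V', 'X')
pixels = 9   # 3x3 projected image, flattened

# Illustrative assumption: each subpixel's responsivity varies linearly
# with its bias voltage, so the bias voltages act as trainable weights.
bias = rng.uniform(-1.0, 1.0, size=(pixels, M))  # one subpixel per (pixel, class)

def photocurrents(image, bias, noise_sigma=0.0):
    """Sum the per-subpixel photocurrents of each class in parallel."""
    noisy = image + noise_sigma * rng.standard_normal(image.shape)
    return noisy @ bias  # parallel (Kirchhoff) summation per class

# A 3x3 rendering of the letter 'N', flattened row by row.
letter_N = np.array([1, 0, 1, 1, 1, 1, 1, 0, 1], dtype=float)
I = photocurrents(letter_N, bias, noise_sigma=0.1)
print(I.shape)  # (4,): one summed photocurrent per class
```

The class with the maximum summed photocurrent is then taken as the recognition result, as in Fig. 4.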

Equations (2)

$$O_n(I) = \frac{e^{\alpha I_n}}{\sum_{k=1}^{M} e^{\alpha I_k}},$$

$$\mathrm{Loss} = \frac{1}{s}\sum_{n=1}^{s} L_n = -\frac{1}{s}\sum_{n=1}^{s}\sum_{i=1}^{M} y_{ni}\log\left[O_{ni}(I)\right],$$
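The two equations above, a softmax over the summed photocurrents I and the averaged cross-entropy loss against one-hot labels y, can be sketched in NumPy as follows (the value of the scaling factor α and the sample values are illustrative):

```python
import numpy as np

def softmax_output(I, alpha=1.0):
    # O_n(I) = exp(alpha * I_n) / sum_k exp(alpha * I_k)
    z = alpha * np.asarray(I, dtype=float)
    z -= z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def cross_entropy_loss(I_batch, y_batch, alpha=1.0):
    # Loss = -(1/s) * sum_n sum_i y_ni * log O_ni(I)
    s = len(I_batch)
    return -sum(
        float(y @ np.log(softmax_output(I, alpha)))
        for I, y in zip(I_batch, y_batch)
    ) / s

# Single-sample example: photocurrents for the four classes,
# one-hot label marking the first class ('N') as correct.
I = [np.array([2.0, 0.5, 0.1, 0.3])]
y = [np.array([1.0, 0.0, 0.0, 0.0])]
print(cross_entropy_loss(I, y))
```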
