Since their breakthrough around a decade ago, neural networks have electrified and transformed computer science, engineering, the natural sciences and the public. Suddenly, new computational tasks became solvable by breaking away from classical programming: instead of arranging precise instructions in a premeditated sequence, the computation is decomposed into many very simple nonlinear transformations performed by neurons, weighted according to a staggering number of network connections. Programming such a machine means painstakingly fine-tuning all of these parameters by iteratively measuring their gradients towards the correct result, which is typically known from example data; fittingly, this step is often called teaching.
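As a minimal numerical sketch of this teaching step, consider a single layer of tanh neurons fitted to example data by repeatedly nudging every weight along its measured gradient; all names, data and hyperparameters below are illustrative assumptions, not taken from Zhou et al.

```python
import numpy as np

# Illustrative example data: inputs and the correct results to be learned.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = np.sin(X.sum(axis=1, keepdims=True))

# The parameters (the "network connections") that teaching must fine-tune.
W = rng.normal(scale=0.1, size=(4, 1))

for step in range(1000):
    pred = np.tanh(X @ W)                          # simple nonlinear neuron layer
    err = pred - y                                 # distance from the correct result
    grad = X.T @ (err * (1 - pred ** 2)) / len(X)  # gradient of the squared error
    W -= 0.5 * grad                                # nudge each weight along its gradient
```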
Unfortunately, this fundamentally different concept creates very real challenges. The first is to build hardware that efficiently supports all of these transformations and coefficients, and here inherently parallel photonics makes a compelling case. The second is to teach such computers efficiently, and Zhou et al. have suggested a novel approach that lets optics do (almost) all the work (almost) for free. Instead of determining each gradient individually, their concept of in situ back-propagation uses optical propagation itself to accumulate the gradients as they build up at the different stages of a neural network. This promises a significant simplification and shows that optics can be very helpful not only for building neural networks, but also for teaching them.
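To see what accumulating gradients at different stages means, the digital version of back-propagation is sketched below: an error signal travels backwards through the layers, and at each stage the local gradient is read off from the forward and backward signals already present there. Zhou et al.'s contribution is to let optical propagation perform this backward pass physically; the plain-Python analogue here (a hypothetical two-layer tanh network, illustrative names throughout) only shows the stage-by-stage accumulation itself.

```python
import numpy as np

def forward(x, weights):
    """Forward pass, keeping the signal at every stage of the network."""
    activations = [x]
    for W in weights:
        activations.append(np.tanh(activations[-1] @ W))
    return activations

def backward(activations, weights, target):
    """Propagate the output error backwards, collecting one gradient per stage."""
    grads = [None] * len(weights)
    delta = activations[-1] - target                   # backward signal at the output
    for i in reversed(range(len(weights))):
        delta = delta * (1 - activations[i + 1] ** 2)  # back through the nonlinearity
        grads[i] = activations[i].T @ delta            # local gradient at this stage
        delta = delta @ weights[i].T                   # error, one stage further back
    return grads

# One teaching step on illustrative data.
rng = np.random.default_rng(1)
weights = [rng.normal(scale=0.1, size=s) for s in [(4, 8), (8, 1)]]
x = rng.normal(size=(32, 4))
target = np.sin(x.sum(axis=1, keepdims=True))
acts = forward(x, weights)
for W, g in zip(weights, backward(acts, weights, target)):
    W -= 0.1 * g
```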