Abstract
Unitary learning is a backpropagation (BP) method for updating unitary weights in fully connected deep complex-valued neural networks, and it satisfies the unitary prior of an active-modulation diffractive deep neural network. However, because the unitary weights in each layer form square matrices, their learning amounts to small-sample training, which yields a nearly useless network with poor generalization capability. To alleviate this severe over-fitting problem, in this Letter, optical random-phase dropout is formulated and designed. The equivalence between the unitary forward pass and a diffractive network yields a synthetic mask that seamlessly compounds a computational modulation with a random sampling comb, the dropout. The zero positions of the dropout comb, sampled from a Bernoulli distribution, are filled with random phases that slightly deflect part of the transmitted optical rays at each output end, generating statistical inference networks. The improved generalization stems from the fact that massively parallel full connections over different optical links participate in the training. The random-phase comb enters unitary BP in conjugated form, which underscores the significance of optical BP.
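The random-phase dropout described above can be illustrated numerically. The following is a minimal NumPy sketch, not the authors' implementation: it assumes a hypothetical `random_phase_dropout` helper in which a Bernoulli comb selects "dropped" positions and, instead of zeroing them, fills them with unit-modulus random phases applied multiplicatively to a complex optical field.

```python
import numpy as np

def random_phase_dropout(field, drop_prob=0.2, seed=None):
    """Apply a Bernoulli-sampled random-phase dropout comb to a complex field.

    Kept positions are multiplied by 1; dropped positions (probability
    `drop_prob`) are multiplied by exp(i*phi) with phi uniform in [0, 2*pi),
    so the mask is unit-modulus everywhere and only deflects phase.
    (Hypothetical illustration, not the paper's exact formulation.)
    """
    rng = np.random.default_rng(seed)
    dropped = rng.random(field.shape) < drop_prob        # Bernoulli comb
    rand_phase = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, field.shape))
    mask = np.where(dropped, rand_phase, 1.0 + 0.0j)     # synthetic mask
    return field * mask, mask

# Example: a uniform complex field passing through the dropout comb
field = np.ones((8, 8), dtype=complex)
out, mask = random_phase_dropout(field, drop_prob=0.3, seed=0)
```

Because the mask has unit modulus at every position, the field's intensity is preserved; only the phase of the dropped positions is randomized, which is what lets each forward pass sample a different statistical inference network.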
© 2021 Optical Society of America