Abstract
As an optics-based classifier realized as a physical neural network, a standalone diffractive deep neural network (${\rm D}^2{\rm NN}$) can learn a single-view spatial feature mapping between input light fields and ground-truth labels by training on a large number of samples. However, its classification accuracy on three-dimensional (3D) targets remains unsatisfactory, because much of the effective light-field information from other views is lost. This Letter presents a multiple-view ${\rm D}^2{\rm NN}$ array (MDA) scheme that delivers a significant inference improvement over an individual ${\rm D}^2{\rm NN}$ or Res-${\rm D}^2{\rm NN}$ by constructing a complementary mechanism among distinct views and merging all view-specific base learners on an electronic computer. Furthermore, a robust multiple-view ${\rm D}^2{\rm NN}$ array (r-MDA) framework is demonstrated that resists the redundant spatial features of invalid light fields caused by severe optical disturbances.
© 2021 Optical Society of America
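The abstract describes merging the outputs of several view-specific base learners on an electronic computer. A minimal sketch of such a fusion step is shown below; the specific fusion rule (simple score averaging) and the function names are illustrative assumptions, not the Letter's actual merging mechanism.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def merge_views(view_logits):
    """Fuse per-view classifier scores on the electronic side.

    view_logits: list of (n_samples, n_classes) arrays, one per
    D2NN view. Averages the softmax scores of all base learners
    and returns the fused class predictions. (Score averaging is
    an assumed fusion rule for illustration only.)
    """
    scores = np.mean([softmax(z) for z in view_logits], axis=0)
    return scores.argmax(axis=-1)

# Toy example: three views, two samples, four classes.
rng = np.random.default_rng(0)
views = [rng.normal(size=(2, 4)) for _ in range(3)]
preds = merge_views(views)
```

In this sketch each array in `views` stands in for the optical output intensities of one ${\rm D}^2{\rm NN}$ in the array; only the final merging happens electronically.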