Abstract
We propose a polarimetric image processing method based on feature fusion and apply it to the task of target detection. Four images with distinct polarization orientations serve as one parallel input and are fused into a single feature map carrying richer feature information. We designed a learned feature fusion method using convolutional neural networks (CNNs), with the fusion strategy derived from training. We also generated a dataset comprising original images, four polarization-orientation images, ground-truth masks, and bounding boxes. We compared the effectiveness of our method against conventional deep learning methods. Experimental results show that our method achieves a mean average precision (mAP) of 0.80 and a miss rate (MR) of 0.09, both better than those of the conventional deep learning methods.
© 2021 Optical Society of America
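The core idea of fusing four polarization-orientation images into one feature map can be illustrated with a minimal sketch. This is not the authors' implementation: the abstract states only that the fusion weights are learned by a CNN during training, so the function below stands in for one learned 1x1-convolution fusion step, with placeholder weights in place of trained ones. All names (`fuse_polarization`, `weights`) are hypothetical.

```python
import numpy as np

def fuse_polarization(images, weights):
    """Fuse four polarization-orientation images of shape (H, W) into one
    feature map via a weighted sum, i.e. a 1x1 convolution across the four
    orientation channels. In the paper's setting, `weights` would be learned
    by the CNN during training; here they are placeholders."""
    stack = np.stack(images, axis=0)               # (4, H, W) channel stack
    fused = np.tensordot(weights, stack, axes=1)   # weighted sum -> (H, W)
    return fused

rng = np.random.default_rng(0)
# Stand-ins for images at four polarization orientations, e.g. 0°, 45°, 90°, 135°
imgs = [rng.random((8, 8)) for _ in range(4)]
w = rng.random(4)
w /= w.sum()                                       # placeholder "learned" weights
fused = fuse_polarization(imgs, w)
print(fused.shape)                                 # (8, 8)
```

In the actual method the fusion would operate on multi-channel CNN feature maps rather than raw intensity images, but the channel-weighting principle is the same.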