Abstract
Visible light positioning (VLP) is a promising technology since it can provide high-accuracy indoor localization based on the existing lighting infrastructure. However, existing approaches often require dense LED distributions and a persistent line-of-sight (LOS) between transmitter and receiver. Moreover, sensors are imperfect, and their measurements are prone to errors. Through multi-sensor fusion, we can compensate for the deficiencies of stand-alone sensors and provide more reliable pose estimation. In this work, we propose a loosely-coupled multi-sensor fusion method based on VLP and Simultaneous Localization and Mapping (SLAM), using light detection and ranging (LiDAR), odometry, and a rolling-shutter camera. Our multi-sensor localizer can provide accurate and robust robot localization and navigation in LED shortage/outage situations. The experimental results show that our proposed scheme achieves an average accuracy of 2.5 cm with around 42 ms average positioning latency.
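To illustrate the loosely-coupled idea described above, the sketch below shows a minimal per-axis Kalman-style step in which odometry propagates the robot pose and an absolute VLP position fix corrects it when an LED is in view. This is an illustrative assumption, not the paper's implementation; all function names and noise values are hypothetical.

```python
# Hypothetical loosely-coupled fusion sketch: odometry predicts,
# a VLP position fix corrects. Not the authors' actual pipeline.

def predict(pose, var, odom_delta, odom_var):
    """Propagate the pose with an odometry increment; uncertainty grows."""
    x, y = pose
    dx, dy = odom_delta
    return (x + dx, y + dy), var + odom_var

def update(pose, var, vlp_fix, vlp_var):
    """Correct the predicted pose with an absolute VLP position fix."""
    k = var / (var + vlp_var)  # scalar Kalman gain, applied per axis
    x, y = pose
    zx, zy = vlp_fix
    fused = (x + k * (zx - x), y + k * (zy - y))
    return fused, (1.0 - k) * var

pose, var = (0.0, 0.0), 0.01
pose, var = predict(pose, var, (1.0, 0.5), 0.02)   # odometry step
pose, var = update(pose, var, (1.02, 0.48), 0.01)  # LED visible: fuse VLP fix
print(pose)  # pose pulled toward the VLP fix, weighted by uncertainty
```

When no LED is visible (shortage/outage), the update step is simply skipped and the SLAM/odometry prediction carries the pose, which mirrors the robustness claim in the abstract.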