Optica Publishing Group

Real-time visible light positioning supporting fast moving speed

Open Access

Abstract

Real-time capability is a key factor that significantly affects the practicality of an indoor positioning system. While visible light positioning (VLP) has been widely studied since it can provide indoor positioning functionality alongside LED illumination, existing VLP systems still suffer from high positioning latency and are impractical for mobile units moving at high speed. In this paper, a real-time VLP system with low positioning latency and high positioning accuracy is proposed. With a lightweight image processing algorithm, the proposed system can be implemented on a low-cost embedded system and support real-time, accurate indoor positioning for fast-moving mobile units. Experimental results show that the proposed system, implemented on a Raspberry Pi, achieves a positioning accuracy of 3.93 cm and supports moving speeds up to 38.5 km/h.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Indoor positioning is critical for indoor navigation services, especially given the increasing demand from automatic guided vehicles (AGVs), indoor robots, indoor location-based services (LBS), and autonomous vehicles in tunnels. However, the Global Positioning System (GPS) cannot be directly applied to indoor environments, since the satellite signals may be blocked by the exterior walls of buildings [1], while traditional radio-based indoor positioning technologies, such as Bluetooth [2], Wi-Fi [3], radio-frequency identification (RFID) [4] and ultra-wideband (UWB), still suffer from low accuracy, high latency, electromagnetic interference or high hardware cost.

Based on LED and visible light communication technologies, visible light positioning (VLP) has attracted growing attention due to its advantages, including high positioning accuracy, low cost and the dual functionality of illumination and positioning. Depending on the device used to receive the light signal, current VLP technologies can be divided into two categories: image sensor (IS) based VLP and photodiode (PD) based VLP. Since PD-based VLP systems are sensitive to the direction of the light and their mobility can be limited [5], IS-based VLP is more suitable for moving objects in indoor environments. Furthermore, with the widespread use of complementary metal-oxide-semiconductor (CMOS) sensor cameras, IS-based VLP is easier to implement on, or integrate with, current mobile terminals and mobile equipment.

Several IS-based VLP systems with high positioning accuracy have been proposed [6–10]. However, the real-time performance of these systems is limited by the high computational latency of image processing or the communication latency of transmitting image data for server-assisted computation. As a result, the application of VLP systems in practical scenarios can be significantly limited, since high latency brings collision risks for fast-moving mobile objects. Guan et al. have proposed several IS-based VLP systems using industrial cameras, which achieve high positioning accuracy with a positioning latency below 80 ms [6,7]. However, these systems rely on the computational power of a desktop with a quad-core i7-7700HQ CPU, and the communication latency for image data transmission is not counted. Recently, IS-based VLP systems using smartphone cameras have been proposed [8–10]. While these systems also achieve positioning errors of less than 10 cm, their positioning latency is not considered, and they likewise need to transmit image data to a desktop or server for VLP computation. We previously proposed a high-speed VLP system using commercial smartphones [11], which supports moving speeds up to 18 km/h, but it relies on the smartphone's processor and is limited to personal applications.

In this paper, an IS-based VLP system supporting fast moving speeds is proposed, targeting industrial applications and mobile units such as mobile robots, AGVs and vehicles moving in indoor environments. Utilizing a pixel intensity sampling based lightweight image processing algorithm to further reduce the computational time, the proposed system supports real-time, accurate indoor positioning for fast-moving mobile units. Moreover, due to its low computational complexity, the positioning algorithm can be implemented on low-cost embedded platforms such as the Raspberry Pi. Since the proposed system does not require the assistance of a server, there is no communication latency. Experimental results show that the proposed system achieves an average positioning accuracy of 3.93 cm and an average positioning time of 44.3 ms, which supports real-time indoor navigation for mobile units moving at speeds up to about 38.5 km/h.

2. Design of the proposed IS-based VLP system

In principle, an LED-based VLP system is an “indoor GPS”-like positioning system. As shown in Fig. 1, the LED lamps work as VLP information transmitters, like “indoor GPS satellites”, while the image sensor on the mobile equipment captures the VLP light signal, acting as an “indoor GPS receiver”. When the mobile equipment collects enough VLP information via the image sensor, it can calculate its accurate position by solving the equations used in Refs. [11,12].

Fig. 1. The architecture of the proposed IS-based VLP system.

2.1 VLP information transmitter using LED lamps

As in our previous work [11], LED lamps in this system are driven by VLP modulators to illuminate and broadcast their positioning signals simultaneously. Each VLP modulator is assigned a unique identifier (UID) at manufacturing time. When an LED lamp with a VLP modulator is installed, the UID is associated with the coordinates of the installed lamp in the database of the positioning system. The micro-controller inside the VLP modulator encodes the UID into an interleaved two of five (ITF) codeword, which is suitable not only for optical transmission but also for flicker mitigation and dimming support, and the modulator’s driver circuit uses on-off keying intensity modulation (OOK IM) to drive the LED lamp to transmit the encoded light signals.
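The ITF encoding step can be sketched as follows. This is a minimal illustration of interleaved two of five encoding mapped onto OOK levels; the narrow/wide element widths (1 and 2 samples) and the bar-to-“ON” mapping are assumptions for illustration, not the modulator’s actual parameters.

```python
# Sketch: encode a numeric UID as an ITF (interleaved 2 of 5) OOK bit pattern.
# Each digit maps to 5 elements (two wide); odd-position digits become bars
# (light ON), even-position digits become the interleaved spaces (light OFF).

ITF_PATTERNS = {           # N = narrow element, W = wide element
    "0": "NNWWN", "1": "WNNNW", "2": "NWNNW", "3": "WWNNN", "4": "NNWNW",
    "5": "WNWNN", "6": "NWWNN", "7": "NNNWW", "8": "WNNWN", "9": "NWNWN",
}

def itf_encode(uid: str, narrow: int = 1, wide: int = 2) -> list:
    """Return an OOK sample list (1 = LED ON for a bar, 0 = OFF for a space)."""
    if len(uid) % 2:
        uid = "0" + uid                      # ITF needs an even digit count
    bits = []
    def emit(level, width):
        bits.extend([level] * (wide if width == "W" else narrow))
    for lvl in (1, 0, 1, 0):                 # start pattern: four narrow elements
        emit(lvl, "N")
    for i in range(0, len(uid), 2):
        bars, spaces = ITF_PATTERNS[uid[i]], ITF_PATTERNS[uid[i + 1]]
        for b, s in zip(bars, spaces):       # interleave bar/space elements
            emit(1, b)
            emit(0, s)
    emit(1, "W"); emit(0, "N"); emit(1, "N") # stop pattern: wide bar, space, bar
    return bits

print(itf_encode("42"))
```

The driver circuit would then replay such a bit list at the OOK modulation frequency to blink the lamp.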

2.2 VLP information receiver using CMOS sensor

An IS-based VLP system utilizes the rolling shutter mechanism of CMOS image sensors to receive the OOK-modulated LED light signals. Since the pixels on a rolling shutter CMOS image sensor are exposed and read out line by line, rather than all perceiving light at a single moment, an “ON” or “OFF” light signal is captured as a bright or dark fringe, respectively. Therefore, if the OOK modulation frequency is set to match the exposure time of the CMOS image sensor, one frame of OOK-modulated VLP information can be received as a fringe image, as shown in Fig. 2. The details can be found in Ref. [11].

Fig. 2. An example of VLP fringe image captured by a rolling shutter CMOS image sensor.
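The rolling-shutter capture described above can be sketched with a toy simulation: each sensor row samples the blinking LED at a slightly later instant, so runs of consecutive rows come out bright or dark. The row readout time and OOK frequency below are illustrative assumptions, not the system’s measured parameters.

```python
# Sketch: how a rolling shutter turns an OOK light signal into fringes.
# Rows are read out sequentially, so row r samples the LED at time
# r * row_readout_us; rows falling in the ON half-period become bright.

def rolling_shutter_fringes(num_rows=64, row_readout_us=30.0, ook_hz=2000.0):
    """Return one image column: 1 = bright fringe row, 0 = dark fringe row."""
    period_us = 1e6 / ook_hz
    rows = []
    for r in range(num_rows):
        t = r * row_readout_us                 # instant this row is exposed
        phase = (t % period_us) / period_us
        rows.append(1 if phase < 0.5 else 0)   # assume LED ON in first half
    return rows

col = rolling_shutter_fringes()
# Runs of identical values are the bright/dark fringes seen in Fig. 2; the
# fringe width in rows encodes the OOK symbol duration.
```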

2.3 Sampling-based lightweight image processing

As discussed in Ref. [6], the computational latency of an IS-based VLP system mainly originates from the image processing, more specifically, from obtaining the region of interest (ROI) of the LED lamp in the captured image. In Ref. [11], we previously proposed a lightweight image processing algorithm that decreases the average computational time of IS-based VLP to 22.7 ms on a OnePlus 2 smartphone, which is equipped with a 5-megapixel front-facing camera and a Qualcomm Snapdragon 810 ARM CPU (an octa-core 64-bit CPU with clock speed up to 2.0 GHz).

In the proposed system, we further reduce the computational complexity of the lightweight image processing by using pixel intensity sampling instead of the original pixel intensity integration. Our previous ROI detection algorithm integrates the intensity of every pixel in every row or column to detect the boundaries of the fringe areas in the captured image. In this paper, we propose an improved ROI detection algorithm, which does not take every pixel into computation but samples pixels at a constant interval, e.g., every 3 pixels, as illustrated in Fig. 3. In theory, the computational time can therefore be reduced to 1/9 of that of the previous algorithm. To avoid errors introduced by sampling, once the coarse boundary of an ROI is found, all the pixels near that boundary are checked again to detect the precise boundary. Since the number of pixels involved in this refinement is much smaller than the number of pixels in the original image, the computational time for detecting the precise boundary is negligible.

Fig. 3. Sampling based lightweight image processing algorithm with simplified ROI detection.
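The two-pass scheme above (coarse sampled scan, then full-resolution refinement near the coarse boundary) can be sketched as follows. The brightness threshold, sampling step and padding margin are illustrative assumptions, not the paper’s tuned values.

```python
# Sketch of sampling-based ROI detection: scan only every `step`-th row and
# column for bright pixels (~1/step^2 of the pixel reads), then refine the
# boundary by checking all pixels inside the padded coarse bounding box.
import numpy as np

def detect_roi(img, step=3, thresh=128):
    """Return (row_min, row_max, col_min, col_max) of the bright ROI, or None."""
    bright = img[::step, ::step] > thresh          # coarse pass on samples only
    rs, cs = np.nonzero(bright)
    if rs.size == 0:
        return None
    # Map coarse bounds back to full resolution, padded by one sampling step
    # so pixels missed between samples are still covered by the refinement.
    r0 = max(rs.min() * step - step, 0)
    r1 = min((rs.max() + 1) * step + step, img.shape[0])
    c0 = max(cs.min() * step - step, 0)
    c1 = min((cs.max() + 1) * step + step, img.shape[1])
    # Refinement pass: every pixel, but only inside the small coarse box.
    srs, scs = np.nonzero(img[r0:r1, c0:c1] > thresh)
    return (int(r0 + srs.min()), int(r0 + srs.max()),
            int(c0 + scs.min()), int(c0 + scs.max()))

# Toy frame: a dark 40x40 image with one bright 10x12 block as the "lamp".
frame = np.zeros((40, 40), dtype=np.uint8)
frame[12:22, 5:17] = 200
print(detect_roi(frame))  # -> (12, 21, 5, 16)
```

The refinement touches only the padded coarse box, which is why its cost stays negligible compared with a full-image scan.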

With the proposed sampling-based lightweight image processing algorithm, the positioning latency of the IS-based VLP system is greatly shortened, even on a low-cost embedded system rather than an expensive mobile terminal. Therefore, the proposed VLP system is able to provide real-time positioning, tracking the position of fast-moving mobile units in time to support safe real-time navigation.

3. Experimental results and discussion

3.1 Experimental setup

As shown in Fig. 4, the experimental area measures 181 cm × 181 cm, and 9 VLP lamps are installed on the roof of the shelf at a height of 204 cm. The VLP lamps are 17.5 cm diameter commercial LED lamps equipped with our self-designed VLP modulators, and the distance between two neighboring VLP lamps is 62 cm. A remote control car carrying a Raspberry Pi 3 Model B development kit with a 1600 × 1200 pixel CMOS image sensor is used as the mobile unit in the experiments to test the real-time VLP performance in terms of positioning accuracy and positioning speed. The processor of the Raspberry Pi 3 Model B is a quad-core 1.2 GHz Broadcom BCM2837 64-bit CPU. Two light gates are used to measure the speed of the mobile unit as it moves along the plastic track.

Fig. 4. Experimental environment and hardware.

3.2 Positioning accuracy

To evaluate the positioning accuracy of the proposed VLP system, two series of experiments were carried out. The first series tested the performance for motionless objects. As shown in Fig. 5(a), 49 spots arranged in a grid pattern inside the experimental area were selected, and the deviations between the measured positions and the actual positions were calculated. As the heat map in Fig. 5(b) shows, the average positioning error is 3.93 cm, and the maximum positioning error is 6 cm.

Fig. 5. Positioning accuracy experiment for motionless objects and the heat map of positioning errors.
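The grid-test metric reduces to the mean Euclidean distance between measured and ground-truth positions. A minimal sketch, with made-up placeholder coordinates rather than the experiment’s data:

```python
# Sketch of the accuracy metric for the 49-spot grid test: the mean Euclidean
# distance (in cm) between measured and actual 2D positions.
import math

def mean_error(measured, actual):
    """Average planar distance between paired (x, y) positions."""
    return sum(math.dist(m, a) for m, a in zip(measured, actual)) / len(measured)

# Placeholder values for illustration only.
measured = [(10.2, 20.1), (50.0, 49.5)]
actual   = [(10.0, 20.0), (50.0, 50.0)]
print(f"{mean_error(measured, actual):.3f} cm")
```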

The second series of experiments tested the performance for moving mobile units. As shown in Fig. 6, the remote control car was driven along a straight track or a rounded quadrilateral track at different speeds. In Fig. 6, the measured positions are plotted as red spots, while the tracks are plotted as black bold lines.

Fig. 6. The positioning accuracy for mobile unit moving along different tracks with different speeds.

As shown in Figs. 6(a) and 6(b), the measured real-time position of the remote control car stays close to the straight track as it moves along it. Since the x coordinate of the remote control car cannot be precisely controlled during motion, the average positioning error is calculated by comparing only the y coordinate of the measured position with that of the straight track. For Fig. 6(a), which shows the measured positions at a moving speed of 1 m/s, the average positioning error is 1.49 cm. For Fig. 6(b), where the speed is 2 m/s, the average positioning error is 1.86 cm. Note that the positioning error appears lower for moving units because the x-axis error is excluded from the calculation. Figures 6(c) and 6(d) show the measured positions when the remote control car moves along the rounded quadrilateral track. When the car turns at a rounded corner, the average positioning error increases to 5.31 cm due to the rotation of the CMOS image sensor.

Note that the measured positions are mainly distributed around two sections of the straight track (x = 50 cm ∼ 90 cm and x = 110 cm ∼ 140 cm) in Figs. 6(a) and 6(b), since the positioning algorithm requires at least two LED lamps for calculation, and a mobile unit in these two sections can capture two fringe areas with the CMOS image sensor. The experiments did not include the section around x = 90 cm ∼ 110 cm, since the CMOS image sensor cannot capture two VLP lamps simultaneously there.

3.3 Real-time positioning speed

Positioning speed is another key factor for VLP systems. It indicates how fast the mobile unit can move while still receiving VLP information and calculating its current position in time. For an IS-based VLP system, the VLP information must be captured and extracted before the mobile unit passes the VLP information transmitter, i.e., before the VLP lamp leaves the field of view (FOV) of the CMOS image sensor. As shown in Fig. 7, the physical motion of the mobile unit can be transformed into the image motion of the fringe area in the captured images [7]. Hence, the maximum supported moving speed is the speed at which the mobile unit can still extract the VLP information during the time the fringe area takes to move from the leftmost position to the rightmost. Otherwise, the mobile unit will miss the VLP transmitter and cannot obtain real-time position information.

Fig. 7. The image motion of fringe area.

The maximum supported moving speed v is then defined as v = s/t, where s represents the physical length corresponding to the FOV of the CMOS image sensor and t represents the time required for processing one image frame. There is a proportional relationship between image coordinates and world coordinates: s/r = D/d, where r represents the pixel width of the image, which is 1600 in this paper, D represents the actual diameter of the VLP lamps, and d represents the diameter of the VLP lamps in the image. As mentioned above, the actual diameter of the VLP lamps is 17.5 cm, and their diameter in the image is 590 pixels.

In our experiments, the computational time for position calculation was measured 70 times consecutively, as shown in Fig. 8. The average computational time is 44.3 ms, and therefore the maximum supported moving speed is vmax = (17.5 cm × 1600 / 590) / 44.3 ms ≈ 10.7 m/s = 38.5 km/h. Note that the computational power of the mobile device in these experiments is lower than that in Ref. [11], while the image sensor has more pixels.
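The arithmetic above can be reproduced directly from the paper’s stated values (the paper rounds to 10.7 m/s before converting, giving 38.5 km/h):

```python
# Reproducing the maximum-speed arithmetic: v = s / t, with the FOV length
# s = D * r / d from the image-to-world proportionality s/r = D/d.
D_cm = 17.5      # actual lamp diameter (cm)
r_px = 1600      # image width (pixels)
d_px = 590       # lamp diameter in the image (pixels)
t_s  = 0.0443    # average positioning time (44.3 ms)

s_cm = D_cm * r_px / d_px          # physical length covered by the FOV
v_m_s = (s_cm / 100) / t_s         # maximum supported speed in m/s
v_km_h = v_m_s * 3.6
print(f"FOV ≈ {s_cm:.1f} cm, v ≈ {v_m_s:.1f} m/s ≈ {v_km_h:.1f} km/h")
```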

Fig. 8. The measured positioning time.

In Fig. 8, some abnormal positioning-time points fluctuate from about 40 ms to 60 ms. The reason is that some captured images contain an additional, incomplete fringe area that cannot be decoded correctly; the image processing algorithm nevertheless tries to decode it, introducing about 20 ms of additional computational time. These incomplete fringe areas can be filtered out by setting a threshold on the fringe area size.
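The size-threshold fix suggested above can be sketched as a simple pre-filter on detected ROIs before decoding; the minimum width/height values are illustrative assumptions, not tuned thresholds from the system.

```python
# Sketch: discard candidate fringe areas too small to hold a complete,
# decodable fringe pattern (e.g., lamps clipped at the image edge), so the
# decoder no longer wastes ~20 ms on them.

def filter_rois(rois, min_width=300, min_height=300):
    """Keep ROIs large enough to decode; each ROI is (r_min, r_max, c_min, c_max)."""
    return [
        (r0, r1, c0, c1)
        for (r0, r1, c0, c1) in rois
        if (r1 - r0) >= min_height and (c1 - c0) >= min_width
    ]

rois = [(100, 700, 200, 790), (0, 150, 1400, 1599)]  # second ROI is clipped
print(filter_rois(rois))  # -> [(100, 700, 200, 790)]
```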

4. Conclusion

Targeting industrial applications and mobile units such as mobile robots, AGVs and vehicles moving in indoor environments, an IS-based VLP system supporting fast moving speeds is proposed. With an improved lightweight image processing algorithm that uses pixel intensity sampling instead of intensity integration for ROI detection, the computational complexity of the proposed VLP system is significantly reduced. As a result, the positioning algorithm can be implemented on a low-cost embedded system with low positioning latency. Experimental results show that the proposed system, implemented on a Raspberry Pi, achieves an average positioning accuracy of 3.93 cm and an average positioning time of 44.3 ms, supporting real-time indoor navigation for mobile units moving at speeds up to about 38.5 km/h.

Funding

National Key Research and Development Program of China (2018YFB1801900); National Natural Science Foundation of China (61771222, 61872109); Key Research and Development Program for Guangdong Province (2019B010136001); Science and Technology Project of Guangzhou (201707010253, 201803020023); Project of Guangzhou Industry Leading Talents (CXLJTD-201607); Science and Technology Project of Shenzhen (JCYJ20170815145900474, JSGG20170824163239586); National and Provincial Program Supporting Projects of Shenzhen (GJHS20170313113617970); Peng Cheng Laboratory Project of Guangdong Province (PCL2018KP004).

Disclosures

The authors declare no conflicts of interest.

References

1. Y. Gu, A. Lo, and I. Niemegeers, “A survey of indoor positioning systems for wireless personal networks,” IEEE Commun. Surv. Tutor. 11(1), 13–32 (2009).

2. R. Faragher and R. Harle, “Location fingerprinting with bluetooth low energy beacons,” IEEE J. Sel. Areas Commun. 33(11), 2418–2428 (2015).

3. Y. Zhuang, Z. Syed, Y. Li, and N. El-Sheimy, “Evaluation of two WiFi positioning systems based on autonomous crowdsourcing of handheld devices for indoor navigation,” IEEE Trans. Mob. Comput. 15(8), 1982–1995 (2016).

4. S. Park and S. Hashimoto, “Autonomous mobile robot navigation using passive RFID in indoor environment,” IEEE Trans. Ind. Electron. 56(7), 2366–2373 (2009).

5. W. Guan, Y. Wu, C. Xie, H. Chen, Y. Cai, and Y. Chen, “High-precision approach to localization scheme of visible light communication based on artificial neural networks and modified genetic algorithms,” Opt. Eng. 56(10), 1 (2017).

6. W. Guan, S. Chen, S.-S. Wen, Z. Tan, H. Song, and W. Hou, “High-Accuracy Robot Indoor Localization Scheme based on Robot Operating System using Visible Light Positioning,” IEEE Photonics J. 12(2), 1–16 (2020).

7. Z. Xie, W. Guan, J. Zheng, X. Zhang, S. Chen, and B. Chen, “A High-Precision, Real-Time, and Robust Indoor Visible Light Positioning Method Based on Mean Shift Algorithm and Unscented Kalman Filter,” Sensors 19(5), 1094 (2019).

8. X. Liu, X. Wei, and L. Guo, “DIMLOC: Enabling High-Precision Visible Light Localization Under Dimmable LEDs in Smart Buildings,” IEEE Internet Things J. 6(2), 3912–3924 (2019).

9. H. Huang, B. Lin, L. Feng, and H. Lv, “Hybrid indoor localization scheme with image sensor-based visible light positioning and pedestrian dead reckoning,” Appl. Opt. 58(12), 3214–3221 (2019).

10. J. Xu, C. Gong, and Z. Xu, “Experimental indoor visible light positioning systems with centimeter accuracy based on a commercial smartphone camera,” IEEE Photonics J. 10(6), 1–17 (2018).

11. J. Fang, Z. Yang, S. Long, Z. Wu, X. Zhao, F. Liang, Z. L. Jiang, and Z. Chen, “High-Speed Indoor Navigation System based on Visible Light and Mobile Phone,” IEEE Photonics J. 9(2), 1–11 (2017).

12. J.-Y. Kim, S.-H. Yang, Y.-H. Son, and S.-K. Han, “High-resolution indoor positioning using light emitting diode visible light and camera image sensor,” IET Optoelectron. 10(5), 184–192 (2016).
