
Efficient coding and detection of ultra-long IDs for visible light positioning systems


Abstract

Visible light positioning (VLP) is a promising technique to complement Global Navigation Satellite Systems (GNSS) such as the Global Positioning System (GPS) and the BeiDou Navigation Satellite System (BDS), offering low cost and high accuracy. It is especially valuable in indoor environments, where satellite signals are weak or even unavailable. Large-scale application of VLP requires a considerable number of light-emitting diode (LED) IDs, which creates a demand for long LED ID detection. In particular, a convenient way to provision indoor localization globally is to program a unique ID into each LED during manufacture. This poses a major challenge for image sensors, such as the CMOS camera in everybody’s hands, since a long ID spans multiple frames. In this paper, we investigate the detection of ultra-long IDs using rolling shutter cameras. By analyzing the pattern of data loss in each frame, we propose a novel coding technique to improve the efficiency of LED ID detection. We study the performance of the Reed-Solomon (RS) code in this system and design a new coding method that balances performance against decoding complexity. The coding technique decreases the number of frames needed in data processing, significantly reduces the detection time, and improves the detection accuracy. Numerical and experimental results show that the detectable LED ID can be much longer with the coding technique. Moreover, the proposed coding method is shown to achieve performance close to that of the RS code with much lower decoding complexity.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Recently, visible light communication (VLC) has attracted much attention as a promising technique due to its high transmission speed, high security, and low power consumption [1–3]. Among the numerous applications of VLC systems, indoor positioning [4–6] is useful and practical because of its modest requirements on the channel and the receiver. An LED can be modulated at a fairly high frequency and is easy to integrate into an indoor lighting system, which makes it a common optical source in VLC systems. By switching the LED on and off at a certain speed, we can use it to send a data stream to the receiver for positioning.

Besides the photo-detector (PD), the optical camera is an important receiver for visible light positioning (VLP) systems [7]. With the position of the LED in the captured image and the information modulated onto the LED light, the accurate position of the receiver can be calculated [8]. Many schemes have been proposed to realize high transmission performance between an LED and a CMOS camera [10–12]. One of the most practical schemes utilizes the rolling shutter effect of the CMOS camera to detect the data transmitted by the LED [9]. In this scheme, each LED transmitter circularly sends its own ID modulated by on-off keying (OOK), and the rolling shutter camera photographs the rapidly flickering light. The distribution of LEDs in the positioning system is designed to avoid interference among multiple LEDs. Due to the rolling shutter effect, the LED ID can be detected from the stripes of light and dark. By mapping the LED ID to a position in the real world, we can achieve area location. Former works in this field mainly focus on image processing, using multiple LEDs for high-accuracy positioning [7,8,11] or using LED light of different colors to increase the VLC transmission capacity [12]. However, the length of the LED ID is limited when only one photo is used for detection. When the distance between the transmitter and the receiver grows or the interference from background light gets stronger, far fewer stripes remain recognizable, so the LED ID that one photo can carry becomes much shorter. Few former works considered the problem of long LED ID transmission. With the large-scale application of VLP, more and more LED transmitters will be deployed in shopping malls, underground parking lots, automated factories, etc. To satisfy people’s increasing demands for indoor positioning, it is necessary to deploy and detect long LED IDs. In particular, to provide global indoor localization, we need to identify huge numbers of LED transmitters all around the world. A convenient way is to program a unique ID into each LED during manufacture, so efficient detection of ultra-long IDs in the VLP system would be helpful. An ultra-long ID also means more bits can be broadcast, giving operators the flexibility to deliver customized information, e.g., a postal address, GPS coordinates, or any other information the operator would like to send.

To transmit a long LED ID in a VLP system, a common method is to divide the long ID into several blocks, use consecutive photos taken by the rolling shutter camera to detect the blocks through image processing, and finally recover the LED ID by splicing the separate blocks together. This method is commonly known as the direct splicing (DS) method. To achieve a good success rate of LED ID detection with this method, the LED frequency and the frame rate of the rolling shutter camera must be adjusted so that all blocks are received within several consecutive photos, which is a difficult task given the variety of rolling shutter cameras. Besides, since the DS method requires multiple photos for detection, in high-speed indoor VLP, for example in autonomous vehicles, the fast motion of the receiver obviously degrades the photo quality and thus leads to low detection efficiency or even detection failure. A toy example of the DS method is sketched below.
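The following Python sketch is only an illustration of the DS idea; the function name and the block count are our own assumptions, not the paper's. Detection succeeds only when every block has been seen at least once across the photos.

def direct_splice(extracted_blocks, num_blocks=6):
    """Toy sketch of the DS method. `extracted_blocks` is a list of
    (block_index, block_bits) pairs pulled from consecutive photos;
    `num_blocks` is the number of blocks the long ID was split into."""
    seen = {}
    for idx, bits in extracted_blocks:
        seen.setdefault(idx, bits)          # keep the first copy of each block index
    if len(seen) < num_blocks:
        return None                         # at least one block never appeared: detection fails
    # Splice the blocks back together in index order to recover the ID.
    return [bit for idx in sorted(seen) for bit in seen[idx]]

# Example: blocks 0..5 gathered from three photos, block 4 arriving only in the last one.
photos = [[(0, [1, 0, 1]), (1, [0, 0, 1])],
          [(2, [1, 1, 0]), (3, [0, 1, 0]), (5, [1, 1, 1])],
          [(4, [0, 0, 0])]]
led_id = direct_splice([blk for photo in photos for blk in photo])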

In this paper, we propose and demonstrate a coding technique to improve the efficiency of long LED ID detection in a rolling shutter camera based VLP system. We study the performance of the Reed-Solomon (RS) code [13] in this system and design a new code that offers both good performance and low decoding complexity. Because the photo quality fluctuates with the changing environment, burst errors occur in LED ID detection at the receiver. A good way to deal with this problem is to use an erasure code, of which the commonly used RS code achieves good performance. However, RS decoding requires a large amount of computation, especially with long code words. Inspired by the idea of erasure codes, we propose a new coding method for the VLP system. Numerical and experimental results show that with this new coding method, the length of LED ID that can be detected reliably is much longer than with the DS method. In addition, the proposed coding method is shown to achieve performance similar to that of the RS code with much lower decoding complexity.

The rest of this paper is organized as follows. We first present the proposed coding and decoding method for VLP system in Section 2. Then we analyze and propose the proper device settings of transmitter and receiver, investigate the quality of received photos influenced by the environment, and explore the details of coding method in Section 3. The performance and complexity analysis of the coding method is given in Section 4 through simulations and experiments. Finally, the conclusion is drawn in Section 5.

2. Principle of the proposed coding aided VLC based positioning system

Figure 1 shows the block diagram of the proposed coding aided positioning system based on an LED and a rolling shutter camera. At the transmitter, the LED ID is first encoded and divided into several data blocks by series-to-parallel (S/P) conversion. Then, each data block is Manchester encoded and a header for synchronization is inserted at the beginning of every block. After that, the N data blocks are processed by parallel-to-serial (P/S) conversion to modulate the LED driver using on-off keying (OOK). The data are sent over the free-space optical channel by the LED light circularly. At the receiver, the light from the LED and the background is captured by the rolling shutter camera frame by frame. Each photo taken by the camera is processed by the image processing algorithm, and the data blocks in each photo are extracted with the help of the synchronization headers. In total, M data blocks are extracted from all the photos (M ≤ N). Each data block is then Manchester decoded. Finally, the decoder recovers the LED ID from the data blocks after Manchester decoding.
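To make the transmitter-side framing concrete, here is a minimal sketch; the function names and the Manchester bit convention are our assumptions, while the header “100001” is the one quoted in Section 4.1.

def manchester_encode(bits):
    """One common Manchester convention: 1 -> (1, 0) and 0 -> (0, 1).
    The paper does not state which convention it uses."""
    out = []
    for b in bits:
        out.extend((1, 0) if b else (0, 1))
    return out

def frame_block(block_bits, header=(1, 0, 0, 0, 0, 1)):
    """Prepend the synchronization header to a Manchester-encoded data block.
    The header itself is not Manchester encoded, so its run of four zeros
    appears as an extra-wide stripe that the receiver can locate."""
    return list(header) + manchester_encode(block_bits)

# Example: frame one 10-bit coded block for OOK transmission.
ook_stream = frame_block([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])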

Fig. 1 Block diagram of the coding aided VLC based positioning system.

2.1 Rolling shutter and image processing

The rolling shutter mechanism is a method of image acquisition used by CMOS sensor cameras: when a photo or video is taken, the entire image is not captured all at once but sequentially, row by row. As shown in Fig. 2, since the photo records the state of the LED light row by row, when the flashing frequency of the LED is lower than the row scanning frequency of the rolling shutter and higher than the frame rate of the camera, stripes of different light intensity appear in the photos. Thus, the stripes in the captured photos contain the data sent by the LED transmitter.
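The mapping from the LED's bit stream to stripes can be illustrated with a small simulation. The frequencies below are illustrative assumptions only (chosen so that they happen to give the 3.75-pixel stripes mentioned in Section 4.1), not the paper's actual settings.

import numpy as np

f_t = 8.0e3        # assumed LED flashing (bit) frequency, Hz
f_s = 30.0e3       # assumed row sampling frequency of the rolling shutter, rows per second
num_rows = 60      # pixel rows simulated from one frame

bits = np.random.randint(0, 2, size=32)            # OOK data sent circularly by the LED

# Row i is exposed around time i / f_s; the LED level at that instant is the
# bit whose slot of duration 1 / f_t contains the exposure time (wrapping
# circularly, since the ID is repeated).
row_times = np.arange(num_rows) / f_s
row_levels = bits[(row_times * f_t).astype(int) % len(bits)]

# Each bit persists for about f_s / f_t consecutive rows, i.e. the stripe
# width in pixel rows discussed in Section 3.1 (here 30/8 = 3.75 rows).
print("stripe width ~", f_s / f_t, "rows")
print("".join(map(str, row_levels)))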

Fig. 2 The operation of the rolling shutter. The squares above indicate the state of the LED light. The squares below show the output of the camera, i.e., the stripes in the photos.

Every time a photo is captured, it is processed by the image processing algorithm, which is shown in Fig. 3. Firstly, the captured photo is converted into grayscale format. Each pixel is represented from 0 to 255 grayscale levels, in which 0 represents the completely dark and 255 represents completely bright. Then we blur the image row by row to decrease the influence of noise and overexposure. Then we calculate the average grayscale value of each row for further binarization. To recover the logic 1 and 0 of the original data, a proper threshold is selected. Each row is decided as 0 or 1 according to whether its value is lower than the threshold or not. Then we record the results in an array. Since the headers of data blocks transmitted appear wider stripes compared to the data Manchester encoded, the header of each block can be easily found, and then the data block between two adjacent headers can be extracted. If two new adjacent headers cannot be found, move to the next photo captured.
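The sketch assumes OpenCV and NumPy are available; the function names, the blur length, and the simple mean threshold are our own illustrative choices rather than the paper's exact algorithm.

import numpy as np
import cv2  # OpenCV, assumed available

def photo_to_row_bits(photo_bgr, blur_len=5):
    """Grayscale -> row-wise blur -> per-row average -> threshold -> 0/1 per pixel row."""
    gray = cv2.cvtColor(photo_bgr, cv2.COLOR_BGR2GRAY)       # 0 = dark ... 255 = bright
    blurred = cv2.blur(gray, (blur_len, 1))                   # blur along each row only
    row_mean = blurred.mean(axis=1)                           # average grayscale of each row
    threshold = row_mean.mean()                                # one simple threshold choice
    return (row_mean >= threshold).astype(np.uint8)            # row below threshold -> 0, else 1

def find_header_runs(row_bits, min_run):
    """Headers show up as stripes wider than any Manchester-coded data stripe,
    so look for runs of identical rows at least `min_run` rows long
    (min_run would be set from the stripe width)."""
    runs, start = [], 0
    for i in range(1, len(row_bits) + 1):
        if i == len(row_bits) or row_bits[i] != row_bits[start]:
            if i - start >= min_run:
                runs.append((start, i))                        # candidate header location
            start = i
    return runs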

Fig. 3 Flow chart of the image processing.

2.2 The proposed coding method

Figure 4 shows the principle of the proposed encoder. The LED ID of length $r$ is represented as $x=[x_0, x_1, x_2, \ldots, x_{r-1}]$. First, $x$ is equally divided into blocks of the same length. With a proper length $L$, the $i$-th block can be denoted as $a_i=[a_i(0), a_i(1), a_i(2), \ldots, a_i(L-1)]$, whose elements are $a_i(j)=x_{i\times L+j}$. As a result, there are $K$ blocks in total, where $K=r/L$.

Fig. 4 The principle of the proposed encoder.

These $K$ blocks, denoted by $A$, can be expressed as $A=[a_0, a_1, a_2, \ldots, a_{K-1}]$. Then we multiply $A$ by the generator matrix $G$ to derive $B$. The generator matrix $G$ is a binary matrix of size $N \times K$ and is given by

$$G=\begin{bmatrix} I \\ G' \end{bmatrix},$$

where $I$ is the $K \times K$ identity matrix. The submatrix $G'$ is constructed as

$$G'_{ij}=\begin{cases}1, & j-i \in F\\ 0, & \text{otherwise},\end{cases}$$

where $F$ is the set of Fibonacci numbers.

$B$ is composed of $N$ data blocks, where $B=[b_0, b_1, b_2, \ldots, b_{N-1}]$ and $b_i=\sum_{j=0}^{K-1} G_{ij}\, a_j$. The summation here is carried out with modulo-two addition. A binary vector $n_i$ is then inserted at the beginning of each data block; $n_i$ is a fixed-length binary vector carrying the block number. For example, when the length of $n_i$ is 4 bits, $n_0=[0\,0\,0\,0]$, $n_1=[0\,0\,0\,1]$, $n_2=[0\,0\,1\,0]$, and so on. The output of the proposed encoder can be denoted by $S=[\tilde{b}_0, \tilde{b}_1, \tilde{b}_2, \ldots, \tilde{b}_{N-1}]$, where $\tilde{b}_i=[n_i\ b_i]$. Each row of $S$ is further processed and sent circularly by the transmitter.
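The encoder described above can be sketched in a few lines of NumPy. The function and argument names are ours, and for illustration we reuse the 6 × 6 $G'$ that appears later in the experiment of Section 4.3 rather than regenerating it from the Fibonacci rule, whose exact indexing convention the paper does not spell out.

import numpy as np

def encode_led_id(led_id_bits, G_prime, L, idx_len=4):
    """Split the r-bit ID into K = r/L blocks, compute b_i = sum_j G_ij a_j (mod 2)
    with G = [I; G'], and prepend a fixed-length serial number n_i to every block."""
    A = np.asarray(led_id_bits, dtype=np.uint8).reshape(-1, L)      # K x L original blocks
    K = A.shape[0]
    Gp = np.asarray(G_prime, dtype=np.uint8)
    G = np.vstack([np.eye(K, dtype=np.uint8), Gp])                  # N x K generator matrix
    B = G.dot(A) % 2                                                # N coded blocks of L bits
    frames = []
    for i, b in enumerate(B):
        n_i = [(i >> k) & 1 for k in reversed(range(idx_len))]      # e.g. n_2 = [0, 0, 1, 0]
        frames.append(np.concatenate([np.array(n_i, dtype=np.uint8), b]))
    return frames, G

# Example with the experimental parameters of Section 4.3: a 60-bit ID, L = 10, K = 6, N = 12.
G_prime = np.array([[1, 1, 0, 1, 0, 0],
                    [0, 1, 1, 0, 1, 0],
                    [0, 0, 1, 1, 0, 1],
                    [1, 0, 0, 1, 1, 0],
                    [0, 1, 0, 0, 1, 1],
                    [1, 0, 1, 0, 0, 1]], dtype=np.uint8)
led_id = np.random.randint(0, 2, size=60)
frames, G = encode_led_id(led_id, G_prime, L=10)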

2.3 The decoding method

As shown in Fig. 3, data extraction depends on the recognizable stripes and the search for headers. The data block between two adjacent headers can be extracted for further processing, but if two new adjacent headers are not found, no more data blocks can be extracted, which leads to data loss. Through the image processing and data extraction, if $M$ blocks are recovered from the photos and $M \ge K$, we can recover all the original blocks with high probability even though not all of the original blocks are contained in these $M$ blocks.

Figure 5 shows how to recover the unknown original blocks using the received redundant blocks and the generator matrix $G$. Here $a_1$, $a_2$ and $a_3$ are the unknown original data blocks, and the dark dots below represent the received redundant blocks. A connection between an original block and a redundant block means that the original block is one of the blocks summed to form that redundant block. The degree of a redundant block is the number of original blocks combined to form it; when an original block connected to a redundant block is recovered, the degree decreases by one. The decoding algorithm is described as follows:

Fig. 5 Schematic diagram of the decoding procedure.

step 1): Find a redundant block of degree one.

step 2): Recover the original block it connects to, and remove the connection between them.

step 3): Each time an original block is recovered, refresh the degree of every redundant block. Go to step 1) and repeat until all original data blocks are recovered.

step 4): Once all original blocks are known, recover the LED ID by arranging them in order.

If there is no redundant block of degree one and not all the original blocks have been recovered, the decoding process fails. A minimal sketch of this peeling procedure is given below.
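In the sketch (names are ours), `received` maps the serial number $i$ of each extracted coded block to its $L$-bit payload $b_i$, with the $n_i$ prefix already stripped.

import numpy as np

def peel_decode(received, G, K):
    """Recover the K original blocks from any subset of coded blocks using the
    degree-one peeling procedure of Section 2.3; returns None on failure."""
    recovered = [None] * K
    # Each received block: the set of unknown originals it still connects to, and its value.
    pending = [[{j for j in range(K) if G[i, j]},
                np.array(b, dtype=np.uint8).copy()] for i, b in received.items()]
    progress = True
    while progress and any(r is None for r in recovered):
        progress = False
        for conns, val in pending:
            # XOR out every already-recovered original block (removing those connections).
            for j in [j for j in conns if recovered[j] is not None]:
                val ^= recovered[j]
                conns.discard(j)
            if len(conns) == 1:                       # a redundant block of degree one
                j = conns.pop()
                if recovered[j] is None:
                    recovered[j] = val.copy()         # recover the connected original block
                    progress = True                    # degrees are refreshed on the next pass
    if any(r is None for r in recovered):
        return None                                    # no degree-one block left: decoding fails
    return np.concatenate(recovered)                   # arrange the blocks in order

# Example using the encoder sketch of Section 2.2: keep only 6 of the 12 coded blocks.
# received = {i: frames[i][4:] for i in (0, 2, 5, 7, 8, 9)}
# assert np.array_equal(peel_decode(received, G, K=6), led_id)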

3. System performance evaluation and optimization

The detection of the LED ID relies on the stripes in the photos, whose number is strongly influenced by the environment and by the parameters of the transmitter and receiver. In this section, we analyze and propose proper device settings for the transmitter and receiver, investigate how the environment affects the quality of the received photos, and explore the details of the coding scheme.

3.1 Parameters of transmitter and receiver

In the received photos, the binary data appear in the form of dark and bright stripes. An important factor in detection is the width of each stripe, i.e., how many pixel rows a single stripe occupies. To pursue a high data rate, it is better to display more stripes in one photo, but stripes that are too dense become difficult to detect. For the algorithm used in the image processing, our experiments suggest that a stripe width of 3 to 5 pixel rows is a proper choice to transmit more information in one photo while still ensuring efficient detection.

In a rolling shutter camera, the exposure time of each row is the same and should be shorter than the duration of a single bit sent by the transmitter; otherwise, a single row would be a combination of multiple bits. Let the flashing frequency of the LED be $f_t$, which must satisfy

$$\text{Exposure time} \le \frac{1}{f_t}.$$

There is a delay between two adjacent row exposures, which is fixed once the CMOS camera is fixed. In this system, the reciprocal of this fixed delay is the sampling frequency $f_s$ of the receiver, and the width of a stripe in the photos is $f_s/f_t$ pixels. If the number of rows in one photo is $m$, the total amount of information carried by one photo is

$$k_t = m \times \frac{f_t}{f_s}.$$

The quality of the image taken by the camera significantly affects the detection and is mainly influenced by the environment. The intensity of light decreases rapidly with distance from its source and, at long distances, is usually much lower than the background light. The interference of background light and the noise from the camera itself blur the stripes in the photos; as shown in Fig. 6, only the stripes near the LED position can be detected. This data loss caused by the fuzziness of the stripes is the major problem in long LED ID transmission. We evaluate the image quality with a data loss percentage $\mu$. After data extraction in the image processing, the headers and the data recognizable in one photo amount to $k_r$ bits, so the data loss percentage $\mu$ can be calculated as

$$\mu = 1 - \frac{k_r}{k_t}.$$

Fig. 6 The fuzziness of the stripes causes data loss.
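As a quick numerical illustration of these quantities, the values of $m$, $f_t$, and $f_s$ below are assumptions chosen so that they reproduce the 3.75-pixel stripe width and $k_t = 192$ bits quoted in Section 4.1; they are not taken from Table 1.

f_t = 8.0e3      # assumed LED flashing (bit) frequency, Hz; exposure time must be <= 1/f_t
f_s = 30.0e3     # assumed row sampling frequency of the rolling shutter, rows per second
m   = 720        # assumed number of pixel rows in one photo

stripe_width = f_s / f_t          # pixel rows per bit            -> 3.75
k_t = m * f_t / f_s               # bits carried by one photo     -> 192.0
k_r = 134                         # recognizable bits in the photo (assumed measurement)
mu  = 1 - k_r / k_t               # data loss percentage          -> ~0.30
print(stripe_width, k_t, mu)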

3.2 The optimized block length

When we extract a data block from a photo, the valid data coming from the LED ID is only $L$ bits. If $L$ is too large, the blocks in the photos are more likely to be affected by the fuzziness of the stripes, since the recognizable data in one photo is limited. If $L$ is too small, the transmission efficiency also drops, because the header and the serial number $n_i$ must be sent with every block. Once $k_r$ and the total length of the header and $n_i$ are determined, there exists a locally optimal value of $L$ that extracts the most valid data from one photo.

We define the successful detection probability $\lambda$ as the probability that the original LED ID is detected from a certain number of received photos; when $\lambda \ge 80\%$, we consider the detection reliable. $\psi$ is defined as the length of the longest LED ID that can be reliably detected. To determine the relationship between $\psi$ and the block length $L$, we simulated the detection performance of the system with data loss $\mu = 30\%$ and $k_t = 96$ bits, so that $k_r = 67$ bits. The total length of the header and $n_i$ is 6 bits. The results are shown in Fig. 7, where different numbers of photos are used for detection. Figure 7(a) shows the relation between the largest LED ID length and the number of photos used, and Fig. 7(b) shows the relation between the largest LED ID length and the block length. It can be seen that when the block length $L$ is set to 10, the longest LED ID can be detected reliably. Thus, in the following experiments, $L$ is set to 10.

Fig. 7 (a) The relation between the largest LED ID length and the number of photos used; (b) the relation between the largest LED ID length and the block length.

4. Performance of the proposed coding method and complexity analysis

4.1 Numerical results

In this section, we compare the performance of our proposed coding method with that of the DS method and RS code by simulations.

Table 1 shows the parameters of the transmitter and receiver. In this case, the stripe width is 3.75 pixels and $k_t$ in one photo is 192 bits. The header used in the system is “100001”, and the length of the data-block serial number $n_i$ is 4 bits.


Table 1. Parameters of transmitter and receiver

The RS code is a typical erasure code widely used in communication and storage systems. For an $(n,k)$ RS code, the number of original data blocks is $k$, and $n-k$ redundant blocks are transmitted alongside them. In the simulations, the block length of the RS code is set to be the same as that of our proposed coding method, and the length of the detected LED ID is $k \times L$. As shown in Fig. 8, for each number of photos used in detection, we chose the best $(n,k)$ for the RS code that keeps the successful detection probability $\lambda$ above 80%. When using 1 to 4 photos for detection, the RS code parameters are (7, 3), (10, 6), (15, 9) and (20, 12), respectively. For each number of photos used, the generator matrix $G$ for our proposed coding method is built according to the method in Section 2, enabling the longest LED ID transmission with $\lambda \ge 80\%$.

Fig. 8 The length of LED ID that can be reliably detected ($\lambda \ge 80\%$) when the proposed code, the RS code, and the DS method are used.

Figure 8 shows the performance of the different methods. When coding techniques are used, there is a significant improvement over the DS method. Using 1 to 4 photos for detection, our proposed coding method achieves performance close to that of the RS code and is greatly superior to the DS method. When 4 photos are used for detection, the largest LED ID length that can be detected reliably with the DS method is 50 bits; with our coding method it increases to 110 bits, which is 120% longer. The largest LED ID length is 120 bits when using the RS code, whose performance is slightly better than that of our proposed coding method.

4.2 Complexity analysis

The RS code has the best performance but at the expense of higher complexity, which makes real-time detection on mobile terminals difficult. The complexities of the RS code and our proposed coding method are both related to the block length $L$. For the RS code, encoding and decoding are performed in the Galois field GF$(2^L)$. The complexity of an addition in GF$(2^L)$ is $O(L)$, and that of a multiplication is $O(L^2)$. For the proposed coding method, all operations are performed in GF$(2)$, so the complexities of both additions and multiplications are $O(L)$. Since multiplications account for the vast majority of the complexity, it is reasonable to compare the two coding methods by the number of multiplication operations: $O(L)$ for our coding method versus $O(L^2)$ for the RS code. Thus, compared with the RS code, the complexity of the proposed code is reduced by roughly $L/(L+1)$. When $L = 10$, our method reduces the amount of calculation by about 90% compared with the RS code. Overall, the complexity of the proposed coding method is much lower than that of the RS code, especially when the data block length $L$ gets longer.

4.3 Experimental results

Figure 9 shows our experimental indoor visible light positioning system, whose parameters are also set according to Table 1.

Fig. 9 The experimental indoor visible light positioning system.

We carried out experiments to test the performance of the proposed coding method when two photos are used for detection. The LED ID transmitted in the experiment is 60 bits long, divided into 6 blocks. The submatrix $G'$ is

$$G'=\begin{bmatrix} 1 & 1 & 0 & 1 & 0 & 0\\ 0 & 1 & 1 & 0 & 1 & 0\\ 0 & 0 & 1 & 1 & 0 & 1\\ 1 & 0 & 0 & 1 & 1 & 0\\ 0 & 1 & 0 & 0 & 1 & 1\\ 1 & 0 & 1 & 0 & 0 & 1 \end{bmatrix},$$

so that 6 redundant blocks are added to the 6 original blocks. The header designed is “100001”.

We also developed an iOS application that integrates the image processing algorithm and the decoding algorithm. The application uses the front camera, processes the photos, and completes the decoding in real time. The result of each decoding attempt is recorded in the application and exported to calculate the successful detection probability $\lambda$.

The data loss is closely related to the distance between the receiver and the transmitter. The experiments were carried out in the daytime with interference from sunlight and the indoor lighting system. Figure 10 shows the measured data loss at different distances in our experiment, and Fig. 11 shows the successful detection probability when two photos are used for detection. For comparison, the performance of the DS method was also measured. As the experiments were conducted in real time on a common mobile device (iPhone 5s), the RS code was not tested because of its high complexity.

Fig. 10 Data loss $\mu$ versus the distance $d$.

Fig. 11 Successful detection probability $\lambda$ at different distances.

As the distance increases, the data loss grows, especially beyond 3 m, and our coding method begins to show better performance even though only two photos are used for detection. If more photos are available, the performance advantage becomes even larger.

5. Conclusion

In this paper, we propose using a coding technique to solve the problem of long LED ID transmission in a VLP system employing a rolling shutter camera. We also propose a new coding method that achieves high system performance with low computational complexity. With practical application in mind, we show how to set the key parameters to optimize the performance of the VLP system employing the proposed coding method. When 4 photos are used for detection, the maximum length of LED ID that can be detected reliably is increased by 120% using the proposed method compared with the DS method. Compared with the RS code, the proposed code achieves similar performance and reduces the amount of calculation by about 90% when the block length is 10. The coding technique can significantly improve long LED ID transmission and the efficiency of the VLP system.

Funding

International S&T Cooperation Program of China (2015DFG12520).

Acknowledgments

This research was supported by the International S&T Cooperation Program of China (2015DFG12520).

References and links

1. Y. F. Yin, W. Y. Lan, T. C. Lin, C. Wang, M. Feng, and J. J. Huang, “High-speed visible light communication using GaN-based light-emitting diodes with photonic crystals,” J. Lightwave Technol. 35(2), 258–264 (2017).

2. Z. Lu, P. Tian, H. Chen, I. Baranowski, H. Fu, X. Huang, J. Montes, Y. Fan, H. Wang, X. Liu, R. Liu, and Y. Zhao, “Active tracking system for visible light communication using a GaN-based micro-LED and NRZ-OOK,” Opt. Express 25(15), 17971–17981 (2017).

3. A. Sewaiwar, S. V. Tiwari, and Y. H. Chung, “Visible light communication based motion detection,” Opt. Express 23(14), 18769–18776 (2015).

4. T. H. Do and M. Yoo, “An in-depth survey of visible light communication based positioning systems,” Sensors (Basel) 16(5), 678 (2016).

5. Z. Zheng, L. Liu, and W. Hu, “Accuracy of ranging based on DMT visible light communication for indoor positioning,” IEEE Photonics Technol. Lett. 29(8), 679–682 (2017).

6. H. Zheng, Z. Xu, C. Yu, and M. Gurusamy, “Indoor three-dimensional positioning based on visible light communication using Hamming filter,” in Advanced Photonics 2016 (IPR, NOMA, Sensors, Networks, SPPCom, SOF), OSA Technical Digest (online) (Optical Society of America, 2016), paper SpM4E.3.

7. P. Luo, M. Zhang, Z. Ghassemlooy, S. Zvanovec, S. Feng, and P. Zhang, “Undersampled-based modulation schemes for optical camera communications,” IEEE Commun. Mag. 56(2), 204–212 (2018).

8. B. Lin, Z. Ghassemlooy, C. Lin, X. Tang, Y. Li, and S. Zhang, “An indoor visible light positioning system based on optical camera communications,” IEEE Photonics Technol. Lett. 29(7), 579–582 (2017).

9. K. Liang, C.-W. Chow, and Y. Liu, “Mobile-phone based visible light communication using region-grow light source tracking for unstable light source,” Opt. Express 24(15), 17505–17510 (2016).

10. K. Liang, C.-W. Chow, and Y. Liu, “RGB visible light communication using mobile-phone camera and multi-input multi-output,” Opt. Express 24(9), 9383–9388 (2016).

11. R. Boubezari, H. L. Minh, Z. Ghassemlooy, and A. Bouridane, “Smartphone camera based visible light communication,” J. Lightwave Technol. 34(17), 4020–4026 (2016).

12. J. Shi, J. He, J. He, R. Deng, Y. Wei, F. Long, Y. Cheng, and L. Chen, “Multilevel modulation scheme using the overlapping of two light sources for visible light communication with mobile phone camera,” Opt. Express 25(14), 15905–15912 (2017).

13. U. Demir and O. Aktas, “Raptor versus Reed Solomon forward error correction codes,” in Proceedings of IEEE International Symposium on Computer Networks (IEEE, 2006), pp. 264–269.
