
Unmanned-aerial-vehicle based optical camera communication system using light-diffusing fiber and rolling-shutter image-sensor

Open Access

Abstract

We put forward and demonstrate a light-diffusing fiber equipped unmanned-aerial-vehicle (UAV) to provide a large field-of-view (FOV) optical camera communication (OCC) system. The light-diffusing fiber acts as a bendable, lightweight and extended light source offering a large FOV for UAV-assisted optical wireless communication (OWC). During flight, the light-diffusing fiber can be tilted or bent; hence, offering a large FOV and supporting a large receiver (Rx) tilting angle are particularly important for UAV-assisted OWC systems. To improve the transmission capacity of the OCC system, a method based on the camera shutter mechanism, known as rolling-shutter decoding, is utilized. The rolling-shutter method makes use of the feature of the complementary-metal-oxide-semiconductor (CMOS) image sensor to extract the signal pixel-row by pixel-row. The data rate can be significantly increased since the capture start time of each pixel-row is different. As the light-diffusing fiber is thin and occupies only a few pixels in the CMOS image frame, a long-short-term-memory neural-network (LSTM-NN) is used to enhance the rolling-shutter decoding. Experimental results show that the light-diffusing fiber can satisfactorily act as an “omnidirectional optical antenna” providing a wide FOV, and a data rate of 3.6 kbit/s can be achieved, fulfilling the pre-forward-error-correction bit-error-rate (pre-FEC BER = 3.8 × 10−3).

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Nowadays, unmanned aerial vehicles (UAVs) have been widely employed in different applications, including architecture, construction, logistics delivery, search and rescue, etc. Recently, UAVs employed in wireless communication have gained significant attention. UAV-assisted wireless communication can provide an additional dimension and mobility, augmenting the existing terrestrial networks. UAVs are shown to be promising for fronthaul and backhaul networks [1]. Besides, they can handle unexpectedly large amounts of data and can be utilized to provide urgent communication during catastrophic disasters, e.g. earthquakes or hurricanes [2]. Due to the increasing demand for wireless communication bandwidth in different broadband applications, the traditional radio frequency (RF) communication spectrum is heavily congested, and industry and academia are now expanding the communication spectrum to visible light and infrared, known as visible light communication (VLC) and optical wireless communication (OWC) [3–6]. VLC/OWC provides several transmission advantages, e.g. it is electromagnetic-interference-free and license-free. As it does not interfere with RF signals, it can be used to augment RF communication to provide additional bandwidth. In addition, OWC can also enable many unique applications, such as non-terrestrial networks covering underwater [7,8] and space [9,10], as well as high-precision positioning [11,12], etc. VLC/OWC is also considered promising in the future 6G network [13].

The tradeoff between UAV weight and battery life makes it challenging to install additional batteries or equipment on a UAV for communication. Hence, utilizing VLC/OWC on UAVs is particularly promising because the light-emitting-diode (LED) transmitter (Tx) is lightweight and highly energy efficient. The performance of UAV-assisted VLC/OWC subjected to different weather conditions has been studied [14]. Besides, UAV-assisted VLC/OWC for high-speed-train backhauling and post-disaster monitoring has also been proposed [15–17]. Figure 1 shows some potential uses of a UAV-assisted OWC system; for example, it can provide non-line-of-sight (NLOS) transmission covering network “blind spots”, or act as relays and repeaters for wireless backhauling, as illustrated in Figs. 1(a) and 1(b) respectively. Although UAV-assisted OWC systems can provide many advantages, several technical challenges remain. One of them is the pointing error in the OWC system [18], which is caused by random fluctuation of the angle-of-arrival at the receiver (Rx), leading to optical power loss at the Rx. Furthermore, the OWC Tx and Rx installed on the UAV should be lightweight from both the battery-lifetime and government-regulation points of view. All these add challenges to UAV-assisted OWC systems.

Fig. 1. Potential uses of the UAV-assisted OWC system, (a) providing NLOS transmission, or (b) as relays and repeaters for wireless backhauling.

One realization of VLC/OWC is the adoption of image sensors or cameras, known as optical camera communication (OCC) [19–21]. These systems can be easily realized using the existing LED lamps on UAVs as Txs, while smart-phone image sensors or surveillance cameras serve as Rxs. UAV-assisted OCC systems utilizing a red LED panel [16] and red-green-blue (RGB) LEDs [17] have been experimentally demonstrated recently. One disadvantage is that the data rate is limited by the camera frame rate, which is typically only 60 fps unless an expensive high-speed camera is utilized. To improve the transmission capacity, a method based on the camera shutter mechanism, known as rolling-shutter decoding, can be utilized. The rolling-shutter method makes use of the feature of the complementary-metal-oxide-semiconductor (CMOS) image sensor to extract the signal pixel-row by pixel-row. The data rate can be significantly increased since the capture start time of each pixel-row is different.
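To make the row-wise extraction concrete, the following is a minimal Python sketch (not the authors' code) that turns one captured frame into a bit stream; the file name 'frame.png' and the simple mean threshold are illustrative assumptions.

```python
# Minimal rolling-shutter extraction sketch (illustrative, not the authors' code).
# Each pixel-row is exposed at a slightly different time, so one frame carries
# many OOK bits instead of the single bit a global-shutter reading would give.
import cv2
import numpy as np

frame = cv2.imread('frame.png', cv2.IMREAD_GRAYSCALE)  # hypothetical capture
row_signal = frame.mean(axis=1)              # average each pixel-row -> 1-D trace
threshold = row_signal.mean()                # simple mid-level decision threshold
bits = (row_signal > threshold).astype(int)  # bright/dark stripes -> logic 1/0
# With PPB pixel-rows per logic bit, one frame carries about rows/PPB bits,
# so the data rate scales as (rows / PPB) x frame_rate.
```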

Different decoding schemes for the rolling-shutter method have been demonstrated to improve the transmission capacity and distance, such as polynomial curve fitting [19,22], the beacon-jointed packet reconstruction technique [23], multiple-input multiple-output (MIMO) [24], and the adaptive extreme-value-averaging scheme [25]. Recently, machine-learning schemes have also been utilized to further enhance rolling-shutter decoding, such as logistic regression and classification [26], convolutional neural networks [27,28], an artificial neural network (ANN) with Z-score normalization [29], an accumulative sampling scheme [30], grayscale conversion and matched filtering [31], and a pixel-row-per-bit neural-network (PPB-NN) [32,33], etc. Table 1 summarizes different OCC rolling-shutter decoding schemes reported recently. Besides the data rate and transmission distance, one important factor for the realization of OCC is the Tx field-of-view (FOV). This is particularly crucial for UAV-assisted OCC systems, in which the orientation of the UAV is not easy to adjust for aligning the Tx and Rx due to the turbulence experienced by the UAV. A light-coupled illuminating plastic optical fiber (POF) or glass light-diffusing fiber could be promising, as it can act as a flexible “omnidirectional optical antenna” providing a 360° FOV.

Table 1. Different OCC rolling-shutter decoding schemes

Here, we demonstrate a laser-diode (LD)-coupled glass light-diffusing fiber equipped on a UAV to provide a wide-FOV OCC system. Compared with our previous demonstration [34], a telescope-based Rx is utilized in front of the camera to extend the free-space transmission distance from 1 m to 23 m. The light-diffusing fiber is commercially available in different lengths for decoration and for lighting irregular spaces. Reference [35] reports that the evolution of optical fiber technology has revolutionized a variety of fields, from traditional optical transmission to environmental monitoring and biomedicine. In this paper, the light-diffusing fiber is employed as a lightweight and bendable Tx for the UAV-assisted OCC system. As the LD-coupled light-diffusing fiber is thin and occupies only a few pixels in the CMOS image frame, a long-short-term-memory neural-network (LSTM-NN) is used to enhance the rolling-shutter decoding. The LSTM-NN can mitigate the signal fluctuations of the flying platform using its temporal memory characteristics [36]. Experimental results show that the LD-coupled light-diffusing fiber can provide a wide FOV. A data rate of 3.6 kbit/s, fulfilling the pre-forward-error-correction bit-error-rate (pre-FEC BER = 3.8 × 10−3), can be achieved at a transmission distance of 23 m.

2. Experiment and Algorithm

Figure 2(a) shows the experimental setup of the light-diffusing fiber equipped UAV OCC system supporting a wide FOV. The light-diffusing fiber (Corning Fibrance) acts as a bendable and flexible extended light source providing a wide 360° FOV around the fiber circumference and a 120° FOV along the fiber length when viewed from the fiber ends. The light-diffusing fiber has core and cladding diameters of 170 µm and 230 µm respectively. It has a silica core containing non-periodically distributed scattering nanostructures in both the radial and axial directions. These nanostructures range in size from 50 to 500 nm, and can effectively scatter the propagating light at different visible wavelengths. When blue light from a blue LD is launched into one end of the fiber, it hits these nanostructures, which diffuse the light out from the surface of the fiber. Yellow phosphor, e.g. Ce:YAG, is coated on the surface of the fiber. When the diffused blue light hits the yellow phosphor, yellow light is produced. Eventually, the combined yellow and blue light produces white light along the surface of the fiber, allowing OCC operation.

The light-diffusing fiber is pigtailed to a blue LD and a driver circuit producing on-off-keying (OOK) data. The LD module has a length of 1.25 cm, and the whole LD-coupled light-diffusing fiber module weighs 20 g. The blue LD has a wavelength of 450 nm and an output power of 20 mW. The UAV (Ryze Tello) has dimensions of about 9.8 × 9.3 × 4.1 cm and weighs 80 g including the battery. According to the specification, it has a maximum flying speed, distance and height of 8 m/s, 100 m and 30 m respectively. The OCC signal emitted by the light-diffusing fiber can be received by one or more optical cameras at ground stations, as well as by the embedded cameras of other UAVs. Here, the optical Rx is a smart-phone mounted on a telescope. The smart-phone CMOS camera has a frame rate of 30 fps and a resolution of 1920 × 1080 pixels. The transmission distance is about 23 m. Figures 2(b) and 2(c) show photos of the hovering light-diffusing fiber equipped UAV when the light-diffusing fiber is “OFF” and “ON” respectively. It is worth mentioning that during flight, the light-diffusing fiber will be tilted or bent and is not perpendicular to the ground; hence, supporting a large FOV as well as a large Rx tilting angle is particularly important for the UAV-assisted OCC system.

Fig. 2. (a) Experimental setup of the light-diffusing fiber equipped UAV OCC system supporting wide FOV. Photos of the hovering light-diffusing fiber equipped UAV when it is (b) “OFF” and (c) “ON”.

As discussed before, the rolling-shutter method makes use of the feature of the CMOS image sensor to extract the signal pixel-row by pixel-row. The data rate can be significantly increased since the capture start time of each pixel-row is different. This means that bright and dark patterns representing light “ON” and “OFF” can be observed by the smart-phone. Figure 3(a) illustrates the architecture of the rolling-shutter decoding scheme, including the training phase and the testing phase. During the training phase, training images are fed into the “Data Preprocessing” block (i.e. the red block) shown in Fig. 3(b). Inside the “Data Preprocessing” block, the light-diffusing fiber is first identified and located. Then, the image frames are converted into grayscale values from 0 (total darkness) to 255 (total brightness). Since the light-diffusing fiber is a line light source, which is very thin and occupies only a few pixels in the horizontal direction, the LSTM-NN is proposed to enhance the rolling-shutter decoding performance. After this, the grayscale values of the light-diffusing fiber in each pixel-row are arranged into a column matrix of grayscale values. Then, a complete OCC packet is extracted by searching for headers. After this, pixel-row-per-bit (PPB) calculation and re-sampling are performed; they ensure that the same number of pixels per logic bit is used for the labels in the LSTM-NN. After the training phase, the testing phase is performed. As shown in Fig. 3(a), the training and testing phases are similar; finally, the bit-error-rate (BER) is measured using bit-by-bit comparison.
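A hedged Python sketch of these preprocessing steps follows; the header pattern, the correlation-based search and the linear re-sampling are our illustrative assumptions rather than the authors' exact implementation.

```python
# Sketch of the "Data Preprocessing" block: header search, then re-sampling so
# every logic bit holds exactly PPB samples for LSTM-NN labeling (illustrative).
import numpy as np

def extract_packet(column, header_bits, ppb, payload_bits):
    """column: 1-D grayscale trace along the fiber (one value per pixel-row)."""
    template = np.repeat(header_bits, ppb).astype(float)    # header at PPB rate
    # slide the header template over the trace and pick the best-matching start
    scores = [float(np.dot(column[i:i + len(template)], template))
              for i in range(len(column) - len(template))]
    start = int(np.argmax(scores)) + len(template)          # payload start index
    packet = column[start:start + payload_bits * ppb]
    # linear re-sampling so each logic bit is represented by exactly ppb samples
    resampled = np.interp(np.linspace(0, len(packet) - 1, payload_bits * ppb),
                          np.arange(len(packet)), packet)
    return resampled.reshape(payload_bits, ppb)
```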

Fig. 3. Architectures of (a) the rolling-shutter decoding scheme, including the training and testing phases; (b) the “Data Preprocessing” block.

Figure 4(a) shows the proposed LSTM-NN model used in both the training and testing phases. The LSTM-NN has two LSTM layers with 64 and 32 neurons respectively. Batch normalization is performed after the first LSTM layer. The last three layers are fully-connected (FC) dense layers with 32, 16 and 1 neurons respectively. Dropout layers are employed to avoid over-fitting. The activation functions are ReLU, ReLU and Sigmoid, as shown in Fig. 4(a). The loss function is binary cross-entropy, and the Adam optimizer is used for parameter updates during the training phase. The numbers of training and testing frames are 200 and 100 respectively. Figure 4(b) shows the structure of an LSTM cell used in the first two layers of the LSTM-NN model. The parameters xt, σ, Ct-1, Ct, ht-1 and ht are the current input, the Sigmoid function, the memory from the last LSTM cell, the newly updated memory, the output from the last LSTM cell, and the current output respectively. There are three internal gates in each LSTM cell, known as the forget, input and output gates. The forget gate determines which information to erase; the input gate determines which new information to store; and the output gate generates the output based on the cell state.
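The described model can be sketched in a few lines of TensorFlow/Keras. This is a minimal reconstruction, not the authors' code: the dropout rate (0.2) and the 9-sample input window (3 adjacent logic bits at PPB = 3, see Fig. 5) are our assumptions where the text leaves details open.

```python
# Keras sketch of the LSTM-NN in Fig. 4(a); dropout rate and window length are
# assumptions, since they are not specified in the text.
import tensorflow as tf
from tensorflow.keras import layers, models

PPB = 3                       # pixel-rows per logic bit
WINDOW = 3 * PPB              # 3 adjacent logic bits -> 9 grayscale samples

model = models.Sequential([
    layers.Input(shape=(WINDOW, 1)),          # column of grayscale values
    layers.LSTM(64, return_sequences=True),   # 1st LSTM layer, 64 neurons
    layers.BatchNormalization(),              # batch norm after 1st LSTM
    layers.LSTM(32),                          # 2nd LSTM layer, 32 neurons
    layers.Dropout(0.2),                      # against over-fitting
    layers.Dense(32, activation='relu'),      # FC dense layers: 32, 16, 1
    layers.Dropout(0.2),
    layers.Dense(16, activation='relu'),
    layers.Dense(1, activation='sigmoid'),    # logic-bit probability
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```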

Fig. 4. Architectures of (a) the proposed LSTM-NN model, and (b) an LSTM cell.

Inside the forget gate, the current input xt and the output from the previous LSTM cell ht-1 are fed into the cell, multiplied by the weight matrix Wf, with the addition of a bias bf. The result is passed through a Sigmoid function to give ft, as illustrated in Eq. (1). The information is retained or forgotten depending on whether the output is 1 or 0 respectively.

$${f_t} = \sigma ({W_f}({h_{t - 1}},{x_t}) + {b_f})$$

Inside the input gate, a process similar to that of the forget gate is applied. The inputs xt and ht-1 are regulated by the Sigmoid function to produce it, as shown in Eq. (2).

$${i_t} = \sigma ({W_i}({h_{t - 1}},{x_t}) + {b_i})$$

After this, a candidate vector is generated by a hyperbolic tangent function, producing outputs from -1 to +1. The values in this vector and the regulated values are multiplied to obtain the needed information, as illustrated in Eq. (3).

$$\tilde{C}_t = \tanh ({W_C}({h_{t - 1}},{x_t}) + {b_C})$$

Inside the output gate, the needed information and the cell state are extracted and output. First, the current cell state Ct is produced from the forget gate and the input gate, as illustrated in Eq. (4).

$${C_t} = {f_t}\cdot {C_{t - 1}} + {i_t}\cdot {\tilde{C}_t}$$

Finally, the regulated vector ot and the current output of the LSTM cell can be obtained as illustrated in Eq. (5) and Eq. (6), respectively, where • is the Hadamard product (i.e. element-wise product).

$${o_t} = \sigma ({W_o}({h_{t - 1}},{x_t}) + {b_o})$$
$${h_t} = {o_t}\cdot \tanh ({C_t})$$
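The gate equations (1)–(6) can be traced with a few lines of NumPy. In the sketch below, the toy dimensions and random weights are placeholders; only the gate arithmetic follows the equations above.

```python
# One LSTM cell step following Eqs. (1)-(6); weights are random placeholders.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell_step(x_t, h_prev, C_prev, W, b):
    z = np.concatenate([h_prev, x_t])         # concatenated (h_{t-1}, x_t)
    f_t = sigmoid(W['f'] @ z + b['f'])        # Eq. (1): forget gate
    i_t = sigmoid(W['i'] @ z + b['i'])        # Eq. (2): input gate
    C_tilde = np.tanh(W['C'] @ z + b['C'])    # Eq. (3): candidate memory
    C_t = f_t * C_prev + i_t * C_tilde        # Eq. (4): new cell state
    o_t = sigmoid(W['o'] @ z + b['o'])        # Eq. (5): output gate
    h_t = o_t * np.tanh(C_t)                  # Eq. (6): current output
    return h_t, C_t

# toy example: 1 input feature, hidden size 4
rng = np.random.default_rng(0)
W = {k: rng.normal(size=(4, 5)) for k in 'fiCo'}
b = {k: np.zeros(4) for k in 'fiCo'}
h_t, C_t = lstm_cell_step(np.array([0.7]), np.zeros(4), np.zeros(4), W, b)
```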

Figure 5 illustrates the operation of the LSTM layer. Here, we use 3 adjacent logic bits for the feature extraction, and each logic bit has 3 pixels (i.e. PPB = 3). As also illustrated in Fig. 5, the delay is one PPB (i.e. 3 pixels).

Fig. 5. Operation of the LSTM layer. 3 adjacent logic bits are used for the feature extraction and the PPB is equal to 3.

3. Results and discussions

Figure 6(a) shows the measured BER of the light-diffusing fiber OCC system. Here, we also compare the BER performance of a conventional ANN. The ANN has been optimized with an input layer, an output layer and 5 FC hidden layers, with neuron numbers of 1, 2, 40, 63, 138, 512 and 138 respectively. The ReLU activation function is applied to the first 6 layers and Softmax to the last layer. The loss function is sparse categorical cross-entropy and the optimizer is Adam. In Fig. 6(a), we can observe that the proposed LSTM-NN provides a significant BER enhancement, mitigating the signal distortion caused by signal fluctuations of the flying platform. When the LSTM-NN is utilized, the data rate meeting the pre-FEC BER can be enhanced from 2 kbit/s to 3.3 kbit/s. In order to extend the transmission distance, a telescope is inserted in front of the smart-phone camera. We can observe that the transmission distance can be significantly increased to 23 m with a better BER performance. A higher data rate of 3.6 kbit/s can be achieved at a BER of 1.6 × 10−4, which is lower than the pre-FEC BER (i.e. 3.8 × 10−3). This is because the telescope provides a better zoomed-in image of the rolling-shutter pattern of the light-diffusing fiber, at the expense of an additional component. We also measure the PPB of the system, as shown in Fig. 6(b). At the highest bit rate of 3.6 kbit/s, the PPB is already 3, meaning that one logic bit is represented by 3 pixel-rows. To further increase the data rate, higher-frame-rate cameras (e.g. 60 fps or 120 fps) or higher-resolution image sensors (e.g. 4K/8K) can be used.
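As a simple illustration of the bit-by-bit BER comparison against the pre-FEC limit, a short hedged Python snippet follows; the bit arrays are placeholders, not measured data.

```python
# Bit-by-bit BER measurement sketch; tx/rx bit arrays are placeholders.
import numpy as np

def bit_error_rate(tx_bits, rx_bits):
    tx, rx = np.asarray(tx_bits), np.asarray(rx_bits)
    return float(np.mean(tx != rx))       # fraction of erroneous bits

PRE_FEC_BER = 3.8e-3                      # pre-FEC BER limit used in the text
tx = np.random.randint(0, 2, 120)         # one 120-bit OCC packet
rx = tx.copy(); rx[7] ^= 1                # inject a single bit error
ber = bit_error_rate(tx, rx)              # -> 1/120, about 8.3e-3
print(ber, 'below pre-FEC limit:', ber <= PRE_FEC_BER)
```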

Fig. 6. (a) BER measurements of the proposed light-diffusing fiber OCC system at different data rates and transmission distances. (b) PPB measurements of the proposed system at different data rates.

As the light-diffusing fiber emits light omnidirectionally around the fiber circumference, we can observe in Fig. 7(a) that a similar BER performance is achieved 360° around the fiber circumference. Here, the data rate used is 3.3 kbit/s. Then, we analyze and measure the rotation angle of the smart-phone Rx with respect to the light-diffusing fiber light source. Figure 7(b) illustrates that the pre-FEC BER can be achieved when the smart-phone Rx rotation angle is within ±70°. The rotation angle is defined as illustrated in Fig. 8(a). Here, we also analyze the maximum smart-phone Rx rotation angle. As shown in Fig. 8(b), the rolling-shutter pattern of the light-diffusing fiber light source can be captured by the smart-phone CMOS image sensor with a resolution of width × length = 1080 × 1920 pixels; in this photo, the rotation angle is zero. Assume the light-diffusing fiber at θ = 0° occupies x pixels in the horizontal direction and y pixels in the vertical direction. When it is rotated from 0° to a rotation angle θ, the observed light-diffusing fiber on the screen is elongated. This means that the light-diffusing fiber light source still occupies x pixels in the horizontal direction; however, the number of pixels in the vertical direction increases. When the rotation angle is further increased, a complete OCC packet cannot be observed; hence, the data cannot be successfully decoded. In the extreme case of θ = 90°, the rolling-shutter pattern cannot be observed at all. The maximum rotation angle of the smart-phone Rx is therefore restricted by the length resolution of the CMOS image sensor, not by the width. Here, we follow a similar analysis to [32] and find that the maximum rotation angle θMAX can be expressed as in Eq. (7).

$${\theta _{MAX}} \approx {\tan ^{ - 1}}\left(\frac{\textrm{Sensor-length (pixels)}}{PPB\ \textrm{(pixels/bit)} \times {B_{payload}}\ \textrm{(bits)}}\right)$$

Fig. 7. BER measurements of the proposed light-diffusing fiber OCC system (a) around the fiber circumference, and (b) at different smart-phone Rx rotation angles.

Fig. 8. (a) Analysis of the maximum smart-phone Rx rotation angle with respect to the light-diffusing fiber. (b) Photo of the rolling-shutter pattern of the light-diffusing fiber captured by the smart-phone CMOS image sensor.

In our experiment, the image sensor length is 1920 pixels, the OCC data packet length is 120 bits, and the PPB is 3. The data rate used is 3.6 kbit/s (i.e. 30 fps × 120 bits). Hence, the maximum theoretical rotation angle of the Rx is ∼70°, which agrees well with the experimental result in Fig. 7(b). It is also worth mentioning that although the light-diffusing fiber can provide a 360° FOV around the circumference and a 120° FOV along the fiber, the whole-system FOV is ultimately determined by the combination of all the elements of the VLC system, including the Tx and Rx. Hence, the whole-system FOV here is 140° (i.e. ±70°), as illustrated in Fig. 7(b). In this proof-of-concept demonstration, the transmission distance is limited by the diffused optical signal power from the light-diffusing fiber. The transmission distance could be improved by bundling several light-diffusing fibers to increase the surface area and the diffused optical power. A higher launch power from the blue LD could also be used to increase the diffused optical signal power.

4. Conclusion

We have demonstrated an LD-coupled glass light-diffusing fiber equipped on a UAV to provide a wide-FOV OCC system. The light-diffusing fiber was employed as a lightweight and bendable Tx for the UAV-assisted OCC system. As the LD-coupled light-diffusing fiber was thin and occupied only a few pixels in the CMOS image frame, an LSTM-NN was used to enhance the rolling-shutter decoding. The LSTM-NN can mitigate the signal fluctuations of the flying platform using its temporal memory characteristics. The detailed operating principle of the LSTM-NN was discussed. Compared with the conventional ANN, the data rate was enhanced from 2 kbit/s to 3.3 kbit/s. To extend the transmission distance, a telescope was inserted, and a data rate of 3.6 kbit/s over a free-space transmission distance of 23 m, fulfilling the pre-FEC BER (i.e. 3.8 × 10−3), was achieved.

Funding

National Science and Technology Council (NSTC-109-2221-E-009-155-MY3, NSTC-110-2221-E-A49-057-MY3).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. J.-H. Lee, K.-H. Park, Y.-C. Ko, and M.-S. Alouini, “A UAV-mounted free space optical communication: trajectory optimization for flight time,” IEEE Trans. Wireless Commun. 19(3), 1610–1621 (2020). [CrossRef]  

2. A. Restas, “Drone applications for supporting disaster management,” World Journal of Engineering and Technology 03(03), 316–321 (2015). [CrossRef]  

3. R. Zhang, P. C. Peng, X. Li, S. Liu, Q. Zhou, J. He, Y. W. Chen, S. Shen, S. Yao, and G. K. Chang, “4 × 100-Gb/s PAM-4 FSO transmission based on polarization modulation and direct detection,” IEEE Photon. Technol. Lett. 31(10), 755–758 (2019). [CrossRef]  

4. C. W. Chow, C. H. Yeh, Y. Liu, Y. Lai, L. Y. Wei, C. W. Hsu, G. H. Chen, X. L. Liao, and K. H. Lin, “Enabling techniques for optical wireless communication systems,” Proc. OFC 2020, paper M2F.1. (Invited).

5. H. H. Lu, Y. P. Lin, P. Y. Wu, C. Y. Chen, M. C. Chen, and T. W. Jhang, “A multiple-input-multiple-output visible light communication system based on VCSELs and spatial light modulators,” Opt. Express 22(3), 3468–3474 (2014). [CrossRef]  

6. C. H. Chang, C. Y. Li, H. H. Lu, C. Y. Lin, J. H. Chen, Z. W. Wan, and C. J. Cheng, “A 100-Gb/s multiple-input multiple-output visible laser light communication system,” J. Lightwave Technol. 32(24), 4723–4729 (2014). [CrossRef]  

7. C. Shen, Y. Guo, H. M. Oubei, T. K. Ng, G. Liu, K. H. Park, K. T. Ho, M. S. Alouini, and B. S. Ooi, “20-meter underwater wireless optical communication link with 1.5 Gbps data rate,” Opt. Express 24(22), 25502–25509 (2016). [CrossRef]  

8. H. H. Lu, C. Y. Li, H. H. Lin, W. S. Tsai, C. A. Chu, B. R. Chen, and C. J. Wu, “An 8 m/9.6 Gbps underwater wireless optical communication system,” IEEE Photonics J. 8(5), 1–7 (2016). [CrossRef]  

9. J. Shapiro, S. Guha, and B. Erkmen, “Ultimate channel capacity of free-space optical communications [Invited],” J. Opt. Netw. 4(8), 501–516 (2005). [CrossRef]  

10. M. Toyoshima, “Trends in satellite communications and the role of optical free-space communications [Invited],” J. Opt. Netw. 4(6), 300–311 (2005). [CrossRef]  

11. C. Hsu, S. Liu, F. Lu, C. Chow, C. Yeh, and G. Chang, “Accurate indoor visible light positioning system utilizing machine learning technique with height tolerance,” Proc. OFC, (Optica Publishing Group, 2018), paper M2K.2.

12. Y. C. Chuang, Z. Q. Li, C. W. Hsu, Y. Liu, and C. W. Chow, “Visible light communication and positioning using positioning cells and machine learning algorithms,” Opt. Express 27(11), 16377–16383 (2019). [CrossRef]  

13. N. Chi, Y. Zhou, Y. Wei, and F. Hu, “Visible light communication in 6G: advances, challenges, and prospects,” IEEE Veh. Technol. Mag. 15(4), 93–102 (2020). [CrossRef]  

14. M. Alzenad, M. Z. Shakir, H. Yanikomeroglu, and M.-S. Alouini, “FSO-based vertical backhaul/fronthaul framework for 5G+ wireless networks,” IEEE Commun. Mag. 56(1), 218–224 (2018). [CrossRef]  

15. H. S. Khallaf and M. Uysal, “UAV-based FSO communications for high speed train backhauling,” Proc. IEEE WCNC, Marrakesh, Morocco, 1–6 (2019).

16. H. Takano, D. Hisano, M. Nakahara, K. Suzuoki, K. Maruta, Y. Onodera, R. Yaegashi, and Y. Nakayama, “Visible light communication on LED-equipped drone and object-detecting camera for post-disaster monitoring,” IEEE 93rd Vehicular Technology Conference (VTC2021-Spring), Helsinki, Finland, 2021, pp. 1–5.

17. H. Takano, M. Nakahara, K. Suzuoki, Y. Nakayama, and D. Hisano, “300-meter long-range optical camera communication on RGB-LED-equipped drone and object-detecting camera,” IEEE Access 10, 55073–55080 (2022). [CrossRef]  

18. M. S. Bashir and M.-S. Alouini, “Optimal positioning of hovering UAV relays for mitigation of pointing error in free-space optical communications,” IEEE Trans. Commun. 70(11), 7477–7490 (2022). [CrossRef]  

19. C. Danakis, M. Afgani, G. Povey, I. Underwood, and H. Haas, “Using a CMOS camera sensor for visible light communication,” Proc. OWC 12, 1244–1248.

20. P. Luo, M. Zhang, Z. Ghassemlooy, H. L. Minh, H. M. Tsai, X. Tang, L. C. Png, and D. Han, “Experimental demonstration of RGB LED-based optical camera communications,” IEEE Photonics J. 7(5), 1–12 (2015). [CrossRef]  

21. C. W. Chow, Y. Liu, C. H. Yeh, Y. H. Chang, Y. S. Lin, K. L. Hsu, X. L. Liao, and K. H. Lin, “Display light panel and rolling shutter image sensor based optical camera communication (OCC) using frame-averaging background removal and neural network,” J. Lightwave Technol. 39(13), 4360–4366 (2021). [CrossRef]  

22. C. W. Chow, C. Y. Chen, and S. H. Chen, “Visible light communication using mobile-phone camera with data rate higher than frame rate,” Opt. Express 23(20), 26080–26085 (2015). [CrossRef]  

23. W. C. Wang, C. W. Chow, C. W. Chen, H. C. Hsieh, and Y. T. Chen, “Beacon jointed packet reconstruction scheme for mobile-phone based visible light communications using rolling shutter,” IEEE Photonics J. 9(6), 1–6 (2017). [CrossRef]  

24. K. Liang, C. W. Chow, and Y. Liu, “RGB visible light communication using mobile-phone camera and multi-input multi-output,” Opt. Express 24(9), 9383–9388 (2016). [CrossRef]  

25. C. W. Chen, C. W. Chow, Y. Liu, and C. H. Yeh, “Efficient demodulation scheme for rolling-shutter-patterning of CMOS image sensor based visible light communications,” Opt. Express 25(20), 24362–24367 (2017). [CrossRef]  

26. Y. C. Chuang, C. W. Chow, Y. Liu, C. H. Yeh, X. L. Liao, K. H. Lin, and Y. Y. Chen, “Using logistic regression classification for mitigating high noise-ratio advisement light-panel in rolling-shutter based visible light communications,” Opt. Express 27(21), 29924–29929 (2019). [CrossRef]  

27. L. Liu, R. Deng, and L. K. Chen, “47-kbit/s RGB-LED-based optical camera communication based on 2D-CNN and XOR-based data loss compensation,” Opt. Express 27(23), 33840–33846 (2019). [CrossRef]  

28. L. Liu, R. Deng, J. Shi, J. He, and L. Chen, “Beyond 100-kbit/s transmission over rolling shutter camera-based VLC enabled by color and spatial multiplexing,” Proc. OFC 2020, OSA Technical Digest (Optica Publishing Group, 2020), paper M1J.4.

29. K. L. Hsu, C. W. Chow, Y. Liu, Y. C. Wu, C. Y. Hong, X. L. Liao, K. H. Lin, and Y. Y. Chen, “Rolling-shutter-effect camera-based visible light communication using RGB channel separation and an artificial neural network,” Opt. Express 28(26), 39956–39962 (2020). [CrossRef]  

30. P. Zhang, Q. Wang, Y. Yang, Y. Wang, Y. Sun, W. Xu, J. Luo, and L. Chen, “Enhancing the performance of optical camera communication via accumulative sampling,” Opt. Express 29(12), 19015–19023 (2021). [CrossRef]  

31. S. R. Teli, K. Eollosova, S. Zvanovec, Z. Ghassemlooy, and M. Komanec, “Optical camera communications link using an LED-coupled illuminating optical fiber,” Opt. Lett. 46(11), 2622–2625 (2021). [CrossRef]  

32. D. C. Tsai, Y. H. Chang, C. W. Chow, Y. Liu, C. H. Yeh, C. W. Peng, and L. S. Hsu, “Optical camera communication (OCC) using a laser-diode coupled optical-diffusing fiber (ODF) and rolling shutter image sensor,” Opt. Express 30(10), 16069–16077 (2022). [CrossRef]  

33. Y. S. Lin, C. W. Chow, Y. Liu, Y. H. Chang, K. H. Lin, Y. C. Wang, and Y. Y. Chen, “PAM4 rolling-shutter demodulation using a pixel-per-symbol labeling neural network for optical camera communications,” Opt. Express 29(20), 31680–31688 (2021). [CrossRef]  

34. Y. H. Chang, D. C. Tsai, C. W. Chow, C. C. Wang, S. Y. Tsai, Y. Liu, and C. H. Yeh, “Lightweight light diffusing fiber transmitter equipped unmanned-aerial-vehicle (UAV) for large field-of-view (FOV) optical wireless communication,” Proc. OFC, USA, 2023. Paper Th3H.6.

35. M. S. Soares, M. Vidal, N. F. Santos, F. M. Costa, C. Marques, S. O. Pereira, and C. Leitão, “Immunosensing based on optical fiber technology: recent advances,” Biosensors 11(9), 305 (2021). [CrossRef]  

36. S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation 9(8), 1735–1780 (1997). [CrossRef]  



