
Experimental study of laser spot tracking for underwater optical wireless communication

Open Access

Abstract

In this paper, a novel laser spot tracking algorithm that combines the Kalman filter with the continuously adaptive Meanshift (Camshift) algorithm, termed Cam-Kalm, is proposed and employed in an underwater optical wireless communication (UOWC) system. Since the Kalman filter can predict the state of the target spot from its spatial motion features, the proposed algorithm improves the accuracy and stability of moving laser spot tracking. A 2 m optical wireless communication experimental system with auto-tracking, based on a green laser diode (LD), is built to evaluate the tracking performance of different algorithms. Experimental results verify that the proposed algorithm outperforms conventional tracking algorithms in terms of tracking accuracy, interference resistance, and response time. With the proposed Cam-Kalm algorithm, the experimental system can establish an effective communication link, with a maximum tracking speed of 20 mm/s under the forward-error-correction (FEC) threshold.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Underwater optical wireless communication (UOWC), with the advantages of high transmission rate and strong confidentiality, has emerged as a promising technology and attracted many researchers’ attention. Light-emitting diodes (LEDs) or laser diodes (LDs) are usually adopted as light sources in UOWC systems. Compared with LEDs, LDs offer stronger penetration, longer transmission distance, and higher transmission rate underwater, making them a better choice for underwater optical wireless communication [1]. Most recent studies have concentrated on increasing the range and data rate of LD-based transmission systems. However, an inherent challenge of optical wireless communication systems is the requirement of line-of-sight (LOS) between the moving transmitter and receiver [2,3]. During long-distance underwater transmission, the pointing of the emitted optical beam can easily deviate from the original optical direction owing to interference from bubbles, waves, etc. [4]. Besides, because the target spot and the photodetector receiving area are relatively small, it is difficult to achieve accurate alignment, resulting in poor communication quality or failure to establish a communication link. Therefore, rapid and stable acquisition and tracking of the laser spot are imperative for enhancing the reliability and stability of communication.

In free-space optical communication systems, pointing, acquisition, and tracking (PAT) technology is utilized to establish and maintain accurate and reliable laser communication links [5,6]. However, the control system of a space PAT system is too complex and costly to be applied directly underwater. Low-cost, simple-structure PAT systems for underwater applications are therefore being explored by researchers around the world. An underwater laser spot tracking system between moving vehicles was designed and demonstrated for the first time in [7,8]. The demonstrated precision and robustness could enable 1+ Gbps data links between independent, moving vehicles over several hundred meters in clear ocean water. A novel hybrid scanning strategy exploiting variable beam divergence was developed with the aim of providing robust acquisition while minimizing acquisition time [9]. In [10], to maintain the optical link during the terminal’s movement, a compact, low-power and low-cost PAT system was proposed, in which a camera fixed on the transmitter was utilized to identify and track a special mark. A typical PAT system has three parts. Acquisition is achieved by one of the communication terminals scanning in a certain pattern so that the beam appears in the detector’s field of view, and this process is relatively simple. Pointing and tracking, which respectively establish and maintain stable communication links, are complex to realize and are the key parts of PAT. Currently, there are two main schemes for underwater optical wireless communication pointing and tracking systems: those based on light intensity [11–13] and those based on image processing [14,15]. Adopting the light intensity-based scheme, most systems achieve pointing of the communication link through redundancy of transmitters and/or receivers, which leads to higher cost and complexity [16]. The image processing-based tracking scheme with a camera expands the search range of a terminal beam sensor, making it more likely to detect laser spots from other terminals. Also, the receiver can locate the beam more accurately with spot images. Consequently, the scheme is more suitable for UOWC systems with narrower and highly directional LD beams. Moreover, the image processing-based tracking scheme is economical and practical for most underwater mobile terminals that are equipped with accessible cameras [17].

To enhance the effectiveness of image processing-based tracking schemes in UOWC systems, extensive research has been conducted on various image tracking algorithms. For an LD communication system, a target tracking algorithm combining Meanshift and the unscented Kalman filter was utilized in underwater laser spot tracking experiments [18]. However, the fixed search window of the Meanshift algorithm renders it susceptible to losing the target spot, particularly in dynamic underwater environments where laser spot conditions change rapidly. To improve the performance of the LD link against external disturbances, a visual tracking method based on the Gaussian mixture model (GMM) algorithm was proposed in [17], whose tracking experiments show that the receiver can achieve pointing during water disturbances. However, it takes a long time to adjust the pose through its motion controller, and it does not take into account similar spot disturbances in the camera’s field of view. Aiming at the problem of video tracking under laser interference, a kernel correlation filtering (KCF)-based high-precision eccentric tracking algorithm was introduced in [19]. By interpolating the KCF response map in the frequency domain, the tracking accuracy is further improved. Most current research focuses on improving spot alignment accuracy, but there are few reports on real-time performance in UOWC. The Camshift algorithm, originating from the early Meanshift algorithm, has been progressively enhanced by researchers, particularly in the fusion of multiple features and the handling of occlusions [20]. Its widespread application in scenarios such as face tracking [21] and unmanned aerial vehicle tracking [22] has marked its evolution. Considering the simplicity and high contrast of underwater laser spots, the Camshift algorithm, which tracks based on color and brightness features, has the potential to be a relatively accessible and highly accurate algorithm for underwater laser spot tracking. However, the proposal and evaluation of an optical tracking algorithm suitable for real-time UOWC systems in complex underwater environments still deserve further research.

In this paper, a novel underwater spot tracking algorithm based on Camshift and the Kalman filter (Cam-Kalm) is proposed and introduced for UOWC systems for the first time. Compared with conventional algorithms, the proposed algorithm leverages the color-based tracking features of the Camshift algorithm and incorporates the predictive capability of the Kalman filter to determine the initial iteration position in the next frame. Simultaneously, the Kalman filter is used to adaptively adjust the search window, effectively mitigating the problem of losing the tracking target spot and enhancing interference resistance. To address the drawback that the conventional Camshift algorithm requires manual selection of the tracking target, the grayscale centroid algorithm is further applied to detect the initial target automatically, ensuring automated acquisition of the tracking target. Subsequently, an ARM-based green LD underwater laser communication experimental system with auto-tracking is constructed for evaluating tracking and communication performance. A detailed performance comparison of the proposed algorithm, Meanshift, Camshift, and a typical detection-based grayscale centroid algorithm is conducted on the built experimental system. The experimental results show that the proposed Cam-Kalm algorithm can effectively improve tracking accuracy, real-time performance, and resistance to similar spot interference.

The rest of the paper is organized as follows. Section 2 introduces the principle of the Cam-Kalm algorithm. Section 3 describes the structure of the underwater laser communication tracking system and the experimental setup. The experimental results and discussions are given in Section 4. Finally, Section 5 draws the conclusions.

2. Tracking algorithm principle

In this section, a novel underwater laser spot tracking algorithm, named the Cam-Kalm algorithm, is presented. The conventional combination of the Camshift and Kalman filter algorithms is designed to use the Kalman estimate directly as the tracking position under target occlusion [20]. Recognizing the unique features of tracking underwater laser target spots, the Kalman filter is introduced here for two purposes. On one hand, it predicts the initial iteration position for the next frame, which reduces the iterative computational load for faster coordination with the tracking platform. On the other hand, it achieves adaptive adjustment of the search window, effectively addressing the issue of losing the tracking target and minimizing interference from surrounding light spots caused by the expansion of the Camshift algorithm’s tracking window. Figure 1 shows the principle block diagram of the proposed Cam-Kalm algorithm. To start tracking automatically, a grayscale centroid algorithm is first used to detect the spot. Then, the Cam-Kalm algorithm is initialized with the detected spot. The detailed steps of the algorithm are as follows:

Fig. 1. Principle block diagram of the Cam-Kalm algorithm.

Step 1: Initialize.

1) Obtaining the region of interest (ROI) of the target spot using the grayscale centroid detection algorithm. The result is used to initialize the search window (width $w$ and height $l$).

2) Initializing the Kalman filter. The state equation of the linear system is

$$\boldsymbol{X}_{t_k}=\boldsymbol{A}\boldsymbol{X}_{t_{k-1}}+\boldsymbol{V}_{t_{k}},$$
and observation equation is
$$\boldsymbol{Y}_{t_k}=\boldsymbol{H}\boldsymbol{X}_{t_k}+\boldsymbol{W}_{t_{k}},$$
where $\boldsymbol {X}_{t_k}$ and $\boldsymbol {X}_{t_{k-1}}$ denote the motion state vectors at the moments $t_k$ and $t_{k-1}$, respectively [16], with $t_k$ and $t_{k-1}$ denoting the moments at which processing of the $k$-th and $(k-1)$-th frame images begins. $\boldsymbol {Y}_{t_k}$ is the observation vector, $\boldsymbol {A}$ is the state transition matrix, and $\boldsymbol {H}$ is the observation matrix. $\boldsymbol {V}_{t_k}$ and $\boldsymbol {W}_{t_k}$ represent the Gaussian-distributed system state noise and observation noise, respectively.
$$\left\{\begin{array}{l} \boldsymbol{A}=\left[\begin{array}{cccc} 1 & 0 & \Delta t & 0 \\ 0 & 1 & 0 & \Delta t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right] \\ \boldsymbol{H}=\left[\begin{array}{llll} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{array}\right] \end{array}\right.,$$
where $\Delta t=t_k-t_{k-1}$ is the time interval between two adjacent frames. The corresponding covariance matrices $\boldsymbol {Q}$ and $\boldsymbol {R}$ of $\boldsymbol {V}_{t_{k}}$ and $\boldsymbol {W}_{t_k}$ are
$$\left\{\begin{array}{c} \boldsymbol{Q}=\left[\begin{array}{cccc} 0.1 & 0 & 0 & 0 \\ 0 & 0.1 & 0 & 0 \\ 0 & 0 & 0.1 & 0 \\ 0 & 0 & 0 & 0.1 \end{array}\right] \\ \boldsymbol{R}=\left[\begin{array}{cc} 0.01 & 0 \\ 0 & 0.01 \end{array}\right] \end{array}\right..$$
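To make Step 1 concrete, a minimal Python/OpenCV sketch of the initialization is given below. The brightness threshold, half-window size, frame interval, and initial covariance are illustrative assumptions rather than values from the experiment; `cv2.KalmanFilter` is configured with the matrices of Eqs. (1)–(4).

```python
import cv2
import numpy as np

def detect_initial_roi(frame_bgr, thresh=200, half_size=30):
    # Grayscale-centroid detection of the target spot (Step 1.1).
    # The threshold and half-window size are illustrative assumptions.
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gray[gray < thresh] = 0.0                      # suppress background pixels
    m00 = gray.sum()
    if m00 == 0:
        return None                                # no bright spot detected
    ys, xs = np.indices(gray.shape)
    xc = float((xs * gray).sum() / m00)            # grayscale centroid (x)
    yc = float((ys * gray).sum() / m00)            # grayscale centroid (y)
    x0, y0 = int(xc - half_size), int(yc - half_size)
    return max(x0, 0), max(y0, 0), 2 * half_size, 2 * half_size   # (x, y, w, l)

def init_kalman(xc, yc, dt=1.0):
    # Constant-velocity Kalman filter with the A, H, Q, R of Eqs. (1)-(4).
    kf = cv2.KalmanFilter(4, 2)                    # state [x, y, vx, vy], measurement [x, y]
    kf.transitionMatrix = np.array([[1, 0, dt, 0],
                                    [0, 1, 0, dt],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)    # A
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)   # H
    kf.processNoiseCov = 0.1 * np.eye(4, dtype=np.float32)        # Q
    kf.measurementNoiseCov = 0.01 * np.eye(2, dtype=np.float32)   # R
    kf.statePost = np.array([[xc], [yc], [0], [0]], np.float32)   # initial state at the ROI center
    kf.statePre = kf.statePost.copy()
    kf.errorCovPost = np.eye(4, dtype=np.float32)                 # initial P (assumed)
    kf.errorCovPre = kf.errorCovPost.copy()
    return kf
```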

Step 2: Calculate search window.

1) Reading the $k$-th frame image, converting it from the RGB to the HSV color space, and extracting the Hue component of the image [20].

2) Calculating the color histogram of this search region and normalizing it to obtain the probability distribution map $I_k(x,y)$, which is used as a probability lookup table. On the probability projection map, the zeroth- and first-order moments of the tracking window are

$$\left\{\begin{array}{l} M_{10, k}=\sum_{x} \sum_{y} x I_k(x, y) \\ M_{01, k}=\sum_{x} \sum_{y} y I_k(x, y) \\ M_{00, k}=\sum_{x} \sum_{y} I_k(x, y) \end{array}\right..$$

Then, the center-of-mass position $(x_c,y_c)$ can be obtained by

$$\left(x_c, y_c\right)=\left(\frac{M_{10, k}}{M_{00, k}}, \frac{M_{01, k}}{M_{00, k}}\right).$$

3) Moving the center of the search window $(x_k,y_k)$ to the center-of-mass position $(x_c,y_c)$.

4) Determining whether the center position has converged; if not, return to 2) and continue to calculate the center-of-mass position for the new search window. When second-order moments are used to find the image center of mass, the original image is equivalent to an ellipse of determined size, direction, and eccentricity, centered at the image center of mass and with constant radiance. The orientation angle of the ellipse is expressed by Eq. (7) [23].

$$\theta = \frac{1}{2}{\arctan \left( {\frac{{2\left( {\frac{{{M_{11,k}}}}{{{M_{00,k}}}} - {x_c}{y_c}} \right)}}{{\left( {\frac{{{M_{20,k}}}}{{{M_{00,k}}}} - x_c^2} \right) - \left( {\frac{{{M_{02,k}}}}{{{M_{00,k}}}} - y_c^2} \right)}}} \right)},$$
where $M_{20,k}$, $M_{02,k}$ and $M_{11,k}$ are second-order moments:
$$\left\{ {\begin{array}{c} {{M_{20,k}} = \mathop \sum _x \mathop \sum _y {x^2}{I_k}(x,y)}\\ {{M_{02,k}} = \mathop \sum _x \mathop \sum _y {y^2}{I_k}(x,y)}\\ {{M_{11,k}} = \mathop \sum _x \mathop \sum _y xy{I_k}(x,y)} \end{array}} \right..$$

Denoting $a=\frac {{{M_{20,k}}}}{{{M_{00,k}}}}-x_c^2$, $b=\frac {{{M_{11,k}}}}{{{M_{00,k}}}}-x_cy_c$, $c=\frac {{{M_{02,k}}}}{{{M_{00,k}}}}-y_c^2$, the new search window size is obtained according to

$$\left\{\begin{array}{l} w=\sqrt{\frac{(a+c)-\sqrt{b^2+(a-c)^2}}{2}} \\ l=\sqrt{\frac{(a+c)+\sqrt{b^2+(a-c)^2}}{2}} \end{array}\right..$$

Successive iterations are performed until the central position converges. The center of mass of the target spot is the iteration result $(x_k,y_k)$, which is used as $(x_{t_k},y_{t_k})$ to update and predict the Kalman filter at the moment $t_k$.
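As one concrete realization of Step 2, OpenCV’s built-in Camshift can perform the back-projection and moment-based window update described above. The sketch below assumes that the Hue histogram `roi_hist` is built once from the initial ROI of Step 1; the termination criteria are illustrative values, not settings from the paper.

```python
import cv2
import numpy as np

def build_roi_hist(frame_bgr, roi):
    # Normalized Hue histogram of the detected spot, used as the lookup
    # table for the probability (back-projection) map I_k(x, y).
    x, y, w, l = roi
    hsv_roi = cv2.cvtColor(frame_bgr[y:y + l, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
    return cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

def camshift_step(frame_bgr, roi_hist, window):
    # One Step-2 pass: back-project the Hue histogram (probabilities of Eq. (5)),
    # iterate to the center of mass (Eq. (6)), and let Camshift resize and
    # orient the window from the second-order moments (Eqs. (7)-(9)).
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    rotated_rect, window = cv2.CamShift(back_proj, window, criteria)
    (xk, yk) = rotated_rect[0]                     # converged spot center (x_k, y_k)
    return (xk, yk), window
```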

Step 3: Kalman filter update and prediction.

Assuming that the system is in uniform motion, the state vector of the spot at the moment $t_k$ is ${{\boldsymbol {X}}_{{t_k}}} = {\left [ {{x_{{t_k}}},{y_{{t_k}}},v{x_{{t_k}}},v{y_{{t_k}}}} \right ]^T}$ and the observation vector is ${{\boldsymbol {Y}}_{{t_k}}} = {[{x_{{t_k}}},{y_{{t_k}}}]^T}$. The velocities in the $x$-axis and $y$-axis directions are $vx_{t_k}$ and $vy_{t_k}$, respectively. The relationships between the variables are

$$\left\{\begin{array}{l} x_{t_k}=x_{t_{k-1}}+v x_{t_k} \Delta t \\ y_{t_k}=y_{t_{k-1}}+v y_{t_k} \Delta t \end{array}\right..$$

According to Eq. (1), the motion state equation of the system is

$$\left[ {\begin{array}{c} {{x_{{t_k}}}}\\ {{y_{{t_k}}}}\\ {v{x_{{t_k}}}}\\ {v{y_{{t_k}}}} \end{array}} \right] = \left[ {\begin{array}{cccc} 1 & 0 & {\Delta t} & 0\\ 0 & 1 & 0 & {\Delta t}\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{array}} \right]\left[ {\begin{array}{c} {{x_{{t_{k - 1}}}}}\\ {{y_{{t_{k - 1}}}}}\\ {v{x_{{t_{k - 1}}}}}\\ {v{y_{{t_{k - 1}}}}} \end{array}} \right] + {{\boldsymbol{V}}_{{t_k}}}.$$

According to Eq. (2), the motion observation equation of the system is

$$\left[ {\begin{array}{c} {{x_{{t_k}}}}\\ {{y_{{t_k}}}} \end{array}} \right] = \left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{array} \right]\left[ {\begin{array}{c} {{x_{{t_k}}}}\\ {{y_{{t_k}}}}\\ {v{x_{{t_k}}}}\\ {v{y_{{t_k}}}} \end{array}} \right] + {{\boldsymbol{W}}_{{t_k}}}.$$

Then, the obtained measurement is used to update and predict the Kalman filter as follows [24,25]:

(i) Measurement update:

$$\begin{aligned} & \left\{\begin{array}{l} \boldsymbol{K}_{t_k}^{\prime}=\boldsymbol{P}_{t_k} \boldsymbol{H}^T\left(\boldsymbol{H} \boldsymbol{P}_{t_k} \boldsymbol{H}^T+\boldsymbol{R}\right)^{{-}1} \\ \boldsymbol{X}_{t_k}^{\prime}=\boldsymbol{X}_{t_k}+\boldsymbol{K}_{t_k}^{\prime}\left(\boldsymbol{Y}_{t_k}-\boldsymbol{H} \boldsymbol{X}_{t_k}\right)\\ \boldsymbol{P}_{t_k}^{\prime}=\left(\boldsymbol{I}-\boldsymbol{K}_{t_k}^{\prime} \boldsymbol{H}\right) \boldsymbol{P}_{t_k} \end{array}\right., \end{aligned}$$
where $\boldsymbol {K}_{t_k}^{\prime }$ denotes the Kalman gain, and $\boldsymbol {I}$ is the identity matrix. $\boldsymbol {P}_{t_k}$ is the covariance matrix of $\boldsymbol {X}_{t_k}$, and $\boldsymbol {X}_{t_k}^{\prime }$ and $\boldsymbol {P}_{t_k}^{\prime }$ are the corrections of $\boldsymbol {X}_{t_k}$ and $\boldsymbol {P}_{t_k}$, respectively.

(ii) Prediction:

$$\left\{\begin{array}{c} \boldsymbol{X}_{t_{k+1}}=\boldsymbol{A} \boldsymbol{X}_{t_k}^{\prime} \\ \boldsymbol{P}_{t_{k+1}}=\boldsymbol{A} \boldsymbol{P}_{t_k}^{\prime} \boldsymbol{A}^T+\boldsymbol{Q} \end{array}\right.,$$
where $\boldsymbol {X}_{t_{k+1}}$ and $\boldsymbol {P}_{t_{k+1}}$ denote the predicted outputs.
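A minimal sketch of Step 3 using the `correct()` and `predict()` methods of `cv2.KalmanFilter` is shown below; it assumes `kf` was initialized as in the Step-1 sketch and that `(xk, yk)` is the converged center from Step 2.

```python
import numpy as np

def kalman_update_predict(kf, xk, yk):
    # Step 3: measurement update (Eq. (13)) followed by prediction (Eq. (14)),
    # using the cv2.KalmanFilter instance created in Step 1.
    measurement = np.array([[xk], [yk]], np.float32)   # Y_{t_k}
    kf.correct(measurement)       # Kalman gain, corrected state X'_{t_k} and covariance P'_{t_k}
    prediction = kf.predict()     # X_{t_{k+1}} and P_{t_{k+1}} for the next frame
    return float(prediction[0, 0]), float(prediction[1, 0])   # predicted spot center
```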

Step 4: Repeat Steps 2 and 3.

Return to Step 2, with the prediction $\boldsymbol {X}_{t_{k+1}}$ from Step 3 used as the center of the search window for the $(k+1)$-th frame.
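For reference, a sketch of the complete tracking loop built from the illustrative helpers above (detect_initial_roi, init_kalman, build_roi_hist, camshift_step, kalman_update_predict) is given below; error handling and the serial output to the platform controller are omitted.

```python
import cv2

def cam_kalm_track(video_path):
    # End-to-end sketch of the Cam-Kalm loop (Steps 1-4).
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    roi = detect_initial_roi(frame)                    # Step 1: grayscale centroid detection
    x, y, w, l = roi
    kf = init_kalman(x + w / 2.0, y + l / 2.0)         # Step 1: Kalman filter initialization
    roi_hist = build_roi_hist(frame, roi)
    window = roi
    centers = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        (xk, yk), window = camshift_step(frame, roi_hist, window)   # Step 2
        x_pred, y_pred = kalman_update_predict(kf, xk, yk)          # Step 3
        # Step 4: the predicted center becomes the search-window center
        # for the next frame, keeping the adaptively sized window.
        window = (max(int(x_pred - window[2] / 2), 0),
                  max(int(y_pred - window[3] / 2), 0),
                  window[2], window[3])
        centers.append((xk, yk))
    cap.release()
    return centers
```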

3. System design and experiment setup

3.1 System design

Figure 2(a) presents the design of the underwater laser communication system with auto-tracking, which consists of three parts: a transmitter, an underwater optical channel, and a receiver. To reduce system complexity, the LD light source in the transmitter emits optical beams carrying single-carrier on-off keying (OOK) signals. A collimating lens is adopted to reduce the divergence angle of the optical beam. An amplifier regulates the amplitude of the signals to improve the signal-to-noise ratio. After passing through the underwater optical channel, a laser spot forms on the water tank wall. A target plane is placed on the wall of the tank to deal with the images of the laser source reflected from the acrylic wall. In the receiver, the laser spot is acquired by a charge-coupled device (CCD) camera mounted on a two-dimensional (2D) platform, and the spot is detected and its measured location obtained in real time using the tracking algorithms. The 2D platform is fixed on an electric moving stage with adjustable speed, which is used to evaluate the real-time performance of the algorithm in dynamic cases. The tracking video and spot coordinates are displayed in real time on a screen. At the same time, the off-target amount is transmitted to an ARM-based signal processing system via serial communication. The 2D platform is driven by a pulse-width-modulation (PWM) value output by a proportional-integral-derivative (PID) controller to keep the receiver aligned with the transmitter. Throughout the tracking process, the photodetector (PD) receives the signal, which is then transmitted to a bit error ratio tester (BERT) and a mixed-signal oscilloscope (MSO) for subsequent signal analysis.
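As an illustration of how the off-target amount can be turned into a PWM command, a minimal discrete PID sketch for one axis is given below. The gains and PWM limit are placeholder values, not the controller parameters used in the experiment, where the actual controller runs on the ARM-based signal processing system.

```python
class PIDController:
    # Minimal discrete PID sketch mapping the off-target error (pixels)
    # to a PWM command for one axis of the 2D platform.
    def __init__(self, kp=0.8, ki=0.05, kd=0.1, pwm_limit=1000):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.pwm_limit = pwm_limit
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        # error: off-target amount along one axis; dt: frame interval in seconds.
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt if dt > 0 else 0.0
        self.prev_error = error
        pwm = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(-self.pwm_limit, min(self.pwm_limit, pwm))   # clamp PWM output
```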

Fig. 2. (a) Block diagram and (b) experimental setup of the underwater laser communication system.

3.2 Experimental setup

Figure 2(b) presents the experimental setup of the underwater laser communication system. The transmitting light source is an Osram LD with a central wavelength of 520 nm and an optical power of about 50 mW. The transmitted OOK signals at a rate of 30 Mbit/s are generated by the BERT and then amplified by amplifiers (ZHL-6A-S+). The signal is superimposed onto a direct-current (DC) supply through a Bias-Tee (ZFBT-6GW+), whose output is directly connected to the LD as the information light source. The underwater environment for the experiment is a 200$\times$35$\times$40 cm acrylic water tank. At the receiver, a PD (Hamamatsu S6801) is used to detect the optical signals and a CCD camera (CGU3-500C) is used to acquire the laser spot. The PD is held slightly below the center of the camera lens, so as to avoid obstructing the camera’s field of view while ensuring optimal reception of the light signal by the PD. The received signals are recorded by an MSO (Tektronix 70604C) and the BERT for signal processing. All the experiments are conducted under normal ambient light (~200 lux).

4. Experiment results and discussions

In this section, the tracking and communication performance of the experimental underwater laser communication system is evaluated and analyzed when the proposed Cam-Kalm algorithm, the Meanshift algorithm [26], the Camshift algorithm [23], and the grayscale centroid algorithm [27,28] are adopted for auto-tracking, respectively.

4.1 Tracking accuracy

The ground truth of the spot center in each video frame is labeled manually and used as the reference for the tracking error calculations of the different algorithms. The center location error (CLE) is calculated as the Euclidean distance between the tracked measurement and the ground truth, and is a performance metric widely used to evaluate tracking accuracy [23]. In the experiment, the four algorithms are adopted to track a 160-frame video of underwater laser spot motion. Figure 3 plots the CLE of each frame for the different algorithms. The closer the CLE curve is to the $x$-axis, the closer the error of the algorithm is to zero, indicating better tracking performance. It can be noticed from Fig. 3 that the Meanshift algorithm has the largest CLE due to the loss of the target spot between the 67th and 113th frames. The window size of the Meanshift algorithm is fixed, so tracking fails when the spot scale changes. The CLE of the Camshift algorithm is lower than that of the Meanshift algorithm. The Camshift algorithm also has a large CLE between the 67th and 82nd frames, but it rapidly resumes tracking, resulting in fewer lost frames than the Meanshift algorithm. The tracking performance of Camshift without the incorporated Kalman filter is still poor because it only considers the color histogram and ignores the spatial distribution features of the laser spots. Overall, the tracking of these two algorithms is unstable, especially when the spot conditions change rapidly. The grayscale centroid algorithm and the Cam-Kalm algorithm present high tracking accuracy with an average CLE of less than 20 pixels. Moreover, the small fluctuations of the blue curve imply that the Cam-Kalm algorithm tracks relatively stably. The grayscale centroid algorithm is sensitive to the brightness of the moving underwater spot because of its fixed threshold, resulting in misjudgments of the edge region, which causes the curve fluctuations. The Cam-Kalm algorithm and the grayscale centroid algorithm, which have the better CLE performance, are used for the subsequent comparison and analysis.
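For reference, a short sketch of how this metric can be computed is given below; the array names are hypothetical.

```python
import numpy as np

def center_location_error(tracked, ground_truth):
    # CLE per frame: Euclidean distance between the tracked centers and the
    # manually labeled ground-truth centers, both given as (N, 2) arrays.
    tracked = np.asarray(tracked, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    return np.linalg.norm(tracked - ground_truth, axis=1)

# Example (hypothetical arrays): mean CLE and the fraction of frames whose
# CLE stays within the 20-pixel threshold used later in Section 4.4.
# cle = center_location_error(tracked_centers, gt_centers)
# print(cle.mean(), (cle <= 20).mean())
```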

Fig. 3. Comparison of center location error with frame number using different algorithms.

4.2 Interference resistance

In order to simulate an interfering spot generated in a real environment by reflection or other factors, a laser beam is manually added to form an interference spot. Figure 4 shows the BER variation over 0–120 s, where a similar interfering laser spot is added after 30 s. As shown in Fig. 4, the grayscale centroid algorithm tracks both the target spot and the interfering spot, treating their average coordinates as the tracking window center. Because the brightness of the two spots is indistinguishable, when either spot moves, the center shifts, disrupting the pointing of the signal beam and causing a rapid increase in the system BER. In contrast, the BER results represented by the blue curve in Fig. 4 remain essentially unchanged under similar spot interference. The Cam-Kalm algorithm performs prediction before searching to obtain the optimal location in the search window, and thus maintains stable and effective tracking. This indicates that the Cam-Kalm algorithm has better performance in resisting interference from similar spots.

Fig. 4. Variation of system BER with time when similar spot interference is added.

4.3 Response time

To reduce system power consumption and improve link stability, a shorter link pointing time is preferable [9], which is related to the response time. The response time can be measured as the time taken to move the spot from different positions to the center of the field of view. To avoid chance results, the response time evaluation is conducted three times. Figure 5 shows the center error curves versus response time for the three groups of experiments with the different algorithms. The vertical axis represents the Euclidean distance between the center of the spot and the center of the field of view during the spot motion; a value of zero indicates that the spot has moved to the center of the field of view. It can be seen from the curves that the response time of the grayscale centroid algorithm is around 0.8 s, while that of the Cam-Kalm algorithm is around 0.1 s. The reason is that the Cam-Kalm algorithm only requires several iterative calculations over the pixels in the search window, while the grayscale centroid algorithm processes the entire image pixel by pixel, which greatly increases the response time of the system. In addition, the incorporated Kalman filter can accurately predict the center of the spot in the next frame, contributing to time savings during the iterations. Therefore, laser link pointing can be achieved faster using the Cam-Kalm algorithm.

Fig. 5. Variation of spot center error with response time during spot motion.

4.4 Tracking effect of different states of laser spots

To better suit complex underwater environments, the performance of the Cam-Kalm algorithm and the grayscale centroid algorithm in tracking spots in different states is further investigated.

1) Spot appearance variation

The accuracy of the proposed algorithm is investigated by tracking spots with varying appearance. In the experiment, the receiver is moved to change the spot appearance, which simulates changes in the water environment or different CCD viewing angles in real application scenarios. The tracking results are recorded while the spot appearance keeps changing in the camera’s field of view. Figures 6(a) and (b) show the initial tracking spot appearance, while Figs. 6(c)–(f) present the tracking results after the spot appearance is changed. It can be seen that both the grayscale centroid algorithm and the Cam-Kalm algorithm are able to track the spot accurately when the spot appearance keeps changing. Because the grayscale centroid algorithm performs pixel-by-pixel detection, every pixel within its detection threshold can be detected, so the spot position is accurately detected in real time regardless of any change in the spot’s appearance. The Cam-Kalm algorithm achieves accurate tracking by adaptively changing the search window to match the appearance of the spot. Besides, as depicted in Fig. 6, the Cam-Kalm algorithm frames a more complete spot during tracking. Due to its higher sensitivity to spot pixels, the grayscale centroid algorithm exhibits a higher false-detection rate at the spot edges, resulting in the framing of incomplete spots.

Fig. 6. Tracking results of spot appearance variation for Cam-Kalm and grayscale centroid algorithms.

2) Spot brightness variation

To better evaluate the adaptability of the tracking algorithms to changes in spot brightness, the brightness of the spot is gradually changed by adjusting the DC supply voltage from 5.5 V to 3.0 V. The relationship between the optical power at the receiver and the laser voltage is shown in Table 1. At DC voltages of 5.5 V, 5.0 V and 4.5 V, the tracking results of the two algorithms for spots of different brightness are presented in Figs. 7(a)–(f). The tracking results reveal that the grayscale centroid algorithm can no longer detect the spot at the brightness corresponding to 4.5 V, as shown in Fig. 7(f), while the Cam-Kalm algorithm can still track the spot accurately. The reason is that the proposed Cam-Kalm algorithm utilizes the information between frames and the predictive capability of the Kalman filter, whereas the grayscale centroid algorithm detects laser spots based on brightness differences between regions in the image, so its threshold must be chosen to distinguish the background from the target spot. Additionally, as the DC voltage decreases further, the Cam-Kalm algorithm also eventually fails to track, and the tracking results are given in Figs. 8(a)–(c). As shown in Fig. 8(a), when the DC voltage is less than 3.2 V, the spot almost disappears from the field of view, resulting in tracking failure.

Fig. 7. Tracking results of spot brightness variation for Cam-Kalm and grayscale centroid algorithms.

Fig. 8. Tracking results of the Cam-Kalm algorithm when DC is below 3.2 V.

Table 1. DC corresponding to the optical power at the receiver.

3) Spot scintillation variation

In the experiment, two bubble pumps are installed to simulate the effect of underwater bubbles on the optical path, each generating bubbles at a uniform rate of 2.0 L/min. The general threshold for the tracking field is set to 20 pixels. Valid detection frames are defined as those frames in which the spot is tracked and the CLE of the tracking does not exceed 20 pixels. Table 2 shows that the grayscale centroid algorithm validly detects only 72 of the 312 tracked frames, a frame success rate of 23.1%, while the Cam-Kalm algorithm validly detects 486 frames, a frame success rate of 100%. The main reason is that the Cam-Kalm algorithm has predictive capability, whereas the grayscale centroid algorithm relies on the brightness threshold to detect the spot and has difficulty detecting it when the bubbles cause the spot to scintillate. These results indicate that the grayscale centroid algorithm is less accurate than the proposed Cam-Kalm algorithm in tracking a scintillating spot, which is consistent with the brightness-variation results in the previous part.

Table 2. Tracking results of laser spot scintillation variation from bubbles in water for Cam-Kalm and grayscale centroid algorithms.

4.5 Communication performance

Furthermore, the real-time communication performance of the system using the Cam-Kalm algorithm and the grayscale centroid algorithm in a dynamic case is investigated, where the receiver moves horizontally at speeds ranging from 8 to 22 mm/s. In this experiment, the DC supply is set to 5.5 V and kept unchanged. Reliable communication at a data rate of 30 Mbit/s and a BER as low as $1.13\times 10^{-6}$ is achieved at an underwater communication distance of 2 m. The tracking speed of these two algorithms is only constrained by their processing efficiency in this experiment, with higher algorithm efficiency corresponding to higher tracking speeds. Figure 9 gives the system BER performance of the two algorithms for speeds from 8 to 22 mm/s. The forward-error-correction (FEC) threshold of $3.8\times 10^{-3}$ is shown by a horizontal line. When the moving speed exceeds 20 mm/s, the tracking of the grayscale centroid algorithm is very poor and the BERT falls out of synchronization; therefore, only the system BER of the Cam-Kalm algorithm is measured at 22 mm/s. It can be seen that the grayscale centroid algorithm cannot satisfy the FEC limit and maintain communication while tracking when the speed exceeds 12 mm/s, due to its slow frame processing speed. In contrast, the Cam-Kalm algorithm can track stably in real time for moving speeds up to 20 mm/s. As a result, the system using the Cam-Kalm algorithm enables higher-speed tracking and more reliable communication in underwater environments. In future work, the optical tracking algorithm will be further studied and improved, taking into account various light sources, detectors, and other relevant factors. It will be applied in more complex and longer underwater channels [29,30] to enable a more effective evaluation of tracking and communication performance.

Fig. 9. System BER performance influenced by the varying speeds for Cam-Kalm and grayscale centroid algorithms.

5. Conclusion

In this paper, the Cam-Kalm tracking algorithm is proposed and evaluated in an experimental underwater laser communication system. The experimental results demonstrate that the Cam-Kalm algorithm achieves higher accuracy, interference resistance, and real-time performance in the practical UOWC system. With a single LD and a single PD, the system can track and maintain stable real-time communication at tracking speeds up to 20 mm/s while satisfying the forward error correction threshold. The image processing-based tracking scheme dynamically adjusts the pointing angle between the transmitter and receiver by tracking the laser spot motion, enabling alignment and tracking of the laser transmitter and receiver. The communication performance of the system is enhanced effectively by the improvement of the tracking algorithm. It is indicated that the proposed tracking algorithm can be considered a potential and suitable candidate for stable and reliable laser communication in underwater environments with bubbles and waves.

Funding

National Natural Science Foundation of China (62374015); Basic and Applied Basic Research Foundation of Guangdong Province (2021B1515120086, 2022A1515110154, 2022A1515110770); Fundamental Research Funds for the Central Universities (FRF-IDRY-21-001, FRF-IDRY-22-019, FRF-TP-22-044A1).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. S. Ishibashi and K.-I. Susuki, “1Gbps × 100m underwater optical wireless communication using laser module in deep sea,” in OCEANS 2022 (IEEE, 2022), pp. 1–7.

2. P. B. Solanki, S. D. Bopardikar, and X. Tan, “Active alignment control-based led communication for underwater robots,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IEEE, 2020), pp. 1692–1698.

3. Y. Weng, T. Matsuda, Y. Sekimori, et al., “Pointing error control of underwater wireless optical communication on mobile platform,” IEEE Photonics Technol. Lett. 34(13), 699–702 (2022). [CrossRef]  

4. Y. Di, Y. Shao, and L.-K. Chen, “Real-time wave mitigation for water-air OWC systems via beam tracking,” IEEE Photonics Technol. Lett. 34(1), 47–50 (2021). [CrossRef]  

5. X. Ke and H. Liang, “Airborne laser communication system with automated tracking,” Int. J. Opt. 2021, 1–8 (2021). [CrossRef]  

6. F. Liu, W. Jiang, X. Jin, et al., “A simple and effective tracking scheme for visible light communication systems,” in 17th International Conference on Optical Communications and Networks, vol. 11048 (SPIE, 2019), pp. 289–293.

7. S. A. Hamilton, C. E. DeVoe, A. S. Fletcher, et al., “Undersea narrow-beam optical communications field demonstration,” in Ocean Sensing and Monitoring IX, vol. 10186 (SPIE, 2017), pp. 7–22.

8. N. D. Hardy, H. G. Rao, S. D. Conrad, et al., “Demonstration of vehicle-to-vehicle optical pointing, acquisition, and tracking for undersea laser communications,” in Free-Space Laser Communications XXXI, vol. 10910 (SPIE, 2019), pp. 205–214.

9. A. J. Williams, L. L. Laycock, M. S. Griffith, et al., “Acquisition and tracking for underwater optical communications,” in Advanced Free-Space Optical Communication Techniques and Applications III, vol. 10437 (SPIE, 2017), pp. 44–54.

10. J. Lin, Z. Du, C. Yu, et al., “Machine-vision-based acquisition, pointing, and tracking system for underwater wireless optical communications,” Chin. Opt. Lett. 19(5), 050604 (2021). [CrossRef]  

11. M. Al-Rubaiai and X. Tan, “Design and development of an LED-based optical communication system with active alignment control,” in International Conference on Advanced Intelligent Mechatronics (IEEE, 2016), pp. 160–165.

12. B. Fahs, M. Romanowicz, J. Kim, et al., “A self-alignment system for LOS optical wireless communication links,” IEEE Photonics Technol. Lett. 29(24), 2207–2210 (2017). [CrossRef]  

13. S. Xie, H. Mi, and R. Feng, “Spot alignment based on a five-photodiode receiver for a UWOC system,” Appl. Opt. 61(22), G1–G8 (2022). [CrossRef]  

14. S. Arai, T. Yamazato, H. Okada, et al., “LED acquisition methods for image-sensor-based visible light communication,” in Opto-Electronics and Communications Conference (IEEE, 2015), pp. 1–3.

15. Z. Qiu, L. Lin, and L. Chen, “An active method to improve the measurement accuracy of four-quadrant detector,” Opt. Lasers Eng. 146, 106718 (2021). [CrossRef]  

16. P. B. Solanki, M. Al-Rubaiai, and X. Tan, “Extended Kalman filter-based active alignment control for LED optical communication,” IEEE/ASME Trans. Mechatron. 23(4), 1501–1511 (2018). [CrossRef]  

17. J. Tang, R. Jiang, Z. Chen, et al., “Monocular vision aided optical tracking for underwater optical wireless communications,” Opt. Express 30(9), 14737–14747 (2022). [CrossRef]  

18. F. He, S. Li, Y. Yang, and J. Zhang, “Receiver alignment system based on underwater spot tracking,” Acta Photonica Sinica 49, 11–18 (2020). [CrossRef]

19. X. Zhang, Y. Tian, J. Pan, et al., “A high-precision eccentric tracking algorithm of laser interfering targets based on Kernel Correlation filtering,” in 3rd International Conference on Image, Vision and Computing (IEEE, 2018), pp. 637–641.

20. Y. Zhang, “Detection and tracking of human motion targets in video images based on Camshift algorithms,” IEEE Sens. J. 20(20), 11887–11893 (2019). [CrossRef]  

21. R. Bankar and S. Salankar, “Improvement of head gesture recognition using camshift based face tracking with UKF,” in 9th International Conference on Emerging Trends in Engineering and Technology-Signal and Information Processing (IEEE, 2019), pp. 1–5.

22. J. Zhang, Y. Zhang, S. Shen, et al., “Research and application of police UAVs target tracking based on improved camshift algorithm,” in 3rd International Conference on Electronic Information Technology and Computer Engineering (2019), pp. 1238–1242.

23. W. Guan, Z. Liu, S. Wen, et al., “Visible light dynamic positioning method using improved Camshift-Kalman algorithm,” IEEE Photonics J. 11(6), 1–22 (2019). [CrossRef]  

24. L. Hongmei, H. Lin, Z. Ruiqiang, et al., “Object tracking in video sequence based on Kalman filter,” in International Conference on Computer Engineering and Intelligent Control (IEEE, 2020), pp. 106–110.

25. P. S. Madhukar and L. Prasad, “State estimation using extended kalman filter and unscented kalman filter,” in International Conference on Emerging Trends in Communication, Control and Computing (IEEE, 2020), pp. 1–4.

26. Z. Zheng, H. Yin, J. Wang, et al., “A laser spot tracking algorithm for underwater wireless optical communication based on image processing,” in 13th International Conference on Communication Software and Networks (IEEE, 2021), pp. 192–198.

27. J. Li, F. Xue, H. Zhang, et al., “Centroid and grayscale reconstruction algorithm for infrared point targets,” in AOPC 2022: Optical Sensing, Imaging, and Display Technology, vol. 12557 (SPIE, 2023), pp. 472–477.

28. T. Sun, F. Xing, J. Bao, et al., “Centroid determination based on energy flow information for moving dim point targets,” Acta Astronaut. 192, 424–433 (2022). [CrossRef]  

29. X. Zhang, S. Li, W. Liu, et al., “Positioning of turbulence-distorted laser spot for underwater optical wireless communication,” in 12th International Symposium on Communication Systems, Networks and Digital Signal Processing (IEEE, 2020), pp. 1–6.

30. W. Liu, Y. Jiang, N. Huang, et al., “Distorted laser spot center positioning based on captured image under laser-camera misalignment in UOWC,” IEEE/ASME Trans. Mechatron. 15(3), 1–6 (2023). [CrossRef]  

