Optica Publishing Group

CSI-based sliding window fingerprinting method tailored for a signal blocking environment in VLP systems

Open Access

Abstract

In visible light indoor positioning systems, the localization performance of the received signal strength (RSS)-based fingerprinting algorithm drops dramatically when the line-of-sight (LOS) signal is occluded by randomly moving people or objects. In this work, a sliding window fingerprinting (SWF) algorithm based on channel state information (CSI) is put forward to enhance the accuracy and robustness of indoor positioning. The core idea behind SWF is to combine CSI with sliding matching: a sliding window matches the received CSI against the fingerprints in the database twice to obtain the optimal matching value and to reduce the interference caused by the loss of the LOS signal. On this basis, to reflect the different contributions of the various paths in the CSI to the matching-value calculation, a weighted sliding window fingerprinting (W-SWF) algorithm is also proposed to further improve the accuracy of fingerprint matching. A 4 m × 4 m × 3 m indoor multipath scene with four LEDs is established to evaluate the positioning performance. The simulation results reveal that the mean errors of the proposed method are 0.20 cm and 1.43 cm when the LOS signal of 1 or 2 LEDs is blocked, respectively. Compared with the traditional RSS algorithm, the weighted k-nearest neighbor (WKNN) algorithm, and the adaptive residual weighted k-nearest neighbor (ARWKNN) algorithm, the SWF algorithm achieves over 90% improvement in terms of mean error and root mean square error (RMSE).

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

The Internet of Things (IoT) has become increasingly associated with intelligence in recent years, and the need for indoor positioning systems in IoT applications is stronger than ever [1]. As the most well-known positioning service, the Global Positioning System is often unavailable in indoor environments, such as intelligent plants or underground garages, due to wall shielding [2,3]. A number of indoor positioning approaches have been developed, such as Bluetooth, Wi-Fi, and radio frequency identification [4–6]. Limited by multipath signal propagation, low security, and crowded spectrum resources, these technologies cannot be widely deployed for commercial purposes. Visible light positioning (VLP) technology, in contrast to the positioning approaches mentioned above, has gained popularity due to its low cost, high security, license-free spectrum, and immunity to electromagnetic interference [7]. To date, many different VLP techniques have been studied extensively, such as time of arrival (TOA) [8], angle of arrival (AOA) [9], time difference of arrival (TDOA) [10], and received signal strength (RSS). Regarding the complexity and cost of positioning system infrastructure, photodiode (PD)-based indoor positioning using RSS is the preferred choice among these technologies owing to its low complexity and low cost.

Trilateration and fingerprinting are two common VLP methods. Combined with optimization algorithms such as non-linear optimization [11], an improved adaptive cuckoo search algorithm [12], and an improved whale optimization algorithm [13], the trilateration method can achieve good positioning performance. Most trilateration methods, however, do not eliminate interference from the reflection paths, resulting in poor performance in corner regions. The RSS-based fingerprinting method establishes the fingerprint database in the offline stage; in the online stage, the target point is located by matching the received RSS with the offline fingerprints, as in weighted k-nearest neighbor (WKNN) [14] and adaptive residual WKNN (ARWKNN) [15]. At present, fingerprinting combined with machine learning (ML) has achieved excellent positioning performance, such as extreme learning machine (ELM) [16] and convolutional neural network (CNN) [17]. However, these methods entail large model-training costs and, most importantly, ML models lack generality and only target specific scenarios. Considering the presence of randomly moving automated guided vehicles (AGVs) or people blocking line-of-sight (LOS) signals in smart factories, traditional RSS-based fingerprinting methods are prone to obvious matching errors in the online stage, resulting in significantly reduced positioning accuracy [18].

Taking into account the shortcomings of the above RSS-based fingerprinting method, a channel state information (CSI) based sliding window fingerprinting (SWF) method is put forward in this paper, which draws on our previous work [19]. In this work, the CSI is introduced into the indoor fingerprinting method for high-precision positioning in the absence of the LOS signal. In the offline stage, the PD is utilized to collect CSI as the offline fingerprint for the purpose of establishing the fingerprint database. In the online stage, the sliding window is used to match the received CSI and the fingerprints in the database twice so as to obtain the matching matrix consisting of two matching vectors. The coordinate of the target point will then be obtained from the matching matrix. Furthermore, in order to further improve the positioning performance, weights are introduced to optimize the proposed method.

This work introduces two innovations. First, by storing CSI as fingerprints, the offline fingerprints carry richer information than RSS-based fingerprints, which enhances the stability of online fingerprint matching. Second, two CSI-based sliding matching methods are investigated to improve positioning accuracy in the absence of the LOS signal. The proposed method offers high positioning accuracy under blockage of the LOS path and is less affected by channel environment changes. A typical indoor scene including 4 LEDs and 1 PD is employed as the simulation environment. The simulation outcomes reveal that the proposed methods outperform the WKNN algorithm [14], the ARWKNN algorithm [15], and the traditional RSS algorithm in terms of positioning error in the entire positioning area, the edge area, and the central area. The substantial overall reduction in positioning error confirms that the proposed fingerprinting method using a sliding window is highly effective.

The rest of this article is organized as follows. Section 2 describes the system model. Section 3 presents the proposed fingerprinting method. Section 4 analyzes the simulation results of positioning performance. Section 5 concludes the article.

2. System model

2.1 Optical wireless channel model

The geometric model of a typical indoor environment is illustrated in Fig. 1, with a length of 4 m, a width of 4 m, and a height of 3 m. The indoor environment consists of a central area and an edge area: the central area is a 2 m × 2 m square at the center of the room, and the edge area is the remainder of the room. Four LEDs are uniformly deployed on the 3-m-high ceiling as anchor nodes for positioning, with plane coordinates Tx1 (1 m, 1 m), Tx2 (3 m, 1 m), Tx3 (1 m, 3 m), and Tx4 (3 m, 3 m). To distinguish different LEDs, time division multiple access (TDMA) is used to transmit the positioning information, consisting of coordinate information, a unique identification, and the transmitted light intensity, so that signals from different LEDs do not interfere with each other. The PD on the ground collects the received power and position information of the various LEDs. As shown in Fig. 1, the signal received by the PD is composed of a LOS signal and multiple non-line-of-sight (NLOS) signals generated by reflection. Thus, the LOS signal and first-order reflection signals are taken into account in this wireless optical channel model.

Fig. 1. Geometric model of an LED-based VLP system.

The total DC channel gain from an LED to the PD is obtained by adding the LOS channel gain and all of the NLOS channel gains. Because the delays of the signals received by the PD through different paths vary, the channel impulse response (CIR) includes the LOS path and several discrete NLOS paths. All reflected path gains delayed within one sample interval are summed to obtain a discrete NLOS path gain. The discrete-time CIR with $L$ channel paths can be denoted as

$$\mathbf{h} = \left[ {\begin{array}{cccc} {h(0)}&{h(1)}& \cdots &{h(L - 1)} \end{array}} \right]$$
and the l-th path gain is expressed as
$$h(l) = \left\{ \begin{array}{ll} {h_{LOS}},&l = 0\\ \int_{{\tau_0} + (l - 1) \cdot {T_s}}^{{\tau_0} + l \cdot {T_s}} {{h_{Ref}}(t)dt,}&1 \le l \le L - 1 \end{array} \right.$$
where, ${\tau _0}$ is the transmission delay of the LOS signal between LED and PD, ${T_s}$ is the sampling period of the received signal, ${h_{LOS}}$ is the LOS path channel gain, and ${h_{Ref}}(t)$ is the representation of the NLOS path channel gain through different reflection paths. The LOS path channel gain can be given by
$${h_{LOS}} = \frac{{(m + 1){A_{PD}}}}{{2\pi {d^2}}}{\cos ^m}(\varphi ){T_s}(\theta )g(\theta )\cos (\theta )\;$$
where, $m$ is the Lambertian emission order, ${A_{PD}}$ is the effective area of the PD, $d$ denotes the distance from LED to PD, $\theta$ reflects the PD’s incidence angle, the PD’s semi-angle of the field of view (FOV) is denoted as ${\theta _{FOV}}$, and $0 < \theta < {\theta _{FOV}}$. $\varphi$ is the irradiance angle of LED, $g(\theta )$ is the gain of the optical concentrator, ${T_s}(\theta )$ is the gain of the optical filter.
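As a concrete illustration, the LOS gain of Eq. (3) can be sketched in a few lines of Python. The parameter values below are illustrative only, not the settings of Table 1, and the filter and concentrator gains are set to unity for simplicity:

```python
import math

def los_gain(m, A_pd, d, phi, theta, T_f=1.0, g=1.0):
    # Lambertian LOS channel gain of Eq. (3); T_f and g are the optical
    # filter and concentrator gains, taken as unity in this sketch.
    return ((m + 1) * A_pd / (2 * math.pi * d ** 2)
            * math.cos(phi) ** m * T_f * g * math.cos(theta))

# LED mounted 3 m directly above the PD: phi = theta = 0, so both cosines are 1.
h0 = los_gain(m=1, A_pd=1e-4, d=3.0, phi=0.0, theta=0.0)
```

With the PD directly below the LED the gain reduces to $(m+1)A_{PD}/(2\pi d^2)$, which the call above reproduces.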

The NLOS path channel gain can be given by

$${h_{Ref}}(t) = \frac{{(m + 1){A_{PD}}\rho {A_{Ref}}}}{{2{\pi ^2}{d_1}^2{d_2}^2}}{\cos ^m}(\varphi ^{\prime})\cos (\alpha )\cos (\beta ){T_s}(\theta ^{\prime})g(\theta ^{\prime})\cos (\theta ^{\prime})\delta (t - \frac{{{d_1} + {d_2}}}{c})\;$$
where, $\rho$ is the reflection coefficient, ${A_{Ref}}$ is the area of the reflection point, ${d_1}$ is the distance from the LED to a reflection point, ${d_2}$ is the distance from a reflection point to the PD, $\varphi ^{\prime}$ is the angle of irradiance to the reflective point, $\alpha$ and $\beta$ are the incidence angle of a reflection point and the irradiance angle to the PD, $\theta ^{\prime}$ is the incidence angle of the PD.

This work assumes that the surfaces of both PD and LED are parallel, so $\varphi$ and $\theta$ are the same, which can be denoted as $\cos (\varphi ) = \cos (\theta ) = {H / d}$, where $H$ is the vertical distance between LED and PD. In the receiver, the PD can measure the received power from various LEDs by TDMA. The received optical power from LED-Txi can be computed by

$$P_r^i = {P_t} \cdot \sum\limits_{l = 0}^{L-1} {{h^i}(l)} {\kern 1cm}(i = 1, \cdots ,{N_L})$$
where ${P_t}$ is the average optical power, and ${N_L}$ denotes the number of LEDs.

2.2 DCO-OFDM VLC system model

In this work, DCO-OFDM is utilized to modulate the positioning signal, including training symbols for timing synchronization, data symbols carrying positioning information, and pilot symbols of length ${N_P}$ for channel estimation. The specific details are presented in our previous work [19]. The pilot sequence is generated by BPSK modulation of the first ${{{N_P}} / 2} - 1$ values in the Shapiro-Rudin sequence. The inverse fast Fourier transformation (IFFT) is employed to transform frequency-domain modulated symbols into the time domain. Then a clipped real-valued DCO-OFDM symbol ${x_{clip}}(n)$ is formed by adding the cyclic prefix and clipping. Digital-to-analog (D/A) conversion for signal ${x_{clip}}(n)$ and the addition of DC bias ${B_{DC}}$ yields the signal to drive the LED, which can be expressed as

$${x_{DCO}}(t) = {x_{clip}}(t) + {B_{DC}}$$

At the receiver, the received optical signal is converted into an electric signal via the PD. Analog-to-digital (A/D) conversion is utilized for generating a baseband signal $r(n)$, which is expressed as

$$r(n) = \gamma \cdot h(n) \otimes {x_{\textrm{DCO}}}(n) + w(n)$$
where $\gamma$ is the responsivity of the PD, $h(n)$ denotes the CIR, and ${\otimes}$ indicates convolution. $w(n)$ is the additive white Gaussian noise (AWGN), which includes shot noise $\sigma _{shot}^2$ and thermal noise $\sigma _{thermal}^2$; the total electrical-domain noise power $\sigma _{noise}^2$ is given in [19].
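Equation (7) is a standard linear baseband model and can be sketched numerically as follows. All values here (responsivity, CIR taps, signal length, noise level) are illustrative, not the paper's parameters:

```python
import numpy as np

# Discrete sketch of Eq. (7): the driven DCO-OFDM samples pass through the
# CIR (linear convolution), are scaled by the PD responsivity, and are
# corrupted by additive white Gaussian noise.
rng = np.random.default_rng(0)
gamma = 0.4                           # assumed PD responsivity (A/W)
h = np.array([1.0, 0.3, 0.1])         # toy 3-tap CIR
x_dco = rng.random(32) + 0.5          # clipped OFDM samples plus DC bias
w = 1e-3 * rng.standard_normal(32 + len(h) - 1)
r = gamma * np.convolve(h, x_dco) + w # received baseband signal r(n)
```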

The a-th $(a = 0,1, \cdots ,N1 - 1)$ received frequency-domain pilot vector ${\hat{\mathbf{X}}^{({i,a} )}}$ from LED-Txi is acquired by FFT and CP removal, which can be expressed as

$${\hat{\mathbf{X}}^{({i,a} )}} = \left[ {\begin{array}{ccccc} {\hat{X}_0^{({i,a} )}}&{\hat{X}_1^{({i,a} )} \cdots \hat{X}_{{N_P}/2 - 1}^{({i,a} )}}&{\hat{X}_{{N_P}/2}^{({i,a} )}}&{{{({\hat{X}_{{N_P}/2 - 1}^{({i,a} )}} )}^\ast } \cdots {{({\hat{X}_1^{({i,a} )}} )}^\ast }} \end{array}} \right]\; = {\mathbf{H}^i}\mathbf{X} + {\mathbf{W}^{({i,a} )}}$$
where $N1$ indicates the number of pilot symbols used, ${\mathbf{H}^i}$ is the channel frequency response (CFR) of LED-Txi, ${\mathbf{W}^{({i,a} )}}$ is the AWGN vector, and $\mathbf{X}$ represents the pilot sequence saved locally. The estimated CFR can be obtained by
$$\hat{\mathbf{H}}^{(i,a)} = \hat{\mathbf{X}}^{(i,a)}/\mathbf{X} = \left[ 0 \;\; \hat{X}_1^{(i,a)}/X_1 \;\cdots\; \hat{X}_{N_P/2-1}^{(i,a)}/X_{N_P/2-1} \;\; 0 \;\; (\hat{X}_{N_P/2-1}^{(i,a)})^\ast/X_{N_P/2-1}^\ast \;\cdots\; (\hat{X}_1^{(i,a)})^\ast/X_1^\ast \right]$$

The real-valued estimated CIR of length ${N_P}$ is attained from CFR through IFFT, which is denoted as ${\bar{\mathbf{h}}^{({i,a} )}} = \left[ {\begin{array}{ccccc} {{{\bar{h}}^{({i,a} )}}(0)}&{{{\bar{h}}^{({i,a} )}}(1)}& \cdots &{{{\bar{h}}^{({i,a} )}}({N_P} - 1)} \end{array}} \right]$.
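The pilot-based estimator of Eqs. (8)–(9) can be sketched as follows, with a toy pilot length and channel and with noise omitted for clarity. As in Eq. (9), the DC and Nyquist bins carry no pilot and are skipped in the division, so the recovered CIR only approximates the true one, but the dominant LOS tap stays at index 0:

```python
import numpy as np

rng = np.random.default_rng(0)
Np = 16                                      # toy pilot length

# Hermitian-symmetric BPSK pilot so the time-domain symbol is real-valued;
# the DC and Nyquist bins are left empty, matching Eq. (9).
half = rng.choice([-1.0, 1.0], Np // 2 - 1)
X = np.concatenate(([0.0], half, [0.0], half[::-1]))

h_true = np.zeros(Np)                        # short real CIR to recover
h_true[:3] = [1.0, 0.4, 0.1]

X_hat = np.fft.fft(h_true) * X               # received pilot bins (noise-free)

H_est = np.zeros(Np, dtype=complex)          # Eq. (9): per-bin division
nz = X != 0
H_est[nz] = X_hat[nz] / X[nz]
h_est = np.fft.ifft(H_est).real              # real-valued estimated CIR
```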

3. Methodology

Given that there will be random moving individuals or objects in the indoor environment, the LOS signal, which accounts for a large proportion of the received signal, will be blocked. In this case, the positioning accuracy of the traditional RSS fingerprinting method will be considerably reduced. To deal with this challenge, a CSI-based fingerprinting method is proposed, which makes use of a sliding window. The proposed methods dramatically enhance positioning accuracy when LOS signals are blocked.

3.1 Establishment of the fingerprint database

In the offline phase, the estimated CIR is used as the fingerprint in the indoor positioning environment to build an offline fingerprint database based on CSI. Next, the collection of fingerprint data on a reference point is taken as an example to illustrate the specific steps of establishing this fingerprint database, as shown in Fig. 2.

Fig. 2. Schematic diagram of fingerprint generation.

Step 1. Collection of CIR data: The indoor environment is evenly divided into small grids of equal area, with the intersection of the grids serving as a reference point for placing PDs to collect CIRs from different LEDs. There are $N1$ pilot symbols in a complete OFDM frame of the positioning signal, so $N1$ estimated CIR vectors can be obtained in a collected positioning signal.

Step 2. Pre-processing of fingerprint data: The $N1$ estimated CIR vectors are averaged to reduce the impact of noise and thus improve the accuracy of the CIR estimation. The averaged CIR vector of the i-th LED is denoted as ${\bar{\mathbf{h}}_i} = \sum\limits_{a = 0}^{N1 - 1} {{{\bar{\mathbf{h}}}^{(i,a)}}} /N1$. Because noise is superimposed on the estimated CIR, part of the path gain may be negative. Since negative path gains are physically impossible, the negative values in the CIR vector are clipped to 0, i.e., ${\hat{\mathbf{h}}_i} = \textrm{max}(0,{\bar{\mathbf{h}}_i})$.

Step 3. The storage of fingerprints: For estimating the complete channel path information, the length ${N_P}$ of the pilot symbol should satisfy ${N_P} \ge 2({L_{\max }} + 1)$, where ${L_{\max }}$ denotes the maximum number of channel paths in the simulation environment, which can be estimated by modeling the indoor wireless optical channel. Based on the fact that the actual maximum path length will not exceed ${N_P}/2 - 1$, the length of the sliding window is set to ${N_P}/2 - 1$ for computational convenience in this work. Due to the need to compute two different matching values, only the first ${N_P}/2$ elements in the averaged CIR vector are reserved as fingerprint data.

In the offline phase, the PD is sequentially placed on the reference points to collect the CIR data of different LEDs, and the fingerprint of each reference point is stored in the fingerprint database. In order to improve the accuracy of offline fingerprint data, the positioning signals from different LEDs are collected Q times at each reference point and then averaged. The offline fingerprint at the j-th reference point can be illustrated as $\hat{\mathbf{h}}_{FP}^j = [\begin{array}{ccccc} {\hat{\mathbf{h}}_1^j}&{\hat{\mathbf{h}}_2^j}& \cdots &{\hat{\mathbf{h}}_{{N_L}}^j} \end{array}]$, where ${\hat{\mathbf {h}}}_i^j\textrm{ = }{[{\hat{h}_i^j(0)\textrm{ }\;\hat{h}_i^j(1)\;\; \cdots \;\;\hat{h}_i^j({N_P}/2 - 1)} ]^T}$, $j = 1,2, \cdots ,{N_R}$, and ${N_R}$ represents the number of offline reference points in the simulation environment. The complete offline fingerprint database comprises CSI of ${N_R}$ reference points, which can be expressed as $[\begin{array}{cccc} {\hat{\mathbf{h}}_{FP}^1}&{\hat{\mathbf{h}}_{FP}^2}& \cdots &{\hat{\mathbf{h}}_{FP}^{{N_R}}} \end{array}]$.
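The offline steps above can be condensed into a short sketch. The sizes and tap values are toy examples, and the per-pilot CIR estimates are assumed to be already available:

```python
import numpy as np

def build_fingerprint(cir_estimates, Np):
    # Step 2: average the N1 per-pilot-symbol CIR estimates to suppress
    # noise, then clip unphysical negative path gains to zero.
    h_bar = cir_estimates.mean(axis=0)
    h_hat = np.maximum(0.0, h_bar)
    # Step 3: keep only the first Np/2 taps as the stored fingerprint.
    return h_hat[: Np // 2]

# Toy example: N1 = 4 noisy observations of a 3-tap CIR (Np = 8).
rng = np.random.default_rng(1)
true_cir = np.array([1.0, 0.3, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0])
obs = true_cir + 0.01 * rng.standard_normal((4, 8))
fp = build_fingerprint(obs, Np=8)
```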

3.2 Sliding window fingerprinting (SWF) algorithm

In order to improve the positioning accuracy when the signal is blocked, the SWF algorithm is proposed to estimate the location of the receiver. In the online stage, the operations of collecting CSI are the same as that in the offline stage. The CSI-based fingerprint gathered on the position of the target point in the online phase is expressed as ${\tilde{\mathbf{h}}_{Online}} = [\begin{array}{cccc} {{{\tilde{\mathbf{h}}}_1}}&{{{\tilde{\mathbf{h}}}_2}}& \cdots &{{{\tilde{\mathbf{h}}}_{{N_L}}}} \end{array}]$, where ${\tilde{\mathbf{h}}_i}$ denotes the first ${N_P}/2 - 1$ elements of the received CIR vector from the i-th LED, which can be shown as ${{\tilde{\mathbf{h}}}_i} = {[{{{\tilde{h}}_i}(0)\textrm{ }{{\tilde{h}}_i}(1)\textrm{ } \cdots \textrm{ }{{\tilde{h}}_i}({N_P}/2 - 2)} ]^T}$.

To facilitate the understanding of the proposed method, the matching process of the target point and the k-th reference point is used as an instance, which is depicted in Fig. 3. The offline fingerprint of the k-th reference point can be expressed as ${\hat{\mathbf{h}}}_{FP}^k = [{\hat{\mathbf{h}}}_1^k\textrm{ }\;{\hat{\mathbf{h}}}_2^k\;\; \cdots \;\;{\hat{\mathbf{h}}}_{{N_L}}^k]$. The following are the specific steps of fingerprinting.

Fig. 3. Schematic diagram of sliding window fingerprinting.

Step 1. Calculation of the first matching value: Each CIR vector in ${\tilde{\mathbf h}}_{Online}^{}$ matches the first ${N_P}/2 - 1$ elements in each CIR vector in ${\hat{\mathbf h}}_{FP}^k$ to obtain the first matching value. The first matching value of the i-th LED can be calculated as:

$$M_1^i = \sqrt {{{[\hat{h}_i^k(0) - {{\tilde{h}}_i}(0)]}^2} + {{[\hat{h}_i^k(1) - {\tilde{h}}_i(1)]}^2} + \cdots + {{[\hat{h}_i^k({{{N_P}} / 2} - 2) - {{\tilde{h}}}_i({{{N_P}} / 2} - 2)]}^2}}$$

The first matching vector comprises ${N_L}$ matching values, denoted as $\mathbf{M}_1^{} = {[M_1^1\;\;M_1^2\;\; \cdots \;\;M_1^{{N_L}}]^T}$.

Step 2. Calculation of the second matching value: The matching window slides to the right with a step of 1 on the offline fingerprint ${\hat{\mathbf h}}_{FP}^k$ for calculating the second matching value, which is calculated as

$$M_2^i = \sqrt {{{[\hat{h}_i^k(1) - {{\tilde{h}}_i}(0)]}^2} + {{[\hat{h}_i^k(2) - {{\tilde{h}}_i}(1)]}^2} + \cdots + {{[\hat{h}_i^k({{{N_P}} / 2} - 1) - {{\tilde{h}}_i}({{{N_P}} / 2} - 2)]}^2}}$$

The second matching vector is denoted as $\mathbf{M}_2^{} = {[M_2^1\;\;M_2^2\;\; \cdots \;\;M_2^{{N_L}}]^T}$.

Step 3. Construction of matching matrix: The matching matrix consists of the first as well as the second matching vectors, which can be illustrated as follows

$$\mathbf{M}\textrm{ = }\left[ {\begin{array}{cc} {{\mathbf{M}_\mathbf{1}}}&{{\mathbf{M}_\mathbf{2}}} \end{array}} \right] = \left[ {\begin{array}{cc} {M_1^1}&{M_2^1}\\ {M_1^2}&{M_2^2}\\ \vdots & \vdots \\ {M_1^{{N_L}}}&{M_2^{{N_L}}} \end{array}} \right]$$
Step 4. Generation of final matching value: The final matching value ${R_k}$ of k-th reference point can be obtained by
$${R_k} = \sum\limits_{i = 1}^{{N_L}} {\min (M_1^i,M_2^i)}$$

The matching process of the target point and other reference points is shown in the above steps. At the end of the matching phase, the matching result of all reference points is denoted as $\mathbf{R} = \left[ {\begin{array}{ccccc} {{R_1}}& \cdots &{{R_k}}& \cdots &{{R_{{N_R}}}} \end{array}} \right]$. The estimated position $(x^{\prime},y^{\prime})$ of the target point is the coordinate of the reference point corresponding to the minimum value in $\mathbf{R}$.
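Steps 1–4 can be sketched as a compact function. The array sizes are toy values, and `swf_locate` with its arguments is an illustrative name, not the paper's code; note how a fingerprint shifted by one tap (blocked LOS) is still matched correctly through the second, slid window:

```python
import numpy as np

def swf_locate(online, fp_db):
    # online: (N_L, Np/2 - 1) CIR windows received at the target point.
    # fp_db:  (N_R, N_L, Np/2) offline fingerprints of all reference points.
    R = np.empty(len(fp_db))
    for k, fp in enumerate(fp_db):
        M1 = np.linalg.norm(fp[:, :-1] - online, axis=1)  # Eq. (10): aligned
        M2 = np.linalg.norm(fp[:, 1:] - online, axis=1)   # Eq. (11): slid by 1
        R[k] = np.minimum(M1, M2).sum()                   # Eq. (12)
    return int(np.argmin(R))                              # best-matching index

# Toy database: 5 reference points, 4 LEDs, Np = 8.
rng = np.random.default_rng(2)
db = rng.random((5, 4, 4))
# Unblocked target at reference point 3: online CIR aligns with the window.
idx_clear = swf_locate(db[3][:, :-1], db)
# Blocked LOS: the online CIR is the fingerprint shifted left by one tap.
idx_blocked = swf_locate(db[3][:, 1:], db)
```

In both cases the function recovers reference point 3, because one of the two windows produces a matching value of exactly zero.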

3.3 Weighted sliding window fingerprinting (W-SWF) algorithm

Overall, path gains closer to the first path in the CIR vector are larger, because a path with a smaller propagation delay experiences less path loss. Consequently, different paths contribute differently to the calculation of the matching value. To further improve localization performance, weights are introduced during fingerprint matching to emphasize the importance of different paths. Applying the weight to the first matching value in Eq. (10), the first weighted matching value is calculated by

$$M_1^i = \sqrt {\sum\limits_{l = 0}^{{{{N_P}} / 2} - 2} {[{{{(\hat{h}_i^k(l) - {{\tilde{h}}_i}(l))}^2} \cdot W(l)} ]} }$$
where $W(l)$ is the weight and can be expressed as
$$W(l) = \exp (\frac{1}{{l + 1}})\;\;\;(l = 0,1, \cdots ,{{{N_P}} / 2} - 2)$$

The calculation of the second weighted matching value is similar to Eq. (14), which can be expressed as

$$M_2^i = \sqrt {\sum\limits_{l = 0}^{{{{N_P}} / 2} - 2} {[{{{(\hat{h}_i^k(l + 1) - {{\tilde{h}}_i}(l))}^2} \cdot W(l)} ]} }$$

It should be emphasized that the weights used here are simply selected to verify the importance of the elements in the CIR vector and can be further optimized.
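The weighted matching values for a single LED can be sketched as follows, using the heuristic weight of Eq. (15); the tap values are toy numbers:

```python
import numpy as np

def wswf_values(fp, online):
    # fp: Np/2 offline taps for one LED; online: Np/2 - 1 received taps.
    l = np.arange(len(online))
    W = np.exp(1.0 / (l + 1))                          # Eq. (15): earlier taps weigh more
    M1 = np.sqrt(np.sum((fp[:-1] - online) ** 2 * W))  # Eq. (14)
    M2 = np.sqrt(np.sum((fp[1:] - online) ** 2 * W))   # Eq. (16)
    return M1, M2

# Unblocked case: the online taps equal the aligned offline window, so M1 = 0.
fp = np.array([1.0, 0.5, 0.2, 0.1])
M1, M2 = wswf_values(fp, fp[:-1])
```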

4. Results and discussion

4.1 Simulation setup

The proposed SWF and W-SWF algorithms are simulated in the indoor scene shown in Fig. 1 to evaluate their performance, with traditional RSS-based fingerprinting, the WKNN algorithm, and the ARWKNN algorithm selected as comparison methods. Since the LED positions in the simulated scene are symmetrical, a quarter of the indoor area is selected for simulation; specifically, the positioning area used for simulation spans [0 m, 2 m] on the x-axis and [0 m, 2 m] on the y-axis. While establishing the offline fingerprint database, the 2 m × 2 m floor is divided into 1 cm × 1 cm grids, with the grid intersections chosen as the reference points for collecting CSI. This implies that the offline fingerprint database has 40,000 offline reference points. In both the online and offline phases, the receiver receives 10 positioning signals at each reference point and averages them to measure the positioning performance. Table 1 summarizes the main simulation parameters.

Table 1. Simulation parameters

4.2 Performance evaluation

This section discusses the positioning performance obtained by the proposed methods (SWF, W-SWF) and comparison methods in a variety of situations. To evaluate the performance of the positioning method, the root mean square error (RMSE) and mean error (ME) can serve as the evaluation criteria for positioning accuracy, where ME and RMSE are defined as

$$ME = \frac{1}{{{N_R}}}\sum\limits_{i = 1}^{{N_R}} {\sqrt {{{({x_i} - {{x^{\prime}}_i})}^2} + {{({y_i} - {{y^{\prime}}_i})}^2}} }$$
$$RMSE = \sqrt {\frac{1}{{{N_R}}}\sum\limits_{i = 1}^{{N_R}} {[{{{({x_i} - {{x^{\prime}}_i})}^2} + {{({y_i} - {{y^{\prime}}_i})}^2}} ]} }$$
where $({x_i},{y_i})$ is the real coordinates of the receiver, $({x^{\prime}_i},{y^{\prime}_i})$ reflects the estimated coordinates of the receiver, and ${N_R}$ is the representation of the number of reference points.
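Eqs. (17)–(18) amount to the mean and root mean square of the per-point Euclidean errors, as the following sketch shows (the two points and their offsets are toy values in metres):

```python
import numpy as np

def me_rmse(true_xy, est_xy):
    # Eqs. (17)-(18): Euclidean error per reference point, then mean and RMS.
    d = np.linalg.norm(np.asarray(true_xy) - np.asarray(est_xy), axis=1)
    return d.mean(), np.sqrt(np.mean(d ** 2))

# Toy check: two points mislocated by 3 cm and 4 cm respectively.
me, rmse = me_rmse([(0.0, 0.0), (1.0, 1.0)], [(0.03, 0.0), (1.0, 1.04)])
```

Note that the RMSE is never smaller than the ME, since it penalizes large errors more heavily.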

Figure 4 shows the distribution of the positioning error of RSS-based fingerprinting and the proposed methods under different numbers of occluded LOS signals, where the blocked LEDs are chosen at random. The simulation results show that SWF and W-SWF significantly improve the localization performance compared with RSS-based fingerprinting. As can be seen from Eq. (5), when the LOS signal is blocked, the $l = 0$ term of the channel gain in Eq. (5) disappears. Therefore, the received optical power from LED-Txi in the signal-blocking environment is described as

$$P_{NLOS}^i = {P_t} \cdot \sum\limits_{l = 1}^{L-1} {{h^i}(l)}$$

In this case, RSS-based fingerprinting using received optical power to match fingerprints in the offline fingerprint database would produce matching errors. Without considering noise, the matching error can be expressed as

$$\Delta = \sqrt {\sum\limits_{i = 1}^{{N_L}} {{{(P_{FP}^i - P_{NLOS}^i)}^2}} } = \sqrt {\sum\limits_{i = 1}^{{N_L}} {{{\left( {{P_t} \cdot \sum\limits_{l = 0}^{L-1} {{h^i}(l)} - {P_t} \cdot \sum\limits_{l = 1}^{L-1} {{h^i}(l)} } \right)}^2}} } = {P_t} \cdot \sqrt {\sum\limits_{i = 1}^{{N_L}} {{{(h_{LOS}^i)}^2}} }$$
where $P_{FP}^i$ indicates the received optical power of the LED-Txi in the fingerprint database.

Fig. 4. Distribution diagram of positioning error of RSS-based fingerprinting, SWF, and W-SWF when the LOS signal of one or two LEDs is blocked. See Data File 1 for underlying values.

The above equation shows that the matching error of RSS-based fingerprinting is mainly caused by the loss of the LOS signal and is proportional to the ratio of the LOS path gain to the total path gain. Considering that the LOS signal is dominant in the central area of the room, the localization error of RSS-based fingerprinting in the central region is significantly higher than that in the edge area, which is consistent with the simulation results in Fig. 4. This is because the channel in the central area is less affected by the multipath effect, so the LOS path gain $h_{LOS}^i$ accounts for a large proportion of the total channel gain, resulting in a large matching error.

In contrast, the proposed method computes two matching values using a sliding window in the online phase, and the minimum of them is chosen as the matching result. Assuming that the LOS signal from the LED-Txi is blocked, without considering the noise, the first matching value calculated by Eq. (10) can be expressed as

$$M_1^i = \sqrt {\sum\limits_{l = 0}^{{{{N_P}} / 2} - 2} {{{(\hat{h}_i^{}(l) - {{\tilde{h}}_i}(l))}^2}} } = \sqrt {\sum\limits_{l = 0}^{{{{N_P}} / 2} - 2} {{{(\hat{h}_i^{}(l) - \hat{h}_i^{}(l + 1))}^2}} } > 0$$

It can be seen from Eq. (21) that since the CIR fingerprint collected in the online phase lacks the LOS signal, the first element of the online fingerprint corresponds to the second element of the offline fingerprint, so the vectors are misaligned during matching, and the first matching value is greater than 0. On the other hand, the second matching value calculated by Eq. (11) can be expressed as

$$M_2^i = \sqrt {\sum\limits_{l = 0}^{{{{N_P}} / 2} - 2} {{{(\hat{h}_i^{}(l + 1) - {{\tilde{h}}_i}(l))}^2}} } = \sqrt {\sum\limits_{l = 0}^{{{{N_P}} / 2} - 2} {{{(\hat{h}_i^{}(l + 1) - \hat{h}_i^{}(l + 1))}^2}} } \textrm{ = }0$$

This means that when the LOS signal from LED Txi is blocked, the corresponding second matching value is 0. Without considering noise, the matching error of the proposed method can be expressed as

$$\Delta ^{\prime} = \sum\limits_{i = 1}^{{N_L}} {\min (M_1^i,M_2^i)} = \sum\limits_{i = 1}^{{N_L}} {\min (M_1^i,0)} = 0$$

Observing Eq. (23), it can be seen that the proposed method selects the smaller of the two matching values, which is 0, for accumulation, so the matching error can be eliminated. Conversely, if the LOS signal from the LED-Txi is not blocked, the first matching value calculated by Eq. (10) will be 0 and the second matching value calculated by Eq. (11) will be greater than 0. The final matching value calculated by this method is still 0, resulting in an accurate positioning matching result.
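The cancellation argument of Eqs. (21)–(23) is easy to verify numerically; the tap values below are illustrative:

```python
import numpy as np

h_fp = np.array([0.80, 0.20, 0.08, 0.03, 0.01])  # stored offline taps (Np/2 = 5)
h_on = h_fp[1:]                                  # LOS blocked: the first tap is
                                                 # lost, shifting every tap forward

M1 = np.linalg.norm(h_fp[:-1] - h_on)            # Eq. (21): misaligned, > 0
M2 = np.linalg.norm(h_fp[1:] - h_on)             # Eq. (22): realigned, exactly 0
final = min(M1, M2)                              # Eq. (23): blocking error removed
```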

In Fig. 4, the simulation comparison shows that the positioning performance of the proposed methods in the center of the room is better than that in the edge area. This is because the reference points in the corners of the room are farthest from the LEDs, where the signal path loss is largest, so the CIR collected at corner reference points suffers the greatest noise interference. When the LOS path is blocked, the accuracy of fingerprint positioning in the edge area is therefore substantially affected. It should be noted that when the LOS signals of 2 LEDs are randomly blocked, the proposed SWF and W-SWF produce large positioning errors at symmetrical positions in the room. This is because LED-Tx2 and LED-Tx3 are symmetric with respect to the positioning area, so their reflected path gains at a reference point are similar, differing only in the direct path gain. Therefore, when the LOS paths of LED-Tx2 and LED-Tx3 are blocked, a symmetry ambiguity arises in fingerprint matching, which results in significant positioning errors at symmetrical positions in the positioning area. This problem could be solved by using PD arrays.

The cumulative distribution function (CDF) curves of the positioning errors of the different methods, with the LOS signals of different numbers of LEDs randomly occluded, are depicted in Fig. 5. The results reveal that whether the LOS signal of 1 or 2 LEDs is blocked, the proposed SWF and W-SWF outperform the comparison methods, demonstrating a significant reduction in positioning error. Specifically, among the three comparison methods, the ARWKNN algorithm is slightly better than the WKNN algorithm and RSS-based fingerprinting, but their positioning performance is very close when the signal is blocked. In the case of randomly occluding the LOS path of 1 LED, shown in Fig. 5(a), when the CDF is equal to 99%, the positioning errors of RSS-based fingerprinting, the WKNN algorithm, and the ARWKNN algorithm are 190.9 cm, 191 cm, and 190.7 cm respectively. Similarly, in the case of randomly occluding the LOS paths of 2 LEDs, the positioning errors of the three comparison methods increase further.

Fig. 5. CDF curves of the positioning error of different methods.

To observe the localization accuracy of the proposed methods more intuitively, Fig. 6 presents the CDF curves of their localization errors in the edge area, the central area, and the entire room. As shown in Fig. 6(a), when the LOS path of 1 LED is blocked randomly and the CDF is 99%, the positioning errors of SWF and W-SWF in the entire area are 2.83 cm and 2.24 cm respectively. Similarly, at a CDF of 99%, the localization errors of SWF in the central and edge regions are 1 cm and 3 cm respectively, while W-SWF obtains localization errors of 1 cm and 2.83 cm respectively. In Fig. 6(b), without the LOS signal of 2 LEDs, the positioning errors of SWF and W-SWF in the entire area are 18.3 cm and 15.8 cm when the CDF is 99%. Comparative analysis shows that as the number of LEDs lacking the LOS path increases, the positioning performance of the proposed methods decreases, because the number of LEDs with reliable CIR decreases, which affects the positioning accuracy. Furthermore, the localization accuracy of W-SWF is better than that of SWF because the earlier elements in the CIR vector have larger magnitudes and are less affected by noise. These elements contribute more to the computation of the matching values, so assigning larger weights to the earlier paths helps to further improve the localization performance.

Fig. 6. Comparison of the positioning errors of SWF and W-SWF under different conditions.

To verify the generality of the proposed method, we set up two different scenes for fingerprinting: scene 1 is the indoor environment shown in Fig. 1, and scene 2 contains 5 LEDs with plane coordinates of (0 m, 2 m), (2 m, 0 m), (2 m, 2 m), (2 m, 4 m), and (4 m, 2 m).

Figure 7 shows the CDF curves of the localization errors of the proposed and comparison methods in scene 2 when the LOS signals of 1 and 2 LEDs are randomly occluded. In scene 2, the proposed methods again clearly outperform the comparison methods. Specifically, when the LOS signal of 1 LED is blocked, the positioning error of W-SWF at a CDF of 99% is 1 cm, while that of SWF is 1.41 cm. With the LOS signals of 2 LEDs blocked, the positioning errors of SWF and W-SWF at a CDF of 99% are 3.60 cm and 3 cm respectively. Consequently, as shown in Fig. 7, the localization performance of W-SWF is slightly better than that of SWF. The positioning errors of the different methods under the different occlusion conditions and scenes are summarized in Table 2.

Fig. 7. CDF curves of the positioning error of different methods in scene 2.

Table 2. Comparison of positioning performance of different methods

In the entire room of scene 1, when the LOS path of 1 LED is randomly blocked, the mean errors (MEs) of the proposed SWF and W-SWF are 0.28 cm and 0.20 cm, and the RMSEs are 0.69 cm and 0.58 cm, respectively. In contrast, the comparison algorithms using RSS information only achieve a positioning accuracy of more than 60 cm, so the errors of the proposed methods are about 99% lower. This is because, when the LOS signal is missing, there is a large discrepancy between the RSS fingerprint collected online and the fingerprint database, which leads to a large fingerprint-matching error. When the LOS paths of 2 LEDs are occluded, the localization accuracy of the proposed methods in the entire room is improved by about 98% compared with the comparison methods. As the number of blocked LEDs increases, the positioning performance of all algorithms degrades. With the LOS paths of 3 LEDs occluded, the MEs of the two proposed methods are 14.26 cm and 14.76 cm, an improvement of 88.7% over the comparison methods. Furthermore, with the LOS paths of 4 LEDs blocked, SWF and W-SWF achieve MEs of 41.1 cm and 36.43 cm, 76% lower than the comparison methods. As the number of blocked LEDs grows, the performance gain of the proposed methods shrinks because fewer reliable LOS signals remain. Moreover, the proposed methods achieve stable localization accuracy in both the central and edge regions, proving their stronger robustness. In general, the dramatic drop in positioning error verifies that the presented method can effectively reduce the impact of LOS signal loss by utilizing the CIR, thereby improving the accuracy of fingerprinting.
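The ME and RMSE figures quoted throughout this section follow the standard definitions of Eqs. (16) and (17): the Euclidean distance between each true position and its estimate, averaged directly for the ME and quadratically for the RMSE. A minimal sketch (helper name ours):

```python
import numpy as np

def me_rmse(true_xy, est_xy):
    """Mean error (ME) and root mean square error (RMSE) of 2-D position
    estimates: the per-point error is the Euclidean distance between the
    true point (x_i, y_i) and its estimate (x'_i, y'_i)."""
    true_xy = np.asarray(true_xy, dtype=float)
    est_xy = np.asarray(est_xy, dtype=float)
    d = np.linalg.norm(true_xy - est_xy, axis=1)      # per-point error
    return float(d.mean()), float(np.sqrt((d ** 2).mean()))
```

Because the RMSE squares the per-point errors before averaging, it penalizes occasional large outliers more than the ME does, which is why both are reported in Table 2.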

In scene 2, thanks to the use of 5 LEDs, more LEDs with reliable CSI remain under the same LOS-occlusion conditions as in scene 1, so the localization performance of the proposed methods improves further over scene 1. For the entire area, with the LOS path of 1 LED blocked, SWF and W-SWF achieve MEs of 0.12 cm and 0.07 cm, and RMSEs of 0.36 cm and 0.27 cm, respectively. With the LOS paths of 2 LEDs occluded, the proposed methods also achieve excellent performance, with MEs of 0.39 cm and 0.27 cm respectively. When the LOS signals of 3 LEDs are blocked, the MEs of SWF and W-SWF are 1.59 cm and 1.22 cm, a localization improvement of 98.5% over the comparison methods. With the LOS paths of 4 LEDs blocked, the proposed methods achieve MEs of 17.36 cm and 15.86 cm, again a large improvement over the comparison methods. W-SWF performs best under the different occlusions in both scenes because it assigns larger weights to the preceding elements of the fingerprint vector to highlight their importance, reflecting the fact that the preceding elements of the CIR vector have greater magnitude and are less affected by noise.

For a more comprehensive analysis, the time complexity of the proposed methods is evaluated next. The time complexity of the SWF method can be obtained as follows. In step 1, the input to the calculation of the first matching vector is ${N_L}$ fingerprint vectors of length ${N_P}/2 - 1$, hence the time complexity is $\textrm{O}({N_L} \cdot ({N_P}/2 - 1))$. Similarly, the time complexity of the second step is $\textrm{O}({N_L} \cdot ({N_P}/2 - 1))$. Since finding the minimum of two elements takes $\textrm{O}(2)$, obtaining the final matching value in the third step has a time complexity of $\textrm{O}(2{N_L})$. Therefore, obtaining the final matching values for all reference points takes $\textrm{O}({N_R} \cdot {N_L} \cdot {N_P})$, and searching for the minimum in the vector of matching results of length ${N_R}$ takes $\textrm{O}({N_R} \cdot {\log _2}{N_R})$. Finally, the time complexity of SWF is $\textrm{O}({N_R} \cdot ({N_L} \cdot {N_P} + {\log _2}{N_R}))$; W-SWF has the same time complexity. For the traditional RSS algorithm, the WKNN algorithm, and the ARWKNN algorithm, the time complexities are $\textrm{O}({N_R} \cdot ({N_L} + {\log _2}{N_R}))$, $\textrm{O}({N_R} \cdot ({N_L} + {\log _2}{N_R}) + K)$, and $\textrm{O}({N_R} \cdot ({N_L} + {\log _2}{N_R}) + {N_L} \cdot ({K_{\max }} + 1) \cdot {K_{\max }}/2)$, respectively, where $K$ is the number of nearest neighbors in the WKNN algorithm, set to 4 in [14], and ${K_{\max }}$ is the maximum number of nearest neighbors in the ARWKNN algorithm, set to 8 in [15]. Table 3 compares the time complexity of the different methods. It can be observed that the time complexity of the proposed methods is directly related to the length of the CIR employed.
Under the parameters set in this paper, the RSS-based fingerprinting and WKNN algorithms have the smallest time complexity, the ARWKNN algorithm is second, and SWF/W-SWF have the highest; specifically, the time complexity of SWF/W-SWF is about 7.4 times that of the other algorithms. Although the proposed method has a higher time complexity, this cost should be acceptable given the improved positioning accuracy and the availability of hardware computing resources. In future work, we will also consider using machine learning to reduce the algorithm complexity.
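The three matching steps counted above can be sketched in a few lines. In this illustrative sketch (function names ours, written from the two-alignment construction described in the text), the received CIR for each LED carries one more tap than the stored fingerprint so that both window alignments are defined; the per-LED matching value is the smaller of the two distances, and the reference-point score sums this minimum over all LEDs. The weighted variant applies $W(l) = \exp(1/(l+1))$ to emphasize the earlier, stronger taps:

```python
import numpy as np

def swf_match(received, fingerprint, weights=None):
    """Per-LED matching value: Euclidean distance for the two sliding-window
    alignments of the received CIR against the stored fingerprint, keeping
    the smaller one (step 3's min over steps 1 and 2)."""
    r = np.asarray(received, dtype=float)      # length N_P/2 taps
    f = np.asarray(fingerprint, dtype=float)   # length N_P/2 - 1 taps
    w = np.ones(f.size) if weights is None else np.asarray(weights, dtype=float)
    m1 = np.sqrt(np.sum(w * (r[:-1] - f) ** 2))   # aligned windows
    m2 = np.sqrt(np.sum(w * (r[1:] - f) ** 2))    # received shifted by one tap
    return min(m1, m2)

def swf_reference_score(received_all, fingerprints_all, weighted=False):
    """Final matching value R_k for one reference point: sum of the per-LED
    minimum matching values over all N_L LEDs (W-SWF when weighted=True)."""
    n_taps = len(fingerprints_all[0])
    w = np.exp(1.0 / (np.arange(n_taps) + 1.0)) if weighted else None  # W(l)
    return sum(swf_match(r, f, w) for r, f in zip(received_all, fingerprints_all))
```

Evaluating `swf_reference_score` for all $N_R$ reference points and taking the smallest score gives the position estimate, which matches the $\textrm{O}({N_R} \cdot {N_L} \cdot {N_P})$ matching cost counted above.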

Table 3. Time complexity comparison

5. Conclusions

In this paper, a CSI-based fingerprinting method for VLP is investigated. Unlike the currently popular fingerprinting algorithms combining RSS and machine learning, this method incurs no model-training cost and is not limited to specific scenarios. The proposed SWF method utilizes CIR fingerprints and sliding matching to effectively reduce the fingerprint-matching error in the presence of LOS path occlusion. The simulation results show that when the LOS paths of different numbers of LEDs are blocked, the presented SWF method has significant advantages over the comparison methods in the entire room, the edge area, and the central area, as well as excellent robustness. Furthermore, the W-SWF method achieves the best positioning performance by assigning weights to the elements of the CIR vector, which verifies that different paths contribute differently to the calculation of matching values. It should be emphasized that the improved performance of the proposed method comes at the cost of relatively high complexity, which we will further optimize in future work.

Funding

Key Projects of Basic and Applied Basic Research in Jiangmen (2021030103250006686); Guangdong Provincial Department of Education Youth Innovative Talents Project (2022KQNCX096).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are available in Data File 1, Ref. [20].

References

1. P. S. Farahsari, A. Farahzadi, J. Rezazadeh, and A. Bagheri, “A Survey on Indoor Positioning Systems for IoT-Based Applications,” IEEE Internet Things J. 9(10), 7680–7699 (2022). [CrossRef]  

2. F. J. Aranda, F. Parralejo, F. J. Álvarez, and J. A. Paredes, “Performance analysis of fingerprinting indoor positioning methods with BLE,” Expert Syst. Appl. 202, 117095 (2022). [CrossRef]  

3. T. Zhou, J. Ku, B. Lian, and Y. Zhang, “Indoor positioning algorithm based on improved convolutional neural network,” Neural Computing and Applications 34(9), 6787–6798 (2022). [CrossRef]  

4. P. Bencak, D. Hercog, and T. Lerher, “Indoor Positioning System Based on Bluetooth Low Energy Technology and a Nature-Inspired Optimization Algorithm,” Electronics 11(3), 308 (2022). [CrossRef]  

5. Z. Zhao, Z. Lou, R. Wang, Q. Li, and X. Xu, “I-WKNN: Fast-speed and high-accuracy WIFI positioning for intelligent sports stadiums,” Computers & Electrical Engineering 98, 107619 (2022). [CrossRef]  

6. C. Zhu, S. Zhao, Y. Xia, and L. Li, “An improved three-point localization method based on RSS for transceiver separation RFID systems,” Measurement 187, 110283 (2022). [CrossRef]  

7. W. Xie, B. Li, Y. Peng, H. Zhu, F. AL-Hazemi, and M. M. Mirza, “Secrecy Enhancement for SSK-Based Visible Light Communication Systems,” Electronics 11(7), 1150 (2022). [CrossRef]  

8. H. Zhao and J. Wang, “A Novel Three-Dimensional Algorithm Based on Practical Indoor Visible Light Positioning,” IEEE Photonics J. 11(3), 1–8 (2019). [CrossRef]  

9. C. Hong, Y. Wu, Y. Liu, C. Chow, C. Yeh, K. Hsu, D. Lin, X. Liao, K. Lin, and Y. Chen, “Angle-of-arrival (AOA) visible light positioning (VLP) system using solar cells with third-order regression and ridge regression algorithms,” IEEE Photonics J. 12(3), 1–5 (2020). [CrossRef]  

10. P. Du, S. Zhang, C. Chen, A. Alphones, and W. Zhong, “Demonstration of a Low-Complexity Indoor Visible Light Positioning System Using an Enhanced TDOA Scheme,” IEEE Photonics J. 10(4), 1–10 (2018). [CrossRef]  

11. X. Sun, Y. Zhuang, J. Huai, L. Hua, D. Chen, Y. Li, Y. Cao, and R. Chen, “RSS-based Visible Light Positioning Using Non-linear Optimization,” IEEE Internet Things J. 9(15), 14137–14150 (2022). [CrossRef]  

12. C. Jia, T. Yang, C. Wang, and M. Sun, “High-Accuracy 3D Indoor Visible Light Positioning Method Based on the Improved Adaptive Cuckoo Search Algorithm,” Arab. J. Sci. Eng. 47(2), 2479–2498 (2022). [CrossRef]  

13. X. Meng, C. Jia, C. Cai, F. He, and Q. Wang, “Indoor High-Precision 3D Positioning System Based on Visible-Light Communication Using Improved Whale Optimization Algorithm,” Photonics 9(2), 93 (2022). [CrossRef]  

14. A. Bakar, T. Glass, H. Tee, F. Alam, and M. Legg, “Accurate visible light positioning using multiple-photodiode receiver and machine learning,” IEEE Trans. Instrum. Meas. 70, 1–12 (2021). [CrossRef]  

15. S. Xu, C. Chen, Y. Wu, X. Wang, and F. Wei, “Adaptive Residual Weighted K-Nearest Neighbor Fingerprint Positioning Algorithm Based on Visible Light Communication,” Sensors 20(16), 4432 (2020). [CrossRef]  

16. Y. Chen, W. Guan, J. Li, and H. Song, “Indoor Real-Time 3-D Visible Light Positioning System Using Fingerprinting and Extreme Learning Machine,” IEEE Access 8, 13875–13886 (2020). [CrossRef]  

17. L. Hsu, D. Tsai, H. M. Chen, Y. Chang, Y. Liu, C. Chow, S. Song, and C. Yeh, “Using Data Pre-Processing and Convolutional Neural Network (CNN) to Mitigate Light Deficient Regions in Visible Light Positioning (VLP) Systems,” J. Lightwave Technol. 40(17), 5894–5900 (2022). [CrossRef]  

18. D. András, D. Gyula, R. Tamás, and A. János, “Processing indoor positioning data by goal-oriented supervised fuzzy clustering for tool management,” J. Manuf. Syst. 63, 15–22 (2022). [CrossRef]  

19. K. Wang, Y. Liu, and Z. Hong, “RSS-based visible light positioning based on channel state information,” Opt. Express 30(4), 5683–5699 (2022). [CrossRef]  

20. K. Wang and X. Huang, “Simulation results of CSI-based sliding window fingerprinting method for VLP,” figshare, (2022) https://doi.org/10.6084/m9.figshare.21671312.

Supplementary Material (1)

Data File 1: This Data File contains simulation data for different fingerprinting methods for VLP, including the traditional RSS algorithm, the weighted k-nearest neighbor (WKNN) algorithm, the adaptive residual weighted k-nearest neighbor (ARWKNN) algorithm, and the proposed SWF and W-SWF methods.

