Received signal strength assisted perspective-three-point algorithm for indoor visible light positioning

Abstract

In this paper, a received signal strength assisted perspective-three-point positioning algorithm (R-P3P) is proposed for visible light positioning (VLP) systems. Due to the directional propagation of visible light, the orientations of light-emitting diodes (LEDs) and receivers can seriously affect the positioning accuracy. To circumvent this challenge, R-P3P is proposed to mitigate the limitation on the LEDs’ and receiver’s orientations in VLP systems. The basic idea of R-P3P is to jointly utilize visual and strength information to estimate the receiver position using 3 LEDs regardless of the orientations of the LEDs and the receiver. Simulation results show that R-P3P can achieve positioning accuracy within 10 cm over 70% of an indoor area with low complexity.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Indoor positioning has attracted increasing attention recently due to its numerous applications, including indoor navigation, robot movement control and advertisements in shopping malls. In this research field, visible light positioning (VLP) is one of the most promising technologies due to its high accuracy and low cost [1,2]. Visible light has strong directionality and low multipath interference, and thus VLP can achieve high positioning accuracy [2]. Additionally, VLP utilizes light-emitting diodes (LEDs) as transmitters. Benefiting from the increasing market share of LEDs, VLP has a relatively low infrastructure cost [2].

VLP systems typically employ photodiodes (PDs) or cameras as receivers. Positioning algorithms using PDs include proximity [3], fingerprinting [4], time of arrival (TOA) [5], angle of arrival (AOA) [6] and received signal strength (RSS) [7,8]. Positioning algorithms using cameras are termed image sensing [9]. Proximity is the simplest technique, but it only provides a coarse location based on the received signal from a single LED with a unique identification code. Fingerprinting algorithms can achieve enhanced accuracy, at the high cost of building and updating a database. TOA and AOA algorithms require complicated hardware implementation [1]. In contrast, RSS and image sensing algorithms are the most widely used methods due to their high accuracy and moderate cost [1].

However, the RSS and image sensing algorithms also have their own inherent limitations. In particular, RSS algorithms determine the position of the receiver based on the power of the received signal from at least 3 LEDs, and they have the following limitations. 1) RSS algorithms constrain the orientation of the LEDs, which are typically required to face vertically downwards [8,10]. However, when the ceiling is not strictly even and the LEDs have slight orientation deviations, the accuracy of RSS algorithms can be seriously degraded. 2) Additionally, RSS algorithms require the receiver to face vertically upward toward the ceiling [7,8,11], which is inflexible for portable devices. A slight perturbation of the receiver can affect the positioning accuracy significantly [12]. The work in [7] exploits additional sensors to measure the receiver orientation; however, this introduces measurement errors, which further impair the positioning accuracy. In summary, RSS algorithms restrict the orientations of the LEDs and the receiver, which constrains their applications.

As for image sensing algorithms, they determine the receiver position by analyzing the geometric relations between 3 dimensional (3D) LEDs and their 2 dimensional (2D) projections on the image plane. Image sensing algorithms can be classified into two types: single-view geometry and vision triangulation [1]. Single-view geometry methods exploit a single known camera to capture the image of multiple LEDs [13], while vision triangulation methods exploit multiple known cameras for 3D position measurement [14]. Nowadays, most popular devices such as smartphones have only a single front camera due to limited space. Therefore, single-view geometry methods can be applied in more scenarios. Perspective-n-point (PnP) is a typical single-view geometry algorithm that has been extensively studied [9,15,16]. However, PnP algorithms require at least 4 LEDs to obtain a deterministic 3D position [15]. This means that PnP algorithms cannot be used in scenarios where fewer than 4 LEDs are deployed. Additionally, when the LEDs are deployed sparsely, PnP algorithms also cannot be used, since the limited field of view (FoV) makes it difficult for the camera to capture enough LEDs for positioning.

To address the problems in both the RSS and the PnP algorithms, in our previous work [12], we proposed a camera-assisted received signal strength ratio algorithm (CA-RSSR). CA-RSSR exploits both the strength and visual information of visible light, and it achieves centimeter-level 2D positioning accuracy with 3 LEDs regardless of the receiver orientation and without any additional sensors. However, CA-RSSR still requires the LEDs to face vertically downwards. Additionally, CA-RSSR uses the non-linear least squares (NLLS) method for positioning, so its accuracy depends on the starting values of the NLLS estimator and its complexity is increased. In addition, CA-RSSR requires at least 5 LEDs to achieve 3D positioning, which is even more than the number required by PnP algorithms. Based on CA-RSSR, we then proposed an enhanced CA-RSSR algorithm (eCA-RSSR) to reduce the complexity and the required number of LEDs [17]. However, eCA-RSSR still limits the LEDs’ orientation. Therefore, a VLP algorithm with wide applicability remains to be developed.

Against the aforementioned background, we propose a novel RSS assisted perspective-three-point algorithm (R-P3P). R-P3P first exploits the visual information captured by the camera to estimate the incidence angles of visible light based on single-view geometry theory. Then, R-P3P estimates the distances between the LEDs and the receiver based on the law of cosines and Wu-Ritt’s zero decomposition method, as in typical P3P algorithms [18]. Due to the positioning principle of P3P algorithms, we can obtain up to four solution sets for the distances, where only one solution set is the desired one. Next, based on all solution sets of the distances, the estimated incidence angles and the strength information captured by the PD, four solution sets for the irradiance angles of visible light can be obtained. Based on the semi-angles of the LEDs, the desired solution set of irradiance angles can be determined by a simple method, and then the desired distances between the LEDs and the receiver can be determined. Finally, based on the estimated distances, the position of the receiver is obtained by the linear least squares (LLS) method. Therefore, compared with CA-RSSR and eCA-RSSR, R-P3P no longer relies on RSSRs to estimate the distances, and hence does not limit the LEDs’ orientation. Additionally, R-P3P avoids the potential side effect of the starting values of the NLLS method and requires a lower computation cost than CA-RSSR. On the other hand, compared with PnP algorithms, R-P3P only requires 3 LEDs for 3D positioning. In this way, R-P3P can achieve wider applications than CA-RSSR and PnP algorithms. Like CA-RSSR, R-P3P requires both a PD and a camera for positioning. Nowadays, both the PD and the camera are essential parts of popular devices such as smartphones, indicating that R-P3P can be easily implemented in such devices [1]. Simulation results show that R-P3P can achieve positioning accuracy within 10 cm over 70% of the indoor area with low complexity regardless of the orientations of LEDs and receivers.

The rest of the paper is organized as follows. Section 2 introduces the system model. The proposed positioning algorithm is detailed in Section 3. Simulation results are presented in Section 4. Finally, the paper is concluded in Section 5.

2. System model

The system diagram is illustrated in Fig. 1. Four coordinate systems are utilized for positioning, which are the pixel coordinate system (PCS) $o^{\textrm {p}}-u^{\textrm {p}}v^{\textrm {p}}$ on the image plane, the image coordinate system (ICS) $o^{\textrm {i}}-x^{\textrm {i}}y^{\textrm {i}}$ on the image plane, the camera coordinate system (CCS) $o^{\textrm {c}}-x^{\textrm {c}}y^{\textrm {c}}z^{\textrm {c}}$ and the world coordinate system (WCS) $o^{\textrm {w}}-x^{\textrm {w}}y^{\textrm {w}}z^{\textrm {w}}$. As shown in Fig. 1, different colors represent different coordinate systems. The image plane in Fig. 1 is a virtual plane. In this paper, we utilize a standard pinhole camera. The actual image plane is behind the camera optical center (i.e., the pinhole), $o^{\textrm {c}}$. In order to show the geometric relations more clearly, the virtual image plane is set up in front of $o^{\textrm {c}}$ as done in many papers [16,18,19]. In particular, the virtual image plane and the actual image plane are centrally symmetric, and $o^{\textrm {c}}$ is the center of symmetry. In PCS, ICS and CCS, the axes $u^{\textrm {p}}$, $x^{\textrm {i}}$ and $x^{\textrm {c}}$ are parallel to each other and, similarly, $v^{\textrm {p}}$, $y^{\textrm {i}}$ and $y^{\textrm {c}}$ are also parallel to each other. Additionally, $o^{\textrm {p}}$ is at the upper left corner of the image plane and $o^{\textrm {i}}$ is at the center of the image plane. In addition, $o^{\textrm {i}}$ is termed as the principal point, whose pixel coordinate is $\left (u_{0},v_{0}\right )$. In contrast, $o^{\textrm {c}}$ is termed as the camera optical center. Furthermore, $o^{\textrm {i}}$ and $o^{\textrm {c}}$ are on the optical axis. The distance between $o^{\textrm {c}}$ and $o^{\textrm {i}}$ is the focal length $f$, and thus the $z$-coordinate of the image plane in CCS is $z^{\mathrm {c}}=f$.

Fig. 1. The system diagram of the VLP system.

In the proposed positioning system, $K\left (K\geq 3\right )$ LEDs are mounted on the ceiling. The receiver is composed of a PD and a standard pinhole camera, and they are close to each other. As shown in Fig. 1, $\mathbf {n}_{\textrm {LED},i}^{\textrm {w}}$ denotes the unknown unit normal vector of the $i$th ($i\in \left \{ 1,2,\ldots ,K\right \}$) LED, $\mathrm {T}_{i}$, in the WCS. Additionally, $\mathbf {s}_{i}^{\textrm {w}}=\left (x_{i}^{\textrm {w}},y_{i}^{\textrm {w}},z_{i}^{\textrm {w}}\right )$ is the coordinate of $\mathrm {T}_{i}$ in the WCS, which is assumed to be known at the transmitter and can be obtained by the receiver through visible light communications (VLC). In contrast, $\mathbf {r}^{\textrm {w}}=\left (x_{r}^{\textrm {w}},y_{r}^{\textrm {w}},z_{r}^{\textrm {w}}\right )$ is the world coordinate of the receiver to be positioned. In addition, $\phi _{i}$ and $\psi _{i}$ are the irradiance angle and the incidence angle of the visible light, respectively. Furthermore, $\mathbf {w}_{i}^{\textrm {c}}$ and $\mathbf {d}_{i}^{\textrm {w}}$ denote the vectors from the receiver to $\mathrm {T}_{i}$ in the CCS and the WCS, respectively.

LEDs with Lambertian radiation pattern are considered. The line of sight (LoS) link is the dominant component in the optical channel, and thus this work only considers the LoS channel for simplicity [20]. The channel direct current (DC) gain between the $i$th LED and the PD is given by [21]

$$H_{i}=\frac{\left(m+1\right)A}{2\pi d_{i}^{2}}\cos^{m}\left(\phi_{i}\right)T_{s}\left(\psi_{i}\right)g\left(\psi_{i}\right)\cos\left(\psi_{i}\right)$$
where $m$ is the Lambertian order of the LED, given by $m=\frac {-\ln 2}{\ln \left (\cos \Phi _{\frac {1}{2}}\right )}$, where $\Phi _{\frac {1}{2}}$ denotes the semi-angle of the LED. In addition, $d_{i}=\left \Vert \mathbf {d}_{i}^{\textrm {w}}\right \Vert =\left \Vert \mathbf {s}_{i}^{\textrm {w}}-\mathbf {r}^{\textrm {w}}\right \Vert$, where $\left \Vert \cdot \right \Vert$ denotes the Euclidean norm of a vector, $A$ is the physical area of the detector at the PD, $T_{s}\left (\psi _{i}\right )$ is the gain of the optical filter, and $g\left (\psi _{i}\right )=\begin {cases} \frac {n^{2}}{\sin ^{2}\Psi _{c}}, & 0\leq \psi _{i}\leq \Psi _{c}\\ 0, & \psi _{i}> \Psi _{c} \end {cases}$ is the gain of the optical concentrator, where $n$ is the refractive index of the optical concentrator and $\Psi _{c}$ is the FoV of the PD. The received optical power from the $i$th LED can be expressed as
$$P_{r,i}=P_{t}H_{i}=\frac{C}{d_{i}^{2}}\cos^{m}\left(\phi_{i}\right)\cos\left(\psi_{i}\right)$$
where $P_{t}$ denotes the optical power of the LEDs and $C=P_{t}\frac {\left (m+1\right )A}{2\pi }T_{s}\left (\psi _{i}\right )g\left (\psi _{i}\right )$ is a constant. The signal-to-noise ratio (SNR) is calculated as $SNR_{i}=10\log _{10}\frac {\left (P_{r,i}R_{p}\right )^{2}}{\sigma _{\mathrm {noise},i}^{2}}$, where $R_{p}$ is the efficiency of the optical to electrical conversion and $\sigma _{\mathrm {noise},i}^{2}$ is the total noise variance.
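As an illustration of the channel model above, the following Python sketch evaluates the Lambertian order, the channel DC gain of Eq. (1) and the received power of Eq. (2) for a single LoS link. The numerical values (detector area, concentrator parameters, transmit power and link geometry) are illustrative assumptions, not the parameters of Table 1.

```python
import numpy as np

def lambertian_order(semi_angle_deg):
    """Lambertian order m from the LED semi-angle."""
    return -np.log(2) / np.log(np.cos(np.radians(semi_angle_deg)))

def channel_dc_gain(d, phi, psi, m, A=1e-4, Ts=1.0, n=1.5, fov_deg=70.0):
    """Channel DC gain H_i of Eq. (1) for one LoS link.

    d: LED-receiver distance [m]; phi, psi: irradiance/incidence angles [rad].
    """
    fov = np.radians(fov_deg)
    if psi > fov:                        # outside the concentrator FoV
        return 0.0
    g = n**2 / np.sin(fov)**2            # concentrator gain g(psi)
    return (m + 1) * A / (2 * np.pi * d**2) * np.cos(phi)**m * Ts * g * np.cos(psi)

# Example link (assumed geometry): LED 2.5 m above the receiver, 1 m lateral offset.
m = lambertian_order(60.0)
d = np.hypot(2.5, 1.0)
phi = psi = np.arctan2(1.0, 2.5)         # LED and receiver both face vertically here
P_t = 3.0                                # transmit optical power [W] (assumed)
P_r = P_t * channel_dc_gain(d, phi, psi, m)
print(f"m = {m:.2f}, d = {d:.3f} m, P_r = {P_r * 1e6:.1f} uW")
```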

3. Received signal strength assisted perspective-three-point algorithm (R-P3P)

In this section, a novel visible light positioning algorithm, termed R-P3P, is proposed. R-P3P consists of four steps. In the first step, the incidence angles are estimated from the visual information captured by the camera based on single-view geometry theory. Then, the distances between the LEDs and the receiver are obtained based on the law of cosines and Wu-Ritt’s zero decomposition method, as in typical P3P algorithms [18]. Due to the positioning principle of P3P algorithms, we can obtain up to four solution sets for the distances, where only one solution set yields the desired position. Next, based on all solution sets for the distances, the estimated incidence angles and the RSS received by the PD, up to four solution sets for the irradiance angles can be obtained. Based on the semi-angles of the LEDs, the desired solution set for the irradiance angles can be obtained by a simple method, and then the desired distances between the LEDs and the receiver can be obtained. Finally, based on the estimated distances, the position of the receiver is estimated by the LLS algorithm.

3.1 Incidence angle estimation

In the pinhole camera, the pixel coordinate of the projection of the $i$th LED is denoted by $\mathbf {s}_{i}^{\textrm {p}}=\left (u_{i}^{\textrm {p}},v_{i}^{\textrm {p}}\right )$, and this coordinate can be obtained by the camera through image processing [9]. Based on the single-view geometry theory, the $i$th LED, the projection of the $i$th LED onto the image plane and $o^{\textrm {c}}$ are on the same straight line. Therefore, the camera coordinates of the $i$th LED, $\mathbf {s}_{i}^{\textrm {c}}=\left (x_{i}^{\textrm {c}},y_{i}^{\textrm {c}},z_{i}^{\textrm {c}}\right )$, can be expressed as follows

$$\left(\mathbf{s}_{i}^{\textrm{c}}\right)^{\textrm{T}}=\begin{bmatrix}x_{i}^{\textrm{c}}\\ y_{i}^{\textrm{c}}\\ z_{i}^{\textrm{c}} \end{bmatrix}=\mathbf{M^{-1}}\cdot z_{i}^{\textrm{c}}\begin{bmatrix}u_{i}^{\textrm{p}}\\ v_{i}^{\textrm{p}}\\ 1 \end{bmatrix}$$
where $\left (\cdot \right )^{\textrm {T}}$ denotes matrix transposition. Additionally, $\mathbf {M}=\begin {bmatrix}f_{u} & 0 & u_{0}\\ 0 & f_{v} & v_{0}\\ 0 & 0 & 1 \end {bmatrix}$ is the intrinsic parameter matrix of the camera, which can be calibrated in advance [16]. In addition, $f_{u}=\frac {f}{d_{x}}$ and $f_{v}=\frac {f}{d_{y}}$ denote the focal ratios along the $u$ and $v$ axes in pixels, respectively. Furthermore, $d_{x}$ and $d_{y}$ are the physical sizes of each pixel in the $x$ and $y$ directions on the image plane, respectively.

In CCS, the vector from $o^{\textrm {c}}$ to the $i$th LED, $\mathbf {w}_{i}^{\textrm {c}}$, can be expressed as

$$\mathbf{w}_{i}^{\textrm{c}}=\mathbf{s}_{i}^{\textrm{c}}-\mathbf{o}^{\textrm{c}}=\left(x_{i}^{\textrm{c}},y_{i}^{\textrm{c}},z_{i}^{\textrm{c}}\right)$$
where $\mathbf {o}^{\textrm {c}}=\left (0^{\textrm {c}},0^{\textrm {c}},0^{\textrm {c}}\right )$ is the camera coordinate of $o^{\textrm {c}}$. The estimated incidence angle of the $i$th LED, $\psi _{i,\textrm {est}}$, can be calculated as
$$\psi_{i,\textrm{est}}=\arccos\frac{\mathbf{w}_{i}^{\textrm{c}}\cdot\mathbf{n}_{\textrm{cam}}^{\textrm{c}}}{\left\Vert \mathbf{w}_{i}^{\textrm{c}}\right\Vert }$$
where $\mathbf {n}_{\textrm {cam}}^{\textrm {c}}=\left (0^{\textrm {c}},0^{\textrm {c}},1^{\textrm {c}}\right )$ is the unit normal vector of the camera in CCS. Since the absolute value of $\psi _{i,\textrm {est}}$ remains the same in different coordinate systems, the estimated incidence angles in WCS are also given by Eq. (5). In this way, R-P3P is able to obtain the incidence angles regardless of the receiver orientations.
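A minimal Python sketch of the incidence-angle estimation in Eqs. (3)-(5) is given below. The intrinsic parameters match the calibrated values used in the simulations of Section 4 ($\left (u_{0},v_{0}\right )=(320,240)$, $f_{u}=f_{v}=800$); the example pixel coordinate is an arbitrary assumption.

```python
import numpy as np

# Camera intrinsic matrix M with the values used in Section 4.
M = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def incidence_angle(u, v, M):
    """Estimated incidence angle psi_i,est for an LED projected at pixel (u, v)."""
    # Back-project the pixel via Eq. (3); the unknown scale z_i^c cancels in the
    # angle computation, so it can be set to 1 here.
    w_c = np.linalg.inv(M) @ np.array([u, v, 1.0])
    n_cam = np.array([0.0, 0.0, 1.0])     # unit normal of the camera in CCS
    return np.arccos(w_c @ n_cam / np.linalg.norm(w_c))   # Eq. (5)

print(np.degrees(incidence_angle(480.0, 360.0, M)))  # LED imaged off-center (assumed pixel)
```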

3.2 Distance estimation

Figure 2 shows the geometric relations among three LEDs and the camera. As shown in Fig. 2, $\textrm {T}_{i}$ ($i\in \left \{ 1,2,3\right \}$) is the $i$th LED and $o^{\textrm {c}}$ is the camera optical center. The distance between $\textrm {T}_{i}$ and $\textrm {T}_{j}$, $d_{ij}$ ($i,j\in \left \{ 1,2,3\right \} ,i\neq j$), is known in advance. Additionally, $d_{i}$ ($i\in \left \{ 1,2,3\right \}$) is the distance between the $i$th LED and the receiver, which is to be determined in this subsection. In addition, $\mathbf {w}_{i}^{\textrm {c}}$ ($i\in \left \{ 1,2,3\right \}$) calculated by Eq. (4), are the vectors from the receiver to $\textrm {T}_{i}$ in CCS. Furthermore, $\alpha _{ij}$ ($i,j\in \left \{ 1,2,3\right \} ,i\neq j$) is the angle between $\mathbf {w}_{i}^{\textrm {c}}$ and $\mathbf {w}_{j}^{\textrm {c}}$, i.e., $\alpha _{ij}=\angle \textrm {T}_{i}o^{\textrm {c}}\textrm {T}_{j}$, which can be calculated as

$$\alpha_{ij}=\arccos\frac{\mathbf{w}_{i}^{\textrm{c}}\cdot\mathbf{w}_{j}^{\textrm{c}}}{\left\Vert \mathbf{w}_{i}^{\textrm{c}}\right\Vert \left\Vert \mathbf{w}_{j}^{\textrm{c}}\right\Vert }.$$

Fig. 2. The geometric relations among LEDs and the camera optical center for the use of the law of cosines.

We define $\triangle \textrm {T}_{i}o^{\textrm {c}}\textrm {T}_{j}$ as the triangle constructed by the vertices $\textrm {T}_{i}$, $o^{\textrm {c}}$ and $\textrm {T}_{j}$. According to the law of cosines, in the triangle $\triangle \textrm {T}_{i}o^{\textrm {c}}\textrm {T}_{j}$, we have

$$d_{i}^{2}+d_{j}^{2}-2d_{i}d_{j}\cos\alpha_{ij}=d_{ij}^{2}.$$
To further calculate the distances between the LEDs and the camera, we simplify Eq. (7) by variable transformation. Let
$$\begin{cases} {} r=2\cos\alpha_{12}\\ q=2\cos\alpha_{13}\\ p=2\cos\alpha_{23}, \end{cases}$$
$$\begin{cases} {} x=\frac{d_{1}}{d_{3}}\\ y=\frac{d_{2}}{d_{3}}, \end{cases}$$
and
$$\begin{cases} {} t=\frac{d_{12}^{2}}{d_{3}^{2}} \\ a=\frac{d_{23}^{2}}{d_{12}^{2}}=\frac{d_{23}^{2}}{td_{3}^{2}} \\ b=\frac{d_{13}^{2}}{d_{12}^{2}}=\frac{d_{13}^{2}}{td_{3}^{2}}. \end{cases}$$
Since $d_{3}\neq 0$, substituting Eqs. (8)–(15) into Eq. (7), we can obtain the following equation system which is equivalent to Eq. (7)
$$\begin{cases} {} t=x^{2}+y^{2}-xyr \\ bt=x^{2}+1-xq \\ at=1+y^{2}-yp.\end{cases}$$
We have $r<2$ from Eq. (8) since $0<\alpha _{12}<\pi$. Then, we have $t>0$ from Eq. (16) since $t=x^{2}+y^{2}-xyr>\left (x-y\right )^{2}\geq 0$. Thus, $d_{3}$ can be uniquely determined by $d_{3}=\frac {d_{12}}{\sqrt {t}}=\frac {d_{12}}{\sqrt {x^{2}+y^{2}-xyr}}$. The calculation of $x$ and $y$ proceeds as follows. Eliminating $t$ by substituting Eq. (16) into Eqs. (17) and (18), we have
$$\begin{cases} {} \left(1-a\right)y^{2}-ax^{2}+axyr-yp+1=0 \\ \left(1-b\right)x^{2}-by^{2}+bxyr-xq+1=0. \end{cases}$$
Following the same method in [18], $x$ and $y$ can be obtained by solving Eqs. (19) and (20) based on Wu-Ritt’s zero decomposition method [22]. Then, with $x$ and $y$, $d_{i}$ ($i\in \left \{1,2,3\right \}$) can be expressed as follows
$$\begin{cases} {} d_{1}=xd_{3} \\ d_{2}=yd_{3} \\ d_{3}=\frac{d_{12}}{\sqrt{x^{2}+y^{2}-xyr}}. \end{cases}$$

Note that four candidate solution sets for $x$ and $y$ can be obtained by solving Eqs. (19) and (20), where only one solution set is composed of the desired $x$ and $y$ [18]. This means that we can obtain four solution sets for $d_{i}$ ($i \in \{ 1,2,3\}$) by solving Eqs. (21)–(23). P3P methods require an extra beacon, i.e., a fourth beacon, to determine the desired $d_{i}$ ($i \in \{ 1,2,3\}$) from the four solution sets [15,18,23]. In contrast, in the next subsection we obtain the right solution set based on the RSS captured by the PD using only 3 LEDs, regardless of the LED orientations.
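The sketch below illustrates the distance-estimation step. Rather than reproducing Wu-Ritt’s zero decomposition from [18], it solves the two quadratics of Eqs. (19)-(20) symbolically with SymPy, which yields the same (up to four) candidate solution sets; this substitution, and the filtering of non-physical roots, are implementation assumptions.

```python
import numpy as np
import sympy as sp

def p3p_candidate_distances(w_c, d12, d13, d23):
    """Candidate (d1, d2, d3) sets from the bearing vectors w_c[0..2] (Eq. (4))
    and the known inter-LED distances d12, d13, d23."""
    cos_a = lambda i, j: float(w_c[i] @ w_c[j] /
                               (np.linalg.norm(w_c[i]) * np.linalg.norm(w_c[j])))
    r, q, p = 2 * cos_a(0, 1), 2 * cos_a(0, 2), 2 * cos_a(1, 2)    # Eqs. (8)-(10)
    a, b = d23**2 / d12**2, d13**2 / d12**2                         # Eqs. (14)-(15)

    x, y = sp.symbols('x y')
    eqs = [(1 - a) * y**2 - a * x**2 + a * x * y * r - y * p + 1,   # Eq. (19)
           (1 - b) * x**2 - b * y**2 + b * x * y * r - x * q + 1]   # Eq. (20)

    candidates = []
    for s in sp.solve(eqs, [x, y], dict=True):
        xv, yv = sp.N(s[x]), sp.N(s[y])
        if not (xv.is_real and yv.is_real and xv > 0 and yv > 0):
            continue                        # keep only real, positive distance ratios
        xv, yv = float(xv), float(yv)
        d3 = d12 / np.sqrt(xv**2 + yv**2 - xv * yv * r)             # Eq. (23)
        candidates.append((xv * d3, yv * d3, d3))                   # Eqs. (21)-(22)
    return candidates
```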

3.3 Irradiance angle estimation

In RSS algorithms, it is difficult to calculate the irradiance angles when the orientations of the receiver and the LEDs are not restricted. In this subsection, we calculate the irradiance angles without restricting the orientations of either the receiver or the LEDs.

According to Eq. (2), the RSS captured by the PD from the $i$th ($i\in \left \{ 1,2,3\right \}$) LED can be expressed as

$$P_{r,i}=\frac{C}{d_{i}^{2}}\cos^{m}\left(\phi_{i}\right)\cos\left(\psi_{i}\right).$$
Since the distance between the PD and the camera, $d_{\textrm {PC}}$, is much smaller than the distances between the LEDs and the receiver, we omit $d_{\textrm {PC}}$ in the algorithm. However, the effect of $d_{\textrm {PC}}$ on R-P3P’s performance will be evaluated in the simulations. Therefore, with the incidence angle estimated by Eq. (5), we can obtain the irradiance angle $\phi _{i}$ ($i\in \left \{ 1,2,3\right \}$) as follows
$$\cos\left(\phi_{i}\right)=\left(\frac{P_{r,i}\cdot d_{i}^{2}}{C\cdot\cos\left(\psi_{i,\mathrm{est}}\right)}\right)^{\frac{1}{m}}.$$
With the four solution sets of $d_{i}$ ($i\in \left \{ 1,2,3\right \}$) obtained by Eqs. (21)–(23), we can obtain four solution sets for $\phi _{i}$ ($i\in \left \{ 1,2,3\right \}$) according to Eq. (25). Fortunately, the semi-angles of the LEDs, $\Phi _{\frac {1}{2}}$, are known in advance. This means that the right solution set of $\phi _{i}$ ($i\in \left \{ 1,2,3\right \}$) has to satisfy the following constraints
$$\cos\left(\Phi_{\frac{1}{2}}\right)\leq\cos\left(\phi_{i}\right)\leq\cos\left(0\right).$$
The estimated $\phi _{i}$ ($i\in \left \{ 1,2,3\right \}$) that do not satisfy Eq. (26) can be easily eliminated. However, considering the effect of noise and $d_{\textrm {PC}}$, there may be no solution set of $\phi _{i}$ ($i\in \left \{1,2,3\right \}$) satisfying Eq. (26), or there may be more than one solution set of $\phi _{i}$ ($i\in \left \{1,2,3\right \}$) satisfying Eq. (26). In the former case, we choose the solution set of $\phi _{i}$ ($i\in \left \{1,2,3\right \}$) closest to satisfying Eq. (26) as the final solution set. In the latter case, we randomly choose one solution set of $\phi _{i}$ ($i\in \left \{1,2,3\right \}$) from all the solution sets that satisfy Eq. (26). These strategies introduce positioning errors because Eq. (26) cannot guarantee finding the unique $d_{i}$ ($i\in \left \{ 1,2,3\right \}$). Fortunately, the probability of such errors is very low, which will be verified by the simulations in Section 4. In this way, R-P3P mitigates the limitation on LED orientations present in RSS algorithms.

Based on the estimated $\phi _{i}$ ($i\in \left \{ 1,2,3\right \}$), unique $d_{i}$ ($i\in \left \{ 1,2,3\right \}$) can be further determined. In this way, we can estimate the distances between the LEDs and the receiver using only 3 LEDs regardless of the orientations of LEDs and receivers.
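The disambiguation described above can be sketched as follows: for each candidate distance set, the implied $\cos \left (\phi _{i}\right )$ of Eq. (25) is computed and checked against the semi-angle constraint of Eq. (26), and if no set (or more than one set) is feasible, the set with the smallest total constraint violation is kept. The violation metric is an assumption, since the paper only states that the set closest to satisfying Eq. (26) is chosen.

```python
import numpy as np

def select_distance_set(candidates, P_r, psi_est, C, m, semi_angle_deg):
    """Pick one (d1, d2, d3) set using the RSS values P_r and the incidence
    angles psi_est (rad) of Eq. (5); C and m are the constants of Eq. (2)."""
    cos_min = np.cos(np.radians(semi_angle_deg))
    best, best_violation = None, np.inf
    for dists in candidates:
        # Eq. (25): cos(phi_i) implied by the RSS and this candidate distance set.
        cos_phi = [(P_r[i] * dists[i]**2 / (C * np.cos(psi_est[i])))**(1.0 / m)
                   for i in range(3)]
        # Total violation of Eq. (26); feasible sets score exactly zero.
        violation = sum(max(0.0, cos_min - c) + max(0.0, c - 1.0) for c in cos_phi)
        if violation < best_violation:
            best, best_violation = dists, violation
    return best
```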

3.4 Position estimation

The distances between the LEDs and the receiver obtained in Subsection 3.3 can be expressed as follows

$$\begin{cases} {} d_{1}=\left\Vert \mathbf{s}_{1}^{\textrm{w}}-\mathbf{r}^{\textrm{w}}\right\Vert _{\textrm{est}} \\ d_{2}=\left\Vert \mathbf{s}_{2}^{\textrm{w}}-\mathbf{r}^{\textrm{w}}\right\Vert _{\textrm{est}} \\ d_{3}=\left\Vert \mathbf{s}_{3}^{\textrm{w}}-\mathbf{r}^{\textrm{w}}\right\Vert _{\textrm{est}}\textrm{.} \end{cases}$$
In practice, LEDs are usually deployed at the same height (i.e., $z_{1}^{\textrm {w}}=z_{2}^{\textrm {w}}=z_{3}^{\textrm {w}}$) and hence R-P3P can estimate the 2D position of the receiver $\left (x_{r}^{\textrm {w}},y_{r}^{\textrm {w}}\right )$ based on the following standard LLS estimator
$$\mathbf{\hat{X}=(A^{\mathrm{T}}A)^{\mathrm{-1}}A^{\mathrm{T}}b}$$
where $\hat {\mathbf {X}}=\begin {bmatrix}x_{r,\textrm {est}}^{\textrm {w}}\\ y_{r,\textrm {est}}^{\textrm {w}}\end {bmatrix}$ is the estimate of $\mathbf {X}=\begin {bmatrix}x_{r}^{\textrm {w}}\\ y_{r}^{\textrm {w}} \end {bmatrix}$. Additionally,
$$\mathbf{A}=\begin{bmatrix}x_{2}^{\textrm{w}}-x_{1}^{\textrm{w}} & y_{2}^{\textrm{w}}-y_{1}^{\textrm{w}}\\ x_{3}^{\textrm{w}}-x_{1}^{\textrm{w}} & y_{3}^{\textrm{w}}-y_{1}^{\textrm{w}} \end{bmatrix},$$
and
$$\mathbf{b}=\frac{1}{2}\begin{bmatrix}d_{1}^{2}-d_{2}^{2}+\left(x_{2}^{\textrm{w}}\right)^{2}+\left(y_{2}^{\textrm{w}}\right)^{2}-\left(x_{1}^{\textrm{w}}\right)^{2}-\left(y_{1}^{\textrm{w}}\right)^{2}\\ d_{1}^{2}-d_{3}^{2}+\left(x_{3}^{\textrm{w}}\right)^{2}+\left(y_{3}^{\textrm{w}}\right)^{2}-\left(x_{1}^{\textrm{w}}\right)^{2}-\left(y_{1}^{\textrm{w}}\right)^{2} \end{bmatrix}.$$
Then, since $z_{1}^{\textrm {w}}=z_{2}^{\textrm {w}}=z_{3}^{\textrm {w}}$, the estimated $z$-coordinate of the receiver, $z_{r,\textrm {est}}^{\textrm {w}}$, can be calculated by substituting Eq. (30) into Eq. (27), which can be expressed as follows
$$z_{r,\textrm{est}}^{\textrm{w}}=z_{1}^{\textrm{w}}\pm\Delta$$
where $\Delta =\sqrt {d_{1}^{2}-\left (x_{1}^{\textrm {w}}-x_{r,\textrm {est}}^{\textrm {w}}\right )^{2}-\left (y_{1}^{\textrm {w}}-y_{r,\textrm {est}}^{\textrm {w}}\right )^{2}}$. Since the distance equation Eq. (27) is quadratic in $z_{r}^{\textrm {w}}$, two candidate $z$-coordinates of the receiver are obtained. However, the ambiguous solution, $z_{r,\textrm {est}}^{\textrm {w}}=z_{1}^{\textrm {w}}+\Delta$, can be easily eliminated, as it implies that the receiver is above the ceiling, which is implausible. In this way, R-P3P can determine the 3D position of the receiver, $\mathbf {r}_{\mathrm {est}}^{\mathrm {w}}=\left (x_{r,\textrm {est}}^{\textrm {w}},y_{r,\textrm {est}}^{\textrm {w}},z_{r,\textrm {est}}^{\textrm {w}}\right )$, using only 3 LEDs with the LLS method, regardless of the orientations of LEDs and receivers. In summary, the R-P3P algorithm is elaborated in Algorithm 1.
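A sketch of the position-estimation step follows: the 2D position is obtained from the LLS estimator $\hat {\mathbf {X}}=\left (\mathbf {A}^{\mathrm {T}}\mathbf {A}\right )^{-1}\mathbf {A}^{\mathrm {T}}\mathbf {b}$ and the $z$-coordinate from $z_{r,\textrm {est}}^{\textrm {w}}=z_{1}^{\textrm {w}}\pm \Delta$, keeping the solution below the LED plane. The example geometry (LED positions and receiver) is an illustrative assumption.

```python
import numpy as np

def estimate_position(s_w, d):
    """s_w: 3x3 array of LED world coordinates (rows); d: estimated distances."""
    x1, y1, z1 = s_w[0]
    A = np.array([[s_w[1, 0] - x1, s_w[1, 1] - y1],
                  [s_w[2, 0] - x1, s_w[2, 1] - y1]])
    b = 0.5 * np.array([d[0]**2 - d[1]**2 + s_w[1, 0]**2 + s_w[1, 1]**2 - x1**2 - y1**2,
                        d[0]**2 - d[2]**2 + s_w[2, 0]**2 + s_w[2, 1]**2 - x1**2 - y1**2])
    xr, yr = np.linalg.lstsq(A, b, rcond=None)[0]    # LLS estimate of (x_r, y_r)
    # Recover z from the first distance equation; keep the solution below the ceiling.
    delta = np.sqrt(max(d[0]**2 - (x1 - xr)**2 - (y1 - yr)**2, 0.0))
    return np.array([xr, yr, z1 - delta])

# Example with three ceiling LEDs at 3 m height and a receiver at (2, 1.5, 1) (assumed).
s_w = np.array([[1.0, 1.0, 3.0], [4.0, 1.0, 3.0], [2.5, 4.0, 3.0]])
r_true = np.array([2.0, 1.5, 1.0])
d = np.linalg.norm(s_w - r_true, axis=1)             # noise-free distances
print(estimate_position(s_w, d))                     # recovers r_true up to numerical error
```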

4. Simulation results and analyses

As R-P3P simultaneously utilizes visual and strength information, a typical PnP algorithm [18] and CA-RSSR [12] are adopted as baseline schemes in this section. The PnP algorithm utilizes the visual information only, while CA-RSSR exploits both visual and strength information.

The system parameters are listed in Table 1. Assume that visible light signals are modulated by on-off keying (OOK). All statistical results are averaged over $10^{5}$ independent runs. For each simulation run, the receiver positions are selected in the room randomly. To reduce the error caused by the channel noise, the received optical power is calculated as the average of 1000 measurements [24]. The pinhole camera is calibrated and has a principal point $\left (u_{0},v_{0}\right )=(320, 240)$, and a focal ratio $f_{u}=f_{v}=800$. The image noise is modeled as a white Gaussian noise having an expectation of zero and a standard deviation of $2$ pixels [25]. Since the image noise affects the pixel coordinate of the LEDs’ projection on the image plane, the pixel coordinate is obtained by processing 10 images for the same position.

Table 1. System parameters.

We evaluate the performance of R-P3P in terms of its coverage, accuracy and computational cost in the 3D-positioning case. We define coverage ratio (CR) of the positioning algorithms as

$$CR=\frac{A_{\textrm{effective}}}{A_{\textrm{total}}}$$
where $A_{\textrm {effective}}$ is the indoor area where the algorithm is feasible and $A_{\textrm {total}}$ is the entire indoor area. Additionally, the positioning error (PE) is used to quantify the accuracy performance which is defined as
$$PE=\left\Vert \mathbf{r}_{\textrm{true}}^{\textrm{w}}-\mathbf{r}_{\textrm{est}}^{\textrm{w}}\right\Vert$$
where $\mathbf {r}_{\textrm {true}}^{\textrm {w}}=\left (x_{r,\textrm {true}}^{\textrm {w}},y_{r,\textrm {true}}^{\textrm {w}},z_{r,\textrm {true}}^{\textrm {w}}\right )$ and $\mathbf {r}_{\textrm {est}}^{\textrm {w}}=\left (x_{r,\textrm {est}}^{\textrm {w}},y_{r,\textrm {est}}^{\textrm {w}},z_{r,\textrm {est}}^{\textrm {w}}\right )$ are the world coordinates of the actual and estimated positions of the receiver, respectively. Furthermore, we use the execution time to evaluate the computational cost. Execution time is commonly used to evaluate positioning complexity [16,19,23,26], and it reflects the actual time required by the algorithms, which is particularly relevant for indoor mobile positioning.
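The two metrics can be computed as in the short sketch below, which uses randomly generated placeholder positions and feasibility flags purely to illustrate the CR and PE definitions above; it does not reproduce the paper's simulation results.

```python
import numpy as np

rng = np.random.default_rng(0)
r_true = rng.uniform([0, 0, 0], [5, 5, 3], size=(1000, 3))       # sample receiver positions (assumed room)
r_est = r_true + rng.normal(scale=0.03, size=r_true.shape)        # placeholder position estimates
feasible = rng.random(1000) < 0.95                                 # placeholder feasibility flags

CR = feasible.mean()                                  # coverage ratio as a sample fraction
PE = np.linalg.norm(r_true - r_est, axis=1)           # per-sample positioning error
print(f"CR = {CR:.2%}, 80th-percentile PE = {np.percentile(PE[feasible], 80) * 100:.1f} cm")
```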

4.1 Coverage performance of R-P3P

Table 2 provides the required number of LEDs for 3D positioning for R-P3P, CA-RSSR and the PnP algorithm. As we can observe, R-P3P requires the fewest LEDs. Figure 3 shows the comparison of the coverage ratio (CR) performance among the three algorithms with the FoV, $\Psi _{c}$, varying from $0{^{\circ }}$ to $80{^{\circ }}$. Additionally, the LEDs tilt with an angle $\theta =0{^{\circ }}$, $\theta =10{^{\circ }}$ and $\theta =30{^{\circ }}$ in Fig. 3(a), Fig. 3(b) and Fig. 3(c), respectively. The positioning samples are chosen along the length, width and height of the room, with a five-centimeter separation from each other. An SNR of 13.6 dB is assumed according to the reliable communication requirement of OOK modulation [20]. As shown in Fig. 3, R-P3P achieves the highest CR for all $\Psi _{c}$ regardless of $\theta$. It performs consistently well from $\Psi _{c}=20{^{\circ }}$ to $\Psi _{c}=80{^{\circ }}$, with the CR exceeding 90% for $\theta =0{^{\circ }}$ and $\theta =10{^{\circ }}$, and exceeding 70% for $\theta =30{^{\circ }}$. The CR of R-P3P is more than 2%, 3% and 5% higher than that of the PnP algorithm for $\theta =0{^{\circ }}$, $\theta =10{^{\circ }}$ and $\theta =30{^{\circ }}$, respectively. Meanwhile, the CR of R-P3P is more than 8%, 10% and 18% higher than that of CA-RSSR for $\theta =0{^{\circ }}$, $\theta =10{^{\circ }}$ and $\theta =30{^{\circ }}$, respectively. As we can observe from Fig. 3, as the tilt angle of the LEDs increases, the CR of all three algorithms decreases, and the CR advantage of R-P3P over the other two algorithms increases. Additionally, the CR of R-P3P is more than 40% for all three values of $\theta$ when $\Psi _{c}=10{^{\circ }}$. In contrast, the PnP algorithm and CA-RSSR can hardly be implemented for $\Psi _{c}=10{^{\circ }}$. In addition, the CR of the three algorithms decreases slightly for large FoVs since the power of the shot noise increases [27].

Fig. 3. The comparison of the 3D-positioning CR performance among R-P3P, CA-RSSR and the PnP algorithm with varying FoVs of the receiver.

Table 2. Required number of LEDs for the positioning schemes.

4.2 Accuracy performance of R-P3P

In this subsection, we evaluate the accuracy performance of R-P3P with respect to the LED orientations, the image noise, and the distance between the camera and the PD on the receiver.

1) Effect of the LED orientations

We first evaluate the effect of LED orientations on the 3D-positioning accuracy of R-P3P, CA-RSSR and the PnP algorithm. CA-RSSR requires the LEDs to face vertically downwards, which may be challenging to satisfy in practice. Therefore, two cases are considered for CA-RSSR: the ideal case where the LEDs face vertically downwards, and the practical case where the LEDs tilt with a random angle perturbation $\theta \leq 5{^{\circ }}$. In contrast, R-P3P and the PnP algorithm can be implemented in both cases, and thus only the practical case is considered for them. The accuracy performance is represented by the cumulative distribution function (CDF) of the PEs. As shown in Fig. 4, R-P3P achieves an 80th percentile accuracy of about $5\:\mathrm {cm}$, which is almost the same as that of the PnP algorithm. This implies that the probability of the situations in which more than one solution set of $\phi _{i}$ ($i\in \left \{ 1,2,\ldots ,K\right \}$) satisfies Eq. (26), or no solution set satisfies Eq. (26), is very low. Therefore, although Eq. (26) is not strict in theory, the accuracy of R-P3P is close to that of the PnP algorithm while requiring fewer LEDs. Additionally, CA-RSSR achieves an 80th percentile accuracy of about 10 cm in the ideal case. However, the practical case of CA-RSSR presents a significant accuracy decline compared with the ideal case. Thus, a slight LED orientation perturbation can impair the accuracy of CA-RSSR significantly.

Fig. 4. The comparison of 3D-positioning accuracy performance among R-P3P, CA-RSSR and the PnP algorithm under the case where LEDs tilt with a random angle perturbation $\theta \leq 5{^{\circ }}$.

Then, we evaluate the 3D-positioning accuracy of R-P3P with varying tilt angles of the LEDs. The performance is represented by the CDF of the PEs, given $\theta =0{^{\circ }}$, $10{^{\circ }}$, $20{^{\circ }}$, $30{^{\circ }}$, $40{^{\circ }}$ and $60{^{\circ }}$. As shown in Fig. 5, R-P3P achieves an 80th percentile accuracy of less than 5 cm for all $\theta$. Therefore, R-P3P can be widely utilized in scenarios where the LEDs have arbitrary orientations. Additionally, the accuracy of R-P3P increases slightly as the tilt angle of the LEDs increases, since the irradiance angles decrease, which increases the received signal power.

Fig. 5. The effect of the tilt angle of LEDs on 3D-positioning accuracy performance of R-P3P.

2) Effect of the image noise

Since R-P3P also exploits visual information, we then evaluate the effect of the image noise on the accuracy performance of R-P3P, CA-RSSR and the PnP algorithm for 3D positioning in the case where the LEDs tilt with a random angle perturbation $\theta \leq 5{^{\circ }}$. The image noise is modeled as white Gaussian noise with zero mean and a standard deviation ranging from 0 to $4$ pixels [25]. The means of the PEs under the image noise are shown in Fig. 6. As shown in Fig. 6, the accuracy of R-P3P is close to that of the PnP algorithm and is much better than that of CA-RSSR. For R-P3P, the mean PE increases from 3 cm to 10 cm as the image noise increases. For the PnP algorithm, the mean PE increases from 0 to 8 cm. In contrast, for CA-RSSR, the mean PE remains at about 72 cm, which is over 50 cm larger than that of R-P3P.

Fig. 6. The comparison of the effect of the image noise on 3D-positioning accuracy performance among R-P3P, CA-RSSR and the PnP algorithm under the case where LEDs tilt with a random angle perturbation $\theta \leq 5{^{\circ }}$.

3) Effect of the distance between the PD and the camera

Since R-P3P exploits the PD and the camera simultaneously, we then evaluate the effect of the distance between the PD and the camera, $d_{\textrm {PC}}$, on the accuracy performance of R-P3P. We compare the 3D-positioning performance of CA-RSSR and R-P3P with varying $d_{\textrm {PC}}$ in the case where the LEDs tilt with a random angle perturbation $\theta \leq 5{^{\circ }}$. This performance is represented by the CDF of the PEs with $d_{\textrm {PC}}=0\;\mathrm {cm}$, $3\;\mathrm {cm}$, $6\;\mathrm {cm}$ and $10\;\mathrm {cm}$. In particular, $d_{\textrm {PC}}=0\;\mathrm {cm}$ indicates the ideal case in which the PD and the camera overlap. As shown in Fig. 7, R-P3P achieves better performance than CA-RSSR. Specifically, R-P3P achieves an 80th percentile accuracy of about 5 cm regardless of $d_{\textrm {PC}}$. In contrast, CA-RSSR only achieves a 40th percentile accuracy of about 30 cm for all $d_{\textrm {PC}}$. As we can observe from Fig. 7, $d_{\textrm {PC}}$ has little effect on the positioning accuracy of R-P3P. This means that R-P3P can be widely used on devices with various $d_{\textrm {PC}}$.

Fig. 7. The comparison of 3D-positioning accuracy performance for R-P3P and CA-RSSR with varying distances between the PD and the camera under the case where LEDs tilt with a random angle perturbation $\theta \leq 5{^{\circ }}$.

4.3 Computational cost

In this subsection, we compare the execution time of R-P3P, CA-RSSR and the PnP algorithm for 3D positioning to evaluate the computational cost. For a fair comparison, all algorithms were implemented in Matlab on a $1.6\:\mathrm {GHz}\times 4$ core laptop. The simulation consists of $10^{5}$ runs. The results are shown in Fig. 8. Since R-P3P estimates the position of the receiver by the LLS method, the computational cost of R-P3P is lower than that of CA-RSSR, and its execution time is shorter than $0.001\:\mathrm {s}$ for almost 100% of the $10^{5}$ runs. Considering a typical indoor walking speed of 1.3 m/s, the execution delay of R-P3P causes only a 0.2 cm positioning error, which is acceptable for most applications. Additionally, the execution time of the PnP algorithm is over $0.002\:\mathrm {s}$ for about 90% of the $10^{5}$ runs, which means the computational cost of R-P3P is less than 50% of that of the PnP algorithm.

Fig. 8. The computational cost of 3D-positioning for R-P3P, CA-RSSR and the PnP algorithm.

5. Conclusion

We proposed a novel indoor positioning algorithm named R-P3P that simultaneously utilizes visual and strength information. By combining visual and strength information, R-P3P mitigates the limitation on the LEDs’ and receiver’s orientations. Additionally, compared with CA-RSSR, R-P3P achieves better accuracy with low complexity due to the use of the LLS method. In addition, R-P3P requires fewer LEDs than PnP algorithms. Simulation results indicate that R-P3P can achieve positioning accuracy within 10 cm over 70% of the indoor area with low complexity regardless of the orientations of LEDs and receivers. Therefore, R-P3P is a promising indoor VLP approach with wide applicability. In the future, we will experimentally implement R-P3P and evaluate it on a dedicated test bed, which will be meaningful for future indoor positioning applications.

Funding

National Natural Science Foundation of China (61871047, 61901047); Natural Science Foundation of Beijing Municipality (4204106); China Postdoctoral Science Foundation (2018M641278).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. T.-H. Do and M. Yoo, “An in-depth survey of visible light communication based positioning systems,” Sensors 16(5), 678 (2016). [CrossRef]  

2. P. Pathak, X. Feng, P. Hu, and P. Mohapatra, “Visible light communication, networking and sensing: Potential and challenges,” IEEE Commun. Surv. Tutorials 17(4), 2047–2077 (2015). [CrossRef]  

3. K. Gligorić, M. Ajmani, D. Vukobratović, and S. Sinanović, “Visible light communications-based indoor positioning via compressed sensing,” IEEE Commun. Lett. 22(7), 1410–1413 (2018). [CrossRef]  

4. F. Alam, M. Chew, T. Wenge, and G. Gupta, “An accurate visible light positioning system using regenerated fingerprint database based on calibrated propagation model,” IEEE Trans. Instrum. Meas. 68(8), 2714–2723 (2019). [CrossRef]  

5. T. Q. Wang, Y. A. Sekercioglu, A. Neild, and J. Armstrong, “Position accuracy of time-of-arrival based ranging using visible light with application in indoor localization systems,” J. Lightwave Technol. 31(20), 3302–3308 (2013). [CrossRef]  

6. S. Cincotta, C. He, A. Neild, and J. Armstrong, “High angular resolution visible light positioning using a quadrant photodiode angular diversity aperture receiver (QADA),” Opt. Express 26(7), 9230–9242 (2018). [CrossRef]  

7. L. Li, P. Hu, C. Peng, G. Shen, and F. Zhao, “Epsilon: A visible light based positioning system,” in Proceedings of 11th USENIX Symposium on Network Systems Design and Implementation, vol. 14 (2014), pp. 331–343.

8. N. Mohammed and M. Abd Elkarim, “Exploring the effect of diffuse reflection on indoor localization systems based on RSSI-VLC,” Opt. Express 23(16), 20297–20313 (2015). [CrossRef]  

9. Y. Li, Z. Ghassemlooy, X. Tang, B. Lin, and Y. Zhang, “A VLC smartphone camera based indoor positioning system,” IEEE Photonics Technol. Lett. 30(13), 1171–1174 (2018). [CrossRef]  

10. L. Wang and C. Guo, “Indoor visible light localization algorithm with multi-directional PD array,” in Proceedings of IEEE Globecom Workshops, (IEEE, 2017), pp. 1–6.

11. Y. Xu, Z. Wang, P. Liu, J. Chen, S. Han, C. Yu, and J. Yu, “Accuracy analysis and improvement of visible light positioning based on vlc system using orthogonal frequency division multiple access,” Opt. Express 25(26), 32618–32630 (2017). [CrossRef]  

12. L. Bai, Y. Yang, C. Guo, C. Feng, and X. Xu, “Camera assisted received signal strength ratio algorithm for indoor visible light positioning,” IEEE Commun. Lett. 23(11), 2022–2025 (2019). [CrossRef]  

13. Z. Yang, Z. Wang, J. Zhang, C. Huang, and Q. Zhang, “Wearables can afford: Light-weight indoor positioning with visible light,” in Proceedings of the 13th Annual International Conference on Mobile Systems, Applications, and Services, (2015), pp. 317–330.

14. M. Rahman, M. Haque, and K. Kim, “High precision indoor positioning using lighting LED and image sensor,” in Proceedings of the 14th International Conference on Computer and Information Technology, (2011), pp. 309–314.

15. V. Lepetit, F. Moreno-Noguer, and P. Fua, “EPnP: An accurate O(n) solution to the PnP problem,” Int. J. Comput. Vis. 81(2), 155–166 (2009). [CrossRef]  

16. L. Kneip, D. Scaramuzza, and R. Siegwart, “A novel parametrization of the perspective-three-point problem for a direct computation of absolute camera position and orientation,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2011), pp. 2969–2976.

17. L. Bai, Y. Yang, C. Feng, C. Guo, and J. Cheng, “A high coverage camera assisted received signal strength ratio algorithm for indoor visible light positioning,” https://arxiv.org/abs/2004.06294.

18. X. Gao, X. Hou, J. Tang, and H. Cheng, “Complete solution classification for the perspective-three-point problem,” IEEE Trans. Pattern Anal. Machine Intell. 25(8), 930–943 (2003). [CrossRef]  

19. A. Masselli and A. Zell, “A new geometric approach for faster solving the perspective-three-point problem,” in Proceedings of the 22nd International Conference on Pattern Recognition, (2014), pp. 2119–2124.

20. T. Komine and M. Nakagawa, “Fundamental analysis for visible-light communication system using LED lights,” IEEE Trans. Consum. Electron. 50(1), 100–107 (2004). [CrossRef]  

21. Y. Yang, Z. Zeng, J. Cheng, C. Guo, and C. Feng, “A relay-assisted OFDM system for VLC uplink transmission,” IEEE Trans. Commun. 67(9), 6268–6281 (2019). [CrossRef]  

22. W. Wu, “Basic principles of mechanical theorem proving in geometries,” J. of Sys. Sci. and Math. Sci 4, 221–252 (1984).

23. S. I. Roumeliotis and T. Ke, “An efficient algebraic solution to the perspective-three-point problem,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2017), pp. 7225–7233.

24. L. Wang, C. Guo, P. Luo, and Q. Li, “Indoor visible light localization algorithm based on received signal strength ratio with multi-directional LED array,” in Proceedings of 2017 IEEE International Conference on Communications Workshops, (IEEE, 2017), pp. 138–143.

25. L. Zhou, Y. Yang, M. Abello, and M. Kaess, “A robust and efficient algorithm for the PnL problem using algebraic distance to approximate the reprojection distance,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33 (2019), pp. 9307–9315.

26. J. Lim, “Ubiquitous 3D positioning systems by LED-based visible light communications,” IEEE Wireless Commun. 22(2), 80–85 (2015). [CrossRef]  

27. K. Cui, G. Chen, Z. Xu, and R. D. Roberts, “Line-of-sight visible light communication system design and demonstration,” in Proceedings of 2010 7th International Symposium on Communication Systems, Networks & Digital Signal Processing, (2010), pp. 621–625.
