
Effect of a vehicle’s mobility on SNR and SINR in vehicular optical camera communication systems

Open Access

Abstract

The widespread use of light-emitting diodes (LEDs) and cameras in vehicular environments provides an excellent opportunity for optical camera communication (OCC) in intelligent transport systems. OCC is a promising candidate for the Internet of Vehicles (IoV), and it uses LEDs as the transmitter and cameras as the receiver. However, the mobility of vehicles has a significant detrimental impact on the OCC system’s performance in vehicular environments. In this paper, a traffic light that uses multiple-input multiple-output (MIMO) technology serves as the transmitter, and the receiver is a camera mounted on a moving vehicle. The impact of vehicle mobility on the vehicular MIMO-OCC system in the transportation environment is then examined using precise point spread function (PSF) analysis. The experimental results are used to evaluate the proposed PSF. A good agreement between the laboratory’s recorded videos and this PSF model’s simulations is observed. Moreover, the signal-to-noise ratio (SNR) and signal-to-interference-plus-noise ratio (SINR) values are evaluated. It is shown that they are greatly influenced by the vehicle’s speed, direction of motion, and position of the camera. However, since the angular velocity in a typical transportation environment is low, it does not have a significant impact on the performance of the vehicular OCC systems.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

With the increase in vehicle ownership worldwide, most large cities are overcrowded with vehicles, which leads to challenges such as increased traffic congestion, accidents, unexpected emergencies, and wasted time. The intelligent transportation system (ITS) has emerged as a promising solution to these significant problems [1]. The Internet of Vehicles (IoV) enables vehicle-to-vehicle (V2V), infrastructure-to-vehicle (I2V), vehicle-to-infrastructure (V2I), and other types of vehicular communication to allow connectivity between vehicles and their surrounding environments for safety reasons [2,3]. Most existing communication models utilize radio frequency (RF) for interaction between system components. However, because the RF spectrum is heavily regulated, congested, and limited, it is expected that RF alone will not meet the ever-increasing demand for data transmission in the near future. Therefore, researchers have introduced the optical spectrum as a suitable candidate for a complementary and low-cost alternative to the RF spectrum [4,5]. Based on the receiver type, optical wireless communication (OWC) is divided into two categories: photodiode (PD)-based and camera-based [6]. Reference [7] discusses emerging OWC technologies and presents their various applications, advantages, and limitations. However, vehicular OWC systems based on PDs are outside the scope of this paper. Optical camera communication (OCC) is a subsystem of OWC that uses a light-emitting diode (LED) as the transmitter and an optical camera as the receiver. Recently, the widespread availability of LEDs and cameras in ITS has increased the potential of OCC systems for data transmission and massive connectivity in the IoV [8,9].

In I2V, information such as forward collision warnings, accident warnings, road conditions, and vehicular traffic information is transmitted [10–12]. Reference [10] suggested using OCC in road-to-vehicle communications to warn of lane crossings and prevent accidents. The authors estimated the minimum distance from which the driver should receive a warning to avoid a crash. A non-line-of-sight (NLoS) perception system was presented in [11], which allowed an occluding vehicle to convey safety events to the other vehicles. An optical camera-based vehicle localization method in [13] used a street light and a camera as the transmitter and receiver, respectively. The positioning accuracy of this method was less than 1 m. The authors in [14] introduced circular color shift keying (CSK) constellations in a vehicular OCC system and demonstrated that these constellations outperform the geometric and IEEE 802.15.7 constellations in terms of bit-error probability. In the absence of direct incident light, the camera-based visible light communication (VLC) system in [15] could reach a range of 70–80 m at a bit rate of 30 b/s. The authors in [16] used computer vision to automatically detect traffic lights on the road in an OCC link. The works [17] and [18] focused on the You Only Look Once (YOLO) deep learning model to improve data reception in OCC-based I2V systems. Reference [19] provided asynchronous OCC-based I2V communication because synchronization for communication to/from moving vehicles is challenging.

The optical channel characteristics for I2V communications have been explored in some previous studies [20–30]. However, these channel models do not take into account the mobility of the vehicles. References [20–24] used a PD detector, while the receiver in Refs. [25–30] is based on a camera. The authors in [20] and [21] employed VLC technology for the I2V link between the streetlight and the vehicle. Reference [20] considered the asymmetrical intensity pattern of commercial streetlights and the wavelength-dependent reflectance of surface materials in an I2V VLC link. A dynamic soft handover algorithm based on coordinated multipoint was introduced in [21] for the I2V VLC system to maintain a stable signal quality regardless of vehicle velocity. The authors in [22] compared three LED radiation patterns, namely Lambertian, elliptical, and batwing, for modeling a channel for I2V VLC scenarios. References [23] and [24] utilized the traffic light as the transmitter and a photodiode as the receiver for the I2V link. VLC channel characterization for I2V communication was studied in [23] using real traffic lights and road scenarios. The authors in [24] detailed the experimental characterization of a non-line-of-sight VLC system for I2V ITS applications. Due to absorption and scattering in the outdoor environment, an extinction loss term was added to the Lambertian OCC channel model in [25]. For OCC-based I2V communication, the authors in [26] employed a simplified mathematical channel model and a hierarchical coding scheme. Reference [27] combined radiometry and geometry for modeling the OCC channel. This model considered perspective distortion and out-of-focus issues due to camera rotation and changes in viewing angle. The authors in [28] developed a handover scheme to support seamless vehicular communication. A brief history of optical wireless communication (OWC) channel models was provided in [29,30].

However, channel characterization in vehicular communications is very challenging due to the high mobility of vehicles. Since the camera captures the light of the LEDs during an exposure time, the LED’s image slides and spreads on the camera’s image sensor due to the vehicle’s movement. Therefore, the vehicle’s mobility may significantly impact the performance of OCC systems that use MIMO technology. To the best of our knowledge, no theoretical models have been developed to simulate the impact of a moving vehicle during an exposure time on the sliding and spreading of the LED image on the camera’s sensor. This study theoretically describes the spread and sliding of the LED’s image on the camera sensor during an exposure time by applying the effect of the car’s movement to the point spread function (PSF) of the OCC system in the transportation environment. In this work, we consider a MIMO-OCC scenario in vehicular environments in which a traffic light allocates different data to each LED. The LEDs then simultaneously transmit independent data streams to a vehicle’s camera. In [31], we analyzed the PSF of the low-speed vehicular MIMO-OCC system to obtain the intensity of the LED light at the camera; however, the effect of the vehicle’s motion on the PSF of the vehicular MIMO-OCC system was not taken into account there. Since vehicle motion in the transportation environment changes the PSF of the OCC system, this challenge motivates us to investigate the effect of vehicle motion on the PSF of the vehicular MIMO-OCC system in this work. This paper has the following main contributions:

  • The LEDs’ images on the camera’s sensor become blurred because the LEDs’ positions shift relative to the optical axis of the camera lens during the exposure time of the pixels. In order to determine the amount of image blurring caused by the movement of the camera mounted on a vehicle, the instantaneous locations of the traffic light LEDs with respect to the optical axis of the camera lens have been computed.
  • The PSF used in the transportation environment should be able to accurately model vehicle movement, as vehicle mobility is an inherent feature of the transportation system. For this purpose, we incorporated the time parameter into the PSF to model the impact of the vehicle’s speed and direction on the vehicular MIMO-OCC system.
  • The PSF model presented in this work was evaluated against experimental results. A good agreement was observed between the videos recorded in the experiments and the simulation results of the PSF model.
  • The effects of the vehicle’s speed and angular velocity, the direction of movement, the camera’s position relative to the traffic light, and the camera’s exposure time on the signal-to-noise ratio (SNR) and signal-to-interference-plus-noise ratio (SINR) of the vehicular MIMO-OCC system have been investigated. The simulation results indicate that the SNR and SINR values are significantly influenced by the vehicle’s speed, direction of movement, and the camera’s position. In contrast to the vehicle’s speed, the angular velocity is not high in a typical transportation environment, and hence does not have a significant impact. Furthermore, the SINR value increases noticeably at high vehicle speeds when the frame’s exposure time is decreased.

The rest of the paper is organized as follows. In Section 2, the system model is described. Then, the effect of the vehicle’s motion on the PSF of the vehicular MIMO-OCC system is derived in Section 3. The intensity distribution and image of a point source and an LED on the image sensor of the stationary and moving cameras are analyzed using the PSF computations in Section 4. Moreover, the impact of movement speed, angular velocity, camera position, and exposure time on the SNR and SINR of the MIMO-OCC system is investigated in this section. Section 5 provides experimental results to evaluate the outcomes of the PSF analysis. Finally, in Section 6, an overview of the results and conclusions is drawn.

2. Vehicular OCC system model

In this work, we consider $61$ LEDs in a traffic light that transmit data to a moving vehicle’s camera, as shown in Fig. 1(a). The camera’s shutter opens at time $t=0$, allowing light to enter the camera and hit the image sensor. The shutter then closes at time $t=t_{exp}$. To determine the PSF of the underlying OCC system, we need to acquire the positions of the traffic light LEDs relative to the optical axis of the camera lens at every time $t$, i.e., $p_t=(x(t), z(t))$. The vehicle’s camera is at point $l_1$ at time $t=0$, and the optical axis of the camera lens coincides with the Y-axis of coordinate system 1, i.e., CS1 in Fig. 1(a). We assume an LED’s spot in the traffic light is located at the coordinates $p_0=(x(0), z(0))$ in the XZ plane of CS1. As illustrated in Fig. 1(a), the distance between point $l_1$ and the XZ plane equals $y(0)$. Then, the vehicle moves forward in the direction of the Y-axis at a constant speed, $v_y$, and rotates around the Y-axis with a constant angular velocity, $\omega _y$. As shown in Fig. 1(b), the vehicle’s camera will be at point $l_2$ at time $t=t_{exp}$, and the optical axis of the camera lens will coincide with the Y$'$-axis of coordinate system 2 (i.e., CS2). Since the lens’s optical axis also rotates as the vehicle turns, the position of the LED changes relative to the lens’s optical axis at any time $t \in [0,~ t_{exp}]$. The vehicle’s movement and rotation cause the LEDs’ image to slide across the camera sensor’s pixels within the exposure time ($t_{exp}$) of a frame and to spread along the direction of the camera’s movement. Therefore, to find the image of an LED in each frame, the LED’s position relative to the lens’s optical axis must be determined at any time $t$. When the vehicle moves in the direction of the Y-axis, the location of the LEDs at each moment $t$ relative to the optical axis of the lens is obtained as follows (see Appendix A):

$$\begin{aligned} &x(t)=\left\{ \begin{array}{ll} x(0), & \omega_y=0 \\ x(0) \cos(\omega_y t)-y(0) \sin(\omega_y t)-v_y t \left( \frac{1-\cos(\omega_y t)}{\omega_y t}\right), & \omega_y\neq0 \\ \end{array}\right.\\ &y(t)=\left\{ \begin{array}{ll} y(0)+v_y t, & \omega_y=0\\ y(0) \cos(\omega_y t)+x(0) \sin(\omega_y t)+v_y t \left( \frac{\sin(\omega_y t)}{\omega_y t}\right), & \omega_y\neq0 \\ \end{array}\right.\\ &z(t)=z(0). \end{aligned}$$

Figure 1(d) shows a vehicle that moves with speed $v_x$ in the direction of the X-axis, while the optical axis of the camera lens remains in the direction of the Y-axis. Consequently, the vehicle’s movement in the Y direction is parallel to the camera lens’s optical axis, whereas the vehicle’s movement in the X direction is perpendicular to it. In the case of movement in the X direction, the location of the LEDs at each moment $t$ relative to the optical axis of the lens is as follows

$$\begin{aligned} &x(t)= \left\{ \begin{array}{ll} x(0)+v_x t, & \omega_x=0\\ x(0) \cos(\omega_x t)+y(0) \sin(\omega_x t)+v_x t \left( \frac{\sin(\omega_x t)}{\omega_x t}\right), & \omega_x\neq0 \\ \end{array}\right.\\ &y(t)=\left\{ \begin{array}{ll} y(0), & \omega_x=0 \\ y(0) \cos(\omega_x t)-x(0) \sin(\omega_x t)-v_x t \left( \frac{1-\cos(\omega_x t)}{\omega_x t}\right), & \omega_x\neq0 \\ \end{array}\right.\\ &z(t)=z(0). \end{aligned}$$
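For readers who wish to reproduce the geometry, the following minimal Python sketch (not part of the original paper) evaluates Eqs. (1) and (2). The function and variable names are ours; the $\omega \neq 0$ branches assume $t>0$.

```python
import numpy as np

def led_position_y_motion(t, p0, y0, v_y, omega_y):
    """Eq. (1): LED location relative to the lens's optical axis when the vehicle
    moves along the Y-axis with speed v_y and turns with angular velocity omega_y."""
    x0, z0 = p0
    if t == 0.0:
        return x0, y0, z0
    if omega_y == 0.0:
        return x0, y0 + v_y * t, z0
    c, s = np.cos(omega_y * t), np.sin(omega_y * t)
    x_t = x0 * c - y0 * s - v_y * t * (1.0 - c) / (omega_y * t)
    y_t = y0 * c + x0 * s + v_y * t * s / (omega_y * t)
    return x_t, y_t, z0

def led_position_x_motion(t, p0, y0, v_x, omega_x):
    """Eq. (2): same, for motion along the X-axis (perpendicular to the optical axis)."""
    x0, z0 = p0
    if t == 0.0:
        return x0, y0, z0
    if omega_x == 0.0:
        return x0 + v_x * t, y0, z0
    c, s = np.cos(omega_x * t), np.sin(omega_x * t)
    x_t = x0 * c + y0 * s + v_x * t * s / (omega_x * t)
    y_t = y0 * c - x0 * s - v_x * t * (1.0 - c) / (omega_x * t)
    return x_t, y_t, z0

# Example: LED spot at p0 = (3, 3) m, y(0) = 4 m, vehicle receding at 2 m/s, no rotation.
print(led_position_y_motion(0.01, (3.0, 3.0), 4.0, 2.0, 0.0))   # -> (3.0, 4.02, 3.0)
```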

Fig. 1. The OCC-based vehicular communication architecture. (a) Vehicle movement in Y direction. (b) Coordinate system at time $t_{exp}$. (c) Coordinate system at any time $t$. (d) Vehicle movement in X direction.

Figure 2 shows the assumed MIMO-OCC system’s transmitter, which is used to calculate the SNR and SINR parameters. As shown, the optical transmitter is a traffic light with 61 red LEDs arranged in four rings with radii of $20~$mm, $40~$mm, $60~$mm, and $80~$mm around the central LED. Note that similar traffic lights are available at [32]. In order to reduce LED-to-LED interference, it is assumed that only the central LED is driven by data when calculating the SNR. However, all LEDs are driven when the SINR is computed. In this work, the SINR is derived for the central LED, which experiences the most interference. The SNR and SINR values in this work are obtained as

$$\textrm{SNR}=10~\textrm{log}_{10}(\textrm{mean}(\textrm{SNR}_\textrm{ROI})).$$
$$\textrm{SINR}=10~\textrm{log}_{10}(\textrm{mean}(\textrm{SINR}_\textrm{ROI})).$$
where $\textrm {SNR}_\textrm {ROI}$ and $\textrm {SINR}_\textrm {ROI}$ are the values of SNR and SINR in the ROI of the central LED and mean(.) denotes the average SNR and SINR value of the pixels within the ROI of the central LED. The values of $\textrm {SNR}_\textrm {ROI}$ and $\textrm {SINR}_\textrm {ROI}$ for all pixels within the ROI of the central LED are computed as
$$\textrm{SNR}_\textrm{ROI}=\frac{m^2_{LED}}{\sigma^2_{noise}}.$$
$$\textrm{SINR}_\textrm{ROI}=\frac{m^2_{LED}}{m^2_{I}+\sigma^2_{noise}}.$$
where $m_{LED}$ denotes the average of the central LED’s DN for a pixel within the ROI of the central LED and $\sigma ^2_{noise}=\sigma ^2_{0}+\sigma ^2_{1}$ denotes the variance of the noise in the ROI of the central LED. The parameters $\sigma _{0}$ and $\sigma _{1}$ are the standard deviations of the DN in the pixels within the ROI of the central LED when all LEDs of the traffic light are OFF and when only the central LED is ON, respectively. In addition, $m_{I}$ is the average of the other LEDs’ DN for a pixel within the ROI of the central LED. For the purpose of calculating $m_{I}$ for pixels inside the ROI of the central LED, we turned on all LEDs of the traffic light except the central LED. For the numerical computation of the SNR and SINR, the PSF and the image sensor’s light intensity are computed in Section 3 to provide the value of DN. In this work, we added the CMOS noise sources to the DN calculation, namely the photoresponse nonuniformity (PRNU) noise, dark current shot noise, dark signal nonuniformity (DSNU) noise, sense node noise, and source follower (SF) noise. In the case of the experimental computation of the SNR and SINR, we averaged 500 frames in which only the central LED was ON to determine the ROI of the central LED. This method was used to compute the average DN value of every pixel in the frame. In the experimental results, it was observed that the average DN of pixels outside the ROI is less than 1. Consequently, the pixels whose average DN value was larger than 1 determined the ROI of the central LED by setting the threshold level to 1.
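As an illustration of Eqs. (3)–(6), the following Python sketch computes the ROI-averaged SNR and SINR from three stacks of recorded (or simulated) DN frames. The function name, the array layout, and the use of the per-pixel sample variance are our assumptions, not the authors’ code.

```python
import numpy as np

def snr_sinr_db(frames_center_on, frames_all_off, frames_others_on, threshold=1.0):
    """Per-pixel SNR/SINR of the central LED (Eqs. (3)-(6)), averaged over its ROI.
    Each input is a (num_frames, H, W) stack of digital numbers (DN)."""
    m_led = frames_center_on.mean(axis=0)      # mean DN, only central LED ON
    m_int = frames_others_on.mean(axis=0)      # mean DN of the interfering LEDs
    var0 = frames_all_off.var(axis=0)          # sigma_0^2: all LEDs OFF
    var1 = frames_center_on.var(axis=0)        # sigma_1^2: only central LED ON
    var_noise = var0 + var1                    # sigma_noise^2

    roi = m_led > threshold                    # ROI: pixels whose mean DN exceeds 1
    snr_roi = m_led[roi] ** 2 / var_noise[roi]
    sinr_roi = m_led[roi] ** 2 / (m_int[roi] ** 2 + var_noise[roi])
    return 10 * np.log10(snr_roi.mean()), 10 * np.log10(sinr_roi.mean())
```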

Fig. 2. Optical transmitter in the considered MIMO-OCC system.

3. Derivation of PSF

To compute the intensity of a traffic light LED at the image plane of the moving camera, we derive the PSF of the proposed moving optical system. Figure 3 shows the front view of Fig. 1 at time $t$, where $t \in [0,~ t_{exp}]$. We first determine the PSF of a spot on an LED’s surface as a point source located at position $p_t=(x(t),z(t))$ in plane $U_o(t)$. Then, we extend this point source to cover the entire LED surface and compute the PSF over the whole time range $[0, t_{exp}]$. As shown in Fig. 3, it is assumed that a point source at the coordinates $p_t=(x(t),z(t))$ of the $U_o(t)$ plane radiates onto the camera lens’s surface in the $U_l$ plane. After passing through the lens’s output surface in the $U_{l^{\prime }}$ plane, the light of the point source is received by the camera sensor in the $U_i$ plane. The planes $U_o(t)$ and $U_l$ are separated by the distance $y(t)$ calculated in Eq. (1). Since the PSF is computed based on the location of the LED relative to the optical axis of the camera lens, we take into account the relative movement of the LED with respect to the camera. For this reason, we consider the $U_l$, $U_{l^{\prime }}$, and $U_i$ planes to be fixed in Fig. 3 and the $U_o(t)$ plane to be time-dependent. The point source emits light in all directions with spherical wavefronts. According to the Huygens-Fresnel principle, the PSF at time $t$ at the position $(x_l, z_l)$ of the lens can be written as [33]

$$h_l(x_l,z_l ; x(t), y(t), z(t))=\frac{e^{j k y(t)}}{j \lambda y(t)}~ e^{\frac{j k}{2 y(t)} [ (x_l-x(t))^2+(z_l-z(t))^2 ] } ,$$
where $k$ and $\lambda$ denote the wavenumber and wavelength of the LED in the traffic light. The PSF of the moving optical system at time $t$ in position $(x_{l^{\prime }}, z_{l^{\prime }})$ of the $U_{l^{\prime }}$ plane is given by
$$h_{l^{\prime}}(x_{l^{\prime}},z_{l^{\prime}} ; x(t), y(t), z(t)) = h_l(x_{l^{\prime}},z_{l^{\prime}} ; x(t), y(t), z(t)) P(x_{l^{\prime}},z_{l^{\prime}}) e^{-\frac{j k}{2 f} ( x_{l^{\prime}}^2+z_{l^{\prime}}^2 )} ,$$
where $f$ and $P(x_{l^{\prime }},z_{l^{\prime }})$ denote the focal length and the circular pupil function of the lens, respectively. Finally, the Fresnel diffraction formula models the PSF of the whole moving optical system from the plane $U_o(t)$ to the plane $U_i$ as follows [34]:
$$ \begin{aligned} & h_i\left(x_i, z_i ; x(t), y(t), z(t)\right)=\frac{e^{j k d_i}}{j \lambda d_i} \iint_{-\infty}^{\infty} h_{l^{\prime}}\left(x_{l^{\prime}}, z_{l^{\prime}} ; x(t), y(t), z(t)\right) \\ & \times e^{\frac{j k}{2 d_i}\left[\left(x_i-x_{l^{\prime}}\right)^2+\left(z_i-z_{l^{\prime}}\right)^2\right]} d x_{l^{\prime}} d z_{l^{\prime}}, \end{aligned} $$
where $d_{i}$ stands for the distance between the $U_{l^{\prime }}$ and $U_i$ planes. In the following, we suppose that the camera is set to its best focus at $t=0$ in each frame. As a result, the value of $d_{i}$ is determined at time $t=0$ by $\frac {1}{y(0)}+\frac {1}{d_{i}}=\frac {1}{f}$ and remains constant during the exposure time of one frame. Substituting Eqs. (7) and (8) in (10) and employing the relationships $\frac {1}{y(0)}+\frac {1}{d_{i}}=\frac {1}{f}$ and $M(t)=\frac {d_{i}}{y(t)}$, we obtain:
$$\begin{aligned} h_i(x_{i},z_{i} ; x(t), y(t), z(t)) = \frac{1}{\lambda^2 y(t) d_{i}} & \int \int_{-\infty}^{\infty} P(x^{\prime},z^{\prime}) e^{\frac{j \pi}{\lambda } [\frac{1}{y(t)}-\frac{1}{y(0)}]({x^{\prime}}^2+{z^{\prime}}^2)}\\ &\times e^{-\frac{j 2 \pi}{\lambda d_{i}} [ (x_i+M(t) x(t))x^{\prime} +(z_i+M(t) z(t))z^{\prime} ] } dx^{\prime} dz^{\prime} . \end{aligned}$$

Since only the absolute value of $h_i(x_{i},z_{i} ; x(t), y(t), z(t))$ (i.e., $|h_i(x_{i},z_{i} ; x(t), y(t), z(t))|$) determines the LED light intensity, the pure phase factors, which do not affect the magnitude, are omitted in Eq. (11). As the LED is a non-coherent light source, the intensity of one LED in the camera’s image plane at time $t$ is formulated as

$$|u_i(x_i,z_i,t)|^2=K_h(t) \int \int_{^{LED}_{Area}} |h_i(x_i,z_i;x(t), y(t), z(t))|^2 |u_o(x(t),z(t))|^2 dx(t) dz(t) ,$$
where $u_o(x(t),z(t))$ and $|u_o(x(t),z(t))|^2$ denote the electric field and the intensity of the LED light source at time $t$, respectively. In addition, $u_i(x_i,z_i,t)$ and $|u_i(x_i,z_i,t)|^2$ represent the electric field and the intensity of the received light at the image plane of the camera lens at time $t$, respectively. In the case of a point source, the surface integral is omitted from Eq. (12). Consequently, for every instant $t$, the value of $|u_i(x_i,z_i,t)|^2$ is determined for a specific point $(x(t), y(t), z(t))$, which represents the point source’s position relative to the camera. The coefficient $K_h(t)=\frac {\lambda ^2~y(t)^2}{(y(t)^2+x(t)^2+z(t)^2)}$ is employed similarly to [31].
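The pupil integral in Eq. (11) can be evaluated numerically. The sketch below (ours, not the authors’ implementation) approximates it by a Riemann sum over a sampled circular pupil and returns the point-source intensity of Eq. (12) without the surface integral. The grid size n and the field amplitude u0 are illustrative assumptions; n should be increased for image points far from the image center, where the tilt term oscillates rapidly.

```python
import numpy as np

def h_i_point(xi, zi, x_t, y_t, z_t, y0, d_i, lam, R, n=301):
    """Numerical evaluation of Eq. (11) for one image-plane point (x_i, z_i):
    Riemann sum of the pupil integral on an n x n grid."""
    xp = np.linspace(-R, R, n)
    XP, ZP = np.meshgrid(xp, xp)
    pupil = (XP**2 + ZP**2 <= R**2).astype(float)          # circular pupil P(x', z')
    M = d_i / y_t                                          # magnification M(t)
    defocus = np.exp(1j * np.pi / lam * (1.0 / y_t - 1.0 / y0) * (XP**2 + ZP**2))
    tilt = np.exp(-2j * np.pi / (lam * d_i)
                  * ((xi + M * x_t) * XP + (zi + M * z_t) * ZP))
    dA = (xp[1] - xp[0]) ** 2
    return (pupil * defocus * tilt).sum() * dA / (lam**2 * y_t * d_i)

def point_source_intensity(xi, zi, x_t, y_t, z_t, y0, d_i, lam, R, u0=1.0):
    """|u_i|^2 of a point source (Eq. (12) with the surface integral omitted)."""
    K_h = lam**2 * y_t**2 / (y_t**2 + x_t**2 + z_t**2)      # coefficient from [31]
    return K_h * abs(h_i_point(xi, zi, x_t, y_t, z_t, y0, d_i, lam, R))**2 * abs(u0)**2
```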

Fig. 3. The front view of the OCC-based vehicular system at time $t$.

Finally, the average intensity of the light source during the exposure time of the camera for each point $(x_i, z_i)$ of the image sensor, is obtained as follows:

$$|u_i(x_i,z_i)|^2= \frac{1}{t_{exp}} \int_{0}^{t_{exp}} |u_i(x_i,z_i,t)|^2 dt .$$

The value of DN used in the numerical computation of the SNR and SINR in Section 4 can be determined from the received light intensity in Eq. (13) [31]. Since $x(t)$, $y(t)$, and $z(t)$ in Eqs. (1) and (2) are influenced by the camera’s speed ($v_x$ or $v_y$) and angular velocity ($\omega _x$ or $\omega _y$), Eqs. (12) and (13) similarly depend on the camera’s speed and angular velocity.

First, assume the ideal case of a point source, whose image is ideally a bright spot in the picture. We consider a point source at the coordinates $p_t=(x(t),z(t))$ in plane $U_o(t)$ of Fig. 3. The light of the point source passes through a lens at a distance of $y(t)$ from plane $U_o(t)$ and reaches the image sensor after traveling the distance $d_i$. According to geometric calculations, the image of this point source is a bright spot at the coordinates $(-M(t) x(t),-M(t) z(t))$ on the camera sensor plane, where $M(t)=\frac {d_i}{y(t)}$. When the camera is stationary during the exposure interval of a frame, the values of the parameters $x(t)$, $y(t)$, $z(t)$, and $M(t)$ in that frame are constant with respect to time $t$, and their values are $x(t)=x(0)$, $y(t)=y(0)$, $z(t)=z(0)$, and $M(t)=M(0)$. Thus, according to the PSF analysis in [31], the intensity of this point source in the camera’s image plane is

$$|u_i(x_i,z_i)|^2=K_h(0) \left( \frac{ \pi R^2}{ \lambda^2 d_i y(0)} \right)^2 \left| \frac{J_{1}(2S)}{S}\right|^2 |u_o(x(0),z(0))|^2 ,$$
where $J_{1}$ is the first-order Bessel function, $S=\left ( \frac { \pi R \sqrt {(x_i+M(0) x(0))^2+(z_i+M(0) z(0))^2}}{\lambda d_{i}} \right )$, and $R$ denotes the radius of the lens. The light intensity distribution on the image sensor is determined by $\left | \frac {J_{1}(2S)}{S}\right |^2$, which is the only term in Eq. (14) that depends on $x_i$ and $z_i$. The interval $|S|<1.9$ contains more than $97.5{\%}$ of the energy of the function $\left | \frac {J_{1}(2S)}{S}\right |^2$. Equation (15) is obtained by requiring $S$ to be less than 1.9 and represents the equation of a circle with center $(-M(0) x(0),-M(0) z(0))$ and radius $\delta _r=\frac {1.9\lambda d_{i}}{\pi R}$. In other words,
$$(x_i+M(0) x(0))^2+(z_i+M(0) z(0))^2\leq\left(\frac{1.9\lambda d_{i}}{\pi R}\right)^2.$$

Therefore, unlike geometric calculations, the image of a point source at location $p_0=(x(0),z(0))$ in PSF analysis is a bright circle with a center $(-M(0) x(0),-M(0) z(0))$ and approximate radius $\delta _r$.
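A short sketch of this result: for a stationary camera, the point source images onto a bright circle whose center and radius follow directly from Eq. (15). The numerical values in the example (wavelength, $d_i$, lens radius) are illustrative assumptions, not the values of Table 1.

```python
import numpy as np

def blur_circle(x0, z0, y0, d_i, lam, R):
    """Center and approximate radius of a stationary point source's image (Eq. (15))."""
    M0 = d_i / y0                               # magnification at t = 0
    center = (-M0 * x0, -M0 * z0)               # geometric image point
    delta_r = 1.9 * lam * d_i / (np.pi * R)     # radius containing ~97.5% of the energy
    return center, delta_r

# Illustrative numbers (ours): red LED, lambda = 630 nm, d_i = 25 mm, R = 2 mm.
print(blur_circle(3.0, 3.0, 4.0, 25e-3, 630e-9, 2e-3))
```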

When the point source and camera move relative to one another, the point source’s image slides across the image sensor. Let us assume that the vehicle moves away from the traffic light, as shown in Fig. 1, with $v_y=2~{\textrm {m}}/{\textrm {s}}$ and $\omega _y=0~{\textrm {rad}}/{\textrm {s}}$. In the geometric calculations, the image of the point source slides from the point $(-M(0) x(0),-M(0) z(0))$ to $(-M(t_{exp}) x(t_{exp}),-M(t_{exp}) z(t_{exp}))$ across the image sensor. As a result, the bright spot is dispersed across the image sensor over the interval ${\Delta }_{mov}$, where

$${\Delta}_{mov}=\sqrt{(M(t_{exp}) x(t_{exp})-M(0) x(0))^2+(M(t_{exp}) z(t_{exp})-M(0) z(0))^2}.$$

There are two differences between the PSF in Eq. (11) for a moving camera and the PSF for a stationary camera in [31]. The first difference comes from the fact that the additional term ${\Delta }_{PSF}=e^{\frac {j \pi }{\lambda } [\frac {1}{y(t)}-\frac {1}{y(0)}]({x^{\prime }}^2+{z^{\prime }}^2)}$ appears in the moving camera’s PSF. The second difference is related to the term $e^{-\frac {j 2 \pi }{\lambda d_{i}} [ (x_i+M(t) x(t))x^{\prime } +(z_i+M(t) z(t))z^{\prime } ] }$ in the PSF, which is time-independent for a stationary camera and time-dependent for a moving camera. This term determines the center of the point source’s image on the camera’s image sensor at any time $t$, namely $(-M(t) x(t),-M(t) z(t))$. In the analysis of Section 3, it is assumed that the distance between the camera’s image sensor and the lens ($d_i$) remains constant during the exposure time of a single frame. Therefore, the lens will be partially out of focus due to the relative movement between the camera and the point source during the exposure time of a frame. As a result, the term ${\Delta }_{PSF}$ appears in the PSF of a moving camera, which partially blurs the image. When the exposure time, $t_{exp}$, is short, the impact of the term ${\Delta }_{PSF}$ on the PSF is small and can be ignored. The second difference, however, shows that the image of the point source slides on the image sensor of the moving camera during the exposure time of one frame because the bright circle’s center is time-dependent.
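Under the short-exposure approximation stated above (neglecting ${\Delta }_{PSF}$), the moving-camera image of a point source can be approximated by averaging the stationary Airy pattern of Eq. (14) with a sliding center, as in Eq. (13). The sketch below is our illustration, restricted to motion along the Y-axis with $\omega _y=0$; it also evaluates the sliding distance of Eq. (16). Function names and the number of time samples are assumptions.

```python
import numpy as np
from scipy.special import j1

def airy_intensity(xi, zi, x_t, y_t, z_t, d_i, lam, R, u0=1.0):
    """Stationary-camera intensity of a point source (Eq. (14)), evaluated with the
    instantaneous source position (x_t, y_t, z_t)."""
    M = d_i / y_t
    S = np.pi * R * np.hypot(xi + M * x_t, zi + M * z_t) / (lam * d_i)
    airy = np.where(S < 1e-9, 1.0, j1(2.0 * S) / np.maximum(S, 1e-12)) ** 2
    K_h = lam**2 * y_t**2 / (y_t**2 + x_t**2 + z_t**2)
    return K_h * (np.pi * R**2 / (lam**2 * d_i * y_t)) ** 2 * airy * abs(u0) ** 2

def moving_camera_image(xi, zi, x0, z0, y0, v_y, t_exp, d_i, lam, R, n_t=50):
    """Approximate Eq. (13) for a vehicle receding along the Y-axis (omega_y = 0):
    the image center slides with M(t) = d_i / y(t); Delta_PSF is neglected."""
    acc = 0.0
    for t in np.linspace(0.0, t_exp, n_t):
        acc += airy_intensity(xi, zi, x0, y0 + v_y * t, z0, d_i, lam, R)
    return acc / n_t

def delta_mov(x0, z0, y0, v_y, t_exp, d_i):
    """Sliding distance of the image center during one exposure (Eq. (16))."""
    M0, Me = d_i / y0, d_i / (y0 + v_y * t_exp)
    return np.hypot(Me * x0 - M0 * x0, Me * z0 - M0 * z0)
```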

4. Numerical results

In this section, we first examine the distribution of the received light intensity on the camera sensor for a point source and an LED. Then, the values of SNR and SINR are calculated for the traffic light shown in Fig. 2. Figure 4(a) shows the intensity distribution of a point source on the stationary camera’s image sensor according to geometric calculations [35]. As expected, the point source intensity in the geometric calculations is ideally concentrated at one point of the image sensor, and as the value of $y(0)$ increases, the received intensity on the image sensor decreases. Figures 4(b) and 4(c), in contrast, compare the intensity distribution of a point source on the stationary and moving camera’s image sensor in the PSF analysis. The intensity distribution for the moving and stationary camera is derived from Eq. (13). The PSF analysis in these figures demonstrates that, in contrast to the geometric calculations, the point source intensity on the image sensor of a stationary camera is slightly dispersed around the center point of its image. In addition, the camera’s movement causes the image to spread further across the image sensor and its intensity amplitude to diminish. In these figures, it is assumed that the camera is moving with $v_y=2~{\textrm {m}}/{\textrm {s}}$ and $\omega _y=0~{\textrm {rad}}/{\textrm {s}}$, as shown in Fig. 1. The light intensity distributions in Figs. 4(b) and 4(c) demonstrate that the spreading of the image is more significantly affected by camera movement at near distances (shorter $y(0)$). Therefore, the nearer the light source is to the camera, the more the camera’s movement causes the light source’s image to spread on the camera sensor and the more its intensity amplitude diminishes. The values of the simulation parameters used for Figs. 4 and 5 are shown in Table 1, where the parameters $FN$ and $l_{pix}$ stand for the lens’s f-number and the image sensor’s pixel size, respectively.

Fig. 4. Point source intensity distribution in the image sensor of a stationary and moving camera. (a) Geometric analysis. (b) PSF analysis at $y(0)=4~$m. (c) PSF analysis at $y(0)=8~$m.

Fig. 5. Point source image in the image sensor of a stationary and moving camera for $y(0)=4~$m. (a) Stationary camera. (b) Moving camera.

Table 1. The simulation parameters used for Figs. 4 to 7.

According to geometric calculations, since the point source light is received at a single location on the image sensor, only one pixel can capture its light. For the case of PSF analysis, Fig. 5 shows the DN value of the image sensor pixels according to the light intensity distribution in Fig. 4(b). It is presumed that the stationary camera’s settings, such as the camera’s image sensor sensitivity (ISO), have been set so that the DN of the pixel that gets the most light from a point source in a fixed $y(0)$ is equal to $255$.
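The DN normalization described above can be sketched as follows: a fixed gain maps the brightest pixel of the stationary reference to DN 255, and the same gain is then applied to the moving-camera intensity map. This sketch omits the CMOS noise terms of Section 2 and is our illustration rather than the authors’ pipeline.

```python
import numpy as np

def intensity_to_dn(intensity_moving, intensity_stationary_ref):
    """Scale intensities to 8-bit DN: the gain is fixed so that the brightest pixel of
    the stationary reference maps to 255; the same gain is applied to the moving image."""
    gain = 255.0 / intensity_stationary_ref.max()
    return np.clip(np.round(gain * intensity_moving), 0, 255).astype(np.uint8)
```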

Since all artificial light sources, including LEDs, have a finite physical size, there is actually no point source in the real world. Therefore, we consider an LED with center $p_0=(x(0), z(0))$ and radius $r$ located at a distance $y(0)$ from the camera mounted on a car at an intersection, as shown in Fig. 1(a). According to geometric calculations, the image of this LED on the camera sensor is a circle with center $(-M(0) x(0),-M(0) z(0))$ and radius $M(0) r$, where $M(0)=\frac {d_i}{y(0)}$ [35]. However, the LED light intensity in the moving and stationary camera based on the PSF analysis is derived from Eq. (13). Figure 6(a) compares the intensity distribution of an LED on the stationary camera in the geometric and PSF analyses. Unlike the geometric analysis, the PSF shows a gradual decline in the distribution of the LED’s light intensity on the image sensor. In addition, the LED’s light intensity in the PSF analysis is distributed over a larger area of the image sensor. The intensity distributions of an LED captured by the stationary and moving camera in the PSF analysis are compared in Figs. 6(b) and 6(c). In these figures, it is assumed that the camera moves away from the traffic light with $v_y=2~{\textrm {m}}/{\textrm {s}}$ and $\omega _y=0~{\textrm {rad}}/{\textrm {s}}$, as shown in Fig. 1. Figures 6(b) and 6(c) illustrate that the LED’s intensity distribution for a moving camera spreads over a larger area of the camera sensor. Comparing Figs. 6(b) and 6(c) reveals that the camera’s motion has a greater impact on the spreading of the LED’s intensity distribution at closer distances.

Fig. 6. Intensity distribution of an LED in the image sensor of the stationary and moving camera. (a) Stationary camera. (b) PSF analysis at $y(0)=4~$m. (c) PSF analysis at $y(0)=8~$m.

Figure 7 shows the image of an LED at $y(0)=4~$m in the PSF analysis for the stationary and moving camera. As shown, the LED’s image in the moving camera in Fig. 7(b) is spread over a larger area of the camera sensor compared to the stationary camera in Fig. 7(a). The LED images in Fig. 7 confirm the results of Fig. 6. The spread of the LED’s image becomes important when many LEDs are placed close to one another and transmit data independently. In this scenario, the spread of each LED’s image has a greater impact on the images of the nearby LEDs.

Fig. 7. Image of an LED in the PSF analysis for the stationary and moving camera at $y(0)=4~$m. (a) Stationary camera. (b) Moving camera.

Table 2 lists the variables used for computing the SNR and SINR in Eqs. (3) and (4). In this table, the parameters $QE$, $A_{SN}$, $A_{SF}$, $V_{min}$, $N_{ADC}$, and $K_{ADC}$ represent the quantum efficiency, sense node conversion gain, linear source follower gain, ADC’s minimum quantifiable voltage, ADC bit depth, and ADC resolution, respectively. Moreover, the parameters $I_{dc}$, $\sigma _{DSNU}$, $\sigma _{PRNU}$, $\sigma _{SF}$, and $\sigma _{reset}$ denote the average dark current and the standard deviations of the DSNU noise, PRNU noise, SF noise, and reset noise, respectively [31]. Figure 8 investigates the effect of the parameters $v_y$, $v_x$, $\omega _y$, $\omega _x$, and $p_0=(i,j)$ on the SNR of an LED’s light signal in the PSF analysis at $y(0)=4~$m and $y(0)=8~$m. The parameter $p_0=(i,j)$ indicates the position of the traffic light’s center at point $(x(0)=i,z(0)=j)$ on the XZ plane of CS1 in Fig. 1. The symbol $v$ on the horizontal axis of this figure represents $v_y$ for movement in the Y-axis direction and $v_x$ for movement in the X-axis direction. The movement of the vehicle has no impact on the SNR if the traffic light is located at $p_0=(0,0)~$m and the vehicle moves in the direction of the Y-axis, as shown in Fig. 8. In other words, as long as the LED is located exactly on the lens’s optical axis and the vehicle moves along this axis, the SNR value remains unchanged. In this case, the vehicle’s movement cannot cause the LED light to spread during a single exposure time of a frame because the LED and the camera are fully facing each other. However, when the vehicle travels along the X-axis, the SNR value drops rapidly as the vehicle’s speed increases. Under normal conditions, vehicles do not turn left or right with high angular velocity in the transportation environment. We investigated $\omega _y$ and $\omega _x$ up to $0.5~\textrm {rad}/ \textrm {s}$ in this study, and the results indicated that they have a negligible effect on the SNR and SINR. In contrast to $\omega$, changing the traffic light’s position from $p_0=(0,0)~$m to $p_0=(3,3)~$m causes the SNR to decline as $v_y$ increases. However, changing the traffic light’s position does not significantly impact the SNR when the vehicle moves in the X-axis direction. Comparing Figs. 8(a) and 8(b) demonstrates that the impact of the traffic light’s position on the SNR curve versus $v_y$ reduces as the distance between the LED and the camera increases. For instance, changing $p_0$ from $(0,0)~$m to $(3,3)~$m results in a decrease of $6.44$ and $2.21~$dB in the SNR curve at $v_y=10~\textrm {m}/ \textrm {s}$ for $y(0)=4~$m and $y(0)=8~$m, respectively. Moreover, the SNR value declines as $t_{exp}$ diminishes. This reduction in SNR can be attributed to the fact that with a smaller $t_{exp}$, the light intensity received by the pixels also lessens. For example, Fig. 8(b) shows that the SNR value at $v_x=0~\textrm {m}/ \textrm {s}$ and $p_0 = (3, 3)$ m decreases by $0.3~$dB when $t_{exp}$ is reduced from $0.01~$s to $0.005~$s.

Fig. 8. The SNR versus the vehicle’s speed. (a) The SNR versus the vehicle’s speed for $y(0)=4~$m. (b) The SNR versus the vehicle’s speed for $y(0)=8~$m.

Table 2. Simulation parameters used for the SNR and SINR computation.

Figure 9 illustrates how the amount and direction of the LED image spreading vary in response to changes in the vehicle’s direction of motion. As mentioned in Section 3, the image of any point $(x(t),z(t))$ of an LED at any time $t$ on the camera’s image sensor is a circle with center $(-M(t)x(t),-M(t)z(t))$, where $M(t)=\frac {d_i}{y(t)}$. The vehicle’s movement in the direction of the X-axis only alters the value of $x(t)$, causing the LED’s image on the camera sensor to spread only along the $x_i$-axis. However, the $y(t)$ value is altered when the vehicle moves in the Y direction, which causes the image on the camera sensor to spread in both the $x_i$ and $z_i$ directions. Consequently, the direction of the vehicle’s movement influences the amount of interference between the LEDs. Since the SINR value depends on the LEDs’ interference, the direction of the vehicle’s movement affects its value. The impact of the parameters $v_y$, $v_x$, $\omega _y$, $\omega _x$, and $p_0=(i,j)$ on the SINR of the traffic light’s central LED is examined in Fig. 10. This figure shows that when the traffic light is located at $p_0=(0,0)~$m and the vehicle moves in the direction of the Y-axis, increasing $v_y$ from $0$ to $10~\textrm {m}/ \textrm {s}$ has no impact on the SINR value. However, as the vehicle moves in the X-axis direction and its speed rises, the SINR value decreases rapidly. Figure 10 illustrates that the SINR curves for the legends "$v_y$, $p_0 = (3, 3)$ m, $t_{exp}=0.01$ s" and "$v_y$, $p_0 = (3, 3)$ m, $t_{exp}=0.005$ s" have greater values at $y(0)=8$ m compared to $y(0)=4$ m as $v_y$ increases. Consequently, the SINR value is less influenced by the vehicle’s movement along the Y-axis as the angle between the LEDs and the lens’s optical axis ($\textrm {tan}^{-1}(\frac {\sqrt {x(t)^2+z(t)^2}}{y(t)})$) decreases. As can be observed, the parameters $v_y$, $v_x$, and $p_0$ have a much greater impact on the SINR than on the SNR. For example, the comparison of Figs. 8(a) and 10(a) shows that when $p_0$ is increased from $(0,0)~$m to $(3,3)~$m at $t_{exp}=0.01~$s and $v_y=10~\textrm {m}/ \textrm {s}$, the SNR and SINR values are reduced by $6.44~$dB and $37.83~$dB, respectively. However, as $t_{exp}$ decreases, the spread of the LED image diminishes at higher $v_y$ and $v_x$. Consequently, as depicted in Fig. 10, a decrease in $t_{exp}$ results in a higher SINR value at larger $v_y$ or $v_x$. For instance, Fig. 10(a) illustrates that as $t_{exp}$ drops from $0.01$ s to $0.005$ s, the SINR value at $p_0 = (3, 3)$ m and $v_x=10~\textrm {m}/ \textrm {s}$ increases from $-4.53~$dB to $23.28~$dB.

Fig. 9. Interference of LED images due to vehicle movement with $v=2~\textrm {m}/ \textrm {s}$ at $y(0)=4~$m and $p_0=(3,3)~$m. (a) Vehicle movement in the direction of the X-axis. (b) Vehicle movement in the direction of the Y-axis.

Fig. 10. The SINR versus the vehicle’s speed. (a) The SINR versus $v$ for $y(0)=4~$m. (b) The SINR versus $v$ for $y(0)=8~$m.

The mathematical analyses derived in this article are assessed with experimental data in the next section.

5. Experimental results

This section uses the experimental setup depicted in Fig. 11 to evaluate the results of the PSF analysis, which forms the basis of this paper. The experimental parameters are listed in Table 3. According to the official specifications, the Redmi Note 8 image sensor has a pixel size of $0.8~\mu \textrm {m}$ and uses pixel-binning Tetracell technology [36]. Tetracell, also known as quad Bayer or 4-cell, means that adjacent $2\times 2$ pixels share the same color filter. In a low-light environment, Tetracell merges four neighboring $0.8~\mu \textrm {m}$ pixels to mimic a large $1.6~\mu \textrm {m}$ pixel and absorb more light. The traffic light in this experiment is in the line of sight of the camera and moves 3 meters horizontally on a rail at a speed of $v_x=2~\textrm {m}/ \textrm {s}$.

Fig. 11. The experimental setup designed to evaluate the PSF analysis.

Table 3. Experimental parameters.

Based on the characteristics of the receiver and the transmitter, the quality of the LED images deteriorates noticeably beyond a distance of $8$ m. Consequently, we have established $8$ m as the far distance and half of that (i.e., $4$ m) as the near distance to compare our theoretical and experimental results. Four videos were recorded for the stationary and moving scenarios at $y=4~\textrm {m}$ and $y=8~\textrm {m}$. Tables 4 and 5 and Figs. 12 and 13 present and compare the measured results with the simulated results. The important simulation parameters are shown in Table 2. In this section, due to the smaller pixel size compared to Section 4, we have increased the value of $A_{SN}$ to $34~\mu \textrm {V} /\textrm {e}^-$. Figure 12 compares the LEDs’ images in the recorded videos and in the PSF analysis for the stationary and moving states. Figure 13 compares the DN of the LED’s image calculated by the PSF analysis with the profile of the LED’s image in the recorded video in the X direction. These figures illustrate the good agreement between the experimental and simulated results.

Fig. 12. Comparison of the LED’s image in the video frames and PSF analysis. (a) Video, $y(0)=4~$m, $v_x=0~\textrm {m}/ \textrm {s}$. (b) Video, $y(0)=4~$m, $v_x=2~\textrm {m}/ \textrm {s}$. (c) Video, $y(0)=8~$m, $v_x=0~\textrm {m}/ \textrm {s}$. (d) Video, $y(0)=8~$m, $v_x=2~\textrm {m}/ \textrm {s}$. (e) PSF, $y(0)=4~$m, $v_x=0~\textrm {m}/ \textrm {s}$. (f) PSF, $y(0)=4~$m, $v_x=2~\textrm {m}/ \textrm {s}$. (g) PSF, $y(0)=8~$m, $v_x=0~\textrm {m}/ \textrm {s}$. (h) PSF, $y(0)=8~$m, $v_x=2~\textrm {m}/ \textrm {s}$.

Fig. 13. Comparison of the LED’s image profile in the X direction on recorded frames and the simulation results. (a) LED’s signal for $y(0)=4~$m and $v_x=0~\textrm {m}/ \textrm {s}$. (b) LED’s signal for $y(0)=4~$m and $v_x=2~\textrm {m}/ \textrm {s}$. (c) LED’s signal for $y(0)=8~$m and $v_x=0~\textrm {m}/ \textrm {s}$. (d) LED’s signal for $y(0)=8~$m and $v_x=2~\textrm {m}/ \textrm {s}$.

Table 4. The comparison of the LED’s SNR in the recorded video, PSF analysis, and geometric analysis.

Table 5. The comparison of the LED’s SINR in the recorded video, PSF analysis, and geometric analysis.

In order to calculate the SNR, the interference between the LEDs should be minimized. Therefore, in this part of the experiment, only the central LED of the traffic light is turned ON for data transmission, and the DN values of the recorded videos are used in Eq. (3) for the experimental computation of the SNR. Tables 4 and 5 compare the SNR and SINR of the experimental results with the SNR and SINR of the simulated results using the PSF and geometric analyses. In Tables 4 and 5, we have only compared the theoretical and experimental results for speeds of $0$ and $2$ m/s due to the speed constraints of our laboratory equipment. Since the geometric analysis is an idealization and the PSF analysis is a more accurate model, we anticipate that the PSF results will be in better agreement with the experimental outcomes than those derived from the geometric model. The information presented in Tables 4 and 5 confirms this prediction for both speeds of $0$ and $2$ m/s. As evidenced by Tables 4 and 5, the SNR and SINR of the PSF analysis are closer to the experimental results than those of the geometric analysis. We anticipate that this trend will continue to be observed at higher speeds as well. The difference between the simulation results in Figs. 8 and 10 and the experimental results in Tables 4 and 5 is due to the difference in pixel size, the $A_{SN}$ value, and the use of Tetracell technology.

6. Conclusion

In this paper, we considered a vehicular MIMO-OCC scenario in which a moving vehicle’s camera served as the receiver and the traffic light’s LEDs acted as transmitters. We used precise PSF analysis to study the effect of vehicle mobility on the performance of the vehicular MIMO-OCC system because mobility has considerable adverse effects on the performance of the OCC system in vehicular applications. The outcomes of the PSF analysis were in good agreement with the experimental results provided in this work. The results showed that increasing the vehicle’s speed from 0 to $10~\textrm {m}/ \textrm {s}$ significantly decreases the SNR and SINR of the OCC system. This study demonstrated that the camera’s motion has a greater influence on the spreading of the LED’s image at shorter distances. Moreover, changing the LED’s position from $p_0=(0,0)~$m to $p_0=(3,3)~$m causes the SNR and SINR to decrease more as the vehicle’s speed in the Y direction rises. However, increasing the angular velocity from $0$ to $0.5~\textrm {rad}/ \textrm {s}$ has a negligible influence on the SNR and SINR.

7. Appendix A

In this section, the location of the LEDs at each moment of $t$ relative to the optical axis of the lens in Fig. 1 is obtained. First, we obtain the momentary location of the moving vehicle in the direction of the Y-axis. To delineate the continuous movement of the vehicle in the time range from $0$ to $t$, we divide this time interval into $N$ subintervals and then take $N$ towards infinity. In each subinterval ${\Delta } t=\frac {t}{N}$, the vehicle moves forward by $v_y {{\Delta }}t$ and rotates by ${\Delta }\alpha =\frac {\omega _y t}{N}$ relative to the lens’s optical axis in the previous subinterval. In the first subinterval, the lens’s optical axis rotates by ${\Delta }\alpha$ relative to the Y-axis of CS1 and coincides with the Y-axis of ${\textrm {CS}_{{\Delta } t}}_{1}$, as shown in Fig. 1(c). Then, the vehicle moves forward by $v_y {\Delta }t$ and arrives at point ${l_{{\Delta } t}}_{1}$. The locations of the lens and the LED’s spot in ${\textrm {CS}_{{\Delta } t}}_{1}$ relative to CS1 can be determined as follows:

$$\begin{aligned} &x_{{\Delta t}_1}=x(0) \cos(\Delta\alpha)-y(0) \sin(\Delta\alpha),\\ &y_{{\Delta t}_1}=y(0) \cos(\Delta\alpha)+x(0) \sin(\Delta\alpha)+ v_y \Delta t. \end{aligned}$$

Note that $z_{{\Delta t}_i}=z(0)$ for $i=1{\ldots }N$. We continue the same process in the subsequent subintervals. The locations of the lens and the LED’s spot in the second subinterval relative to ${\textrm {CS}_{{\Delta } t}}_{1}$ are:

$$\begin{aligned} &x_{{\Delta t}_2}=x_{{\Delta t}_1} \cos(\Delta\alpha)-y_{{\Delta t}_1} \sin(\Delta\alpha),\\ &y_{{\Delta t}_2}=y_{{\Delta t}_1} \cos(\Delta\alpha)+x_{{\Delta t}_1} \sin(\Delta\alpha)+ v_y \Delta t. \end{aligned}$$

Substituting (17) into Eq. (18), we have:

$$\begin{aligned} &x_{{\Delta t}_2}=x(0) \cos(2\Delta\alpha)-y(0) \sin(2\Delta\alpha)-v_y \Delta t \sin(\Delta\alpha),\\ &y_{{\Delta t}_2}=y(0) \cos(2\Delta\alpha)+x(0) \sin(2\Delta\alpha)+v_y \Delta t \cos(\Delta\alpha)+ v_y \Delta t. \end{aligned}$$

After repeating the same procedure until the $N^{th}$ subinterval, we finally arrive at:

$$\begin{aligned} &x_{{\Delta t}_N}=x(0) \cos(N\Delta\alpha)-y(0) \sin(N\Delta\alpha)-v_y \Delta t \sum\nolimits_{n=0}^{N-1} \sin(n\Delta\alpha),\\ &y_{{\Delta t}_N}=y(0) \cos(N\Delta\alpha)+x(0) \sin(N\Delta\alpha)+v_y \Delta t \sum\nolimits_{n=0}^{N-1} \cos(n\Delta\alpha). \end{aligned}$$

Employing the relationships $\Delta \alpha =\frac {\omega _y t}{N}$ and $\Delta t=\frac {t}{N}$ and taking $N$ towards infinity, we have:

$$\begin{aligned} &x(t)=x(0) \cos(\omega_y t)-y(0) \sin(\omega_y t)-v_y t \left(\lim_{N\rightarrow\infty}\frac{1}{N}\sum\nolimits_{n=0}^{N-1} \sin(\frac{n\omega_y t}{N})\right),\\ &y(t)=y(0) \cos(\omega_y t)+x(0) \sin(\omega_y t)+v_y t \left(\lim_{N\rightarrow\infty}\frac{1}{N}\sum\nolimits_{n=0}^{N-1} \cos(\frac{n\omega_y t}{N})\right),\\ &z(t)=z(0). \end{aligned}$$
where $p_t=(x(t), z(t))$ and $y(t)$ are the location of the LED’s spot in the XZ plane of $\textrm {CS}_{{\Delta t}_N}$ and the location of the lens on the Y-axis of $\textrm {CS}_{{\Delta t}_N}$, respectively. $\textrm {CS}_{{\Delta t}_N}$ is a coordinate system whose Y-axis coincides with the lens’s optical axis at time $t$. The Y-axis of $\textrm {CS}_{{\Delta t}_N}$ makes an angle of $\omega _y t$ with the Y-axis of the CS1 coordinate system. Substituting the relations
$$\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n=0}^{N-1} \sin(\frac{n\omega_y t}{N})=\left\{ \begin{array}{rl} 0~~~~~ & \omega_y=0\\ \frac{1-\cos(\omega_y t)}{\omega_y t} & \omega_y\neq0 \end{array}\right.$$
$$\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n=0}^{N-1} \cos(\frac{n\omega_y t}{N})=\left\{ \begin{array}{rl} 1~~~~~ & \omega_y=0\\ \frac{\sin(\omega_y t)}{\omega_y t} & \omega_y\neq0 \end{array}\right.$$
into Eq. (21), we finally obtain the location of the LED’s spot and the lens at time $t$ as follows:
$$\begin{aligned} &x(t)=x(0) \cos(\omega_y t)-y(0) \sin(\omega_y t)-\left\{ \begin{array}{ll} 0 & \omega_y=0 \\ v_y t \left( \frac{1-\cos(\omega_y t)}{\omega_y t}\right) & \omega_y\neq0 \\ \end{array}\right.\\ &y(t)=y(0) \cos(\omega_y t)+x(0) \sin(\omega_y t)+\left\{ \begin{array}{ll} v_y t & \omega_y=0\\ v_y t \left( \frac{\sin(\omega_y t)}{\omega_y t}\right) & \omega_y\neq0 \\ \end{array}\right.\\ &z(t)=z(0). \end{aligned}$$

By using the same argument as above, the momentary locations of the LED’s spot and the lens for a vehicle moving in the direction of the X-axis are obtained as:

$$\begin{aligned} &x(t)=x(0) \cos(\omega_x t)+y(0) \sin(\omega_x t)+\left\{ \begin{array}{ll} v_x t, & \omega_x=0\\ v_x t \left( \frac{\sin(\omega_x t)}{\omega_x t}\right), & \omega_x\neq0 \\ \end{array}\right.\\ &y(t)=y(0) \cos(\omega_x t)-x(0) \sin(\omega_x t)-\left\{ \begin{array}{ll} 0, & \omega_x=0 \\ v_x t \left( \frac{1-\cos(\omega_x t)}{\omega_x t}\right), & \omega_x\neq0 \\ \end{array}\right.\\ &z(t)=z(0). \end{aligned}$$
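As a quick consistency check (ours, not part of the paper), the closed-form expressions of Eq. (24) can be compared against the N-subinterval construction of Eqs. (17)–(20); the two agree as N grows. The parameter values in the example are arbitrary.

```python
import numpy as np

def closed_form(t, x0, y0, v_y, omega_y):
    """Eq. (24): location of the LED's spot relative to the optical axis at time t."""
    if omega_y == 0.0:
        return x0, y0 + v_y * t
    c, s = np.cos(omega_y * t), np.sin(omega_y * t)
    x_t = x0 * c - y0 * s - v_y * (1.0 - c) / omega_y
    y_t = y0 * c + x0 * s + v_y * s / omega_y
    return x_t, y_t

def discrete_steps(t, x0, y0, v_y, omega_y, N=100000):
    """Eqs. (17)-(20): N forward-and-rotate subintervals of Delta_t = t / N."""
    dt, da = t / N, omega_y * t / N
    c, s = np.cos(da), np.sin(da)
    x, y = x0, y0
    for _ in range(N):
        x, y = x * c - y * s, y * c + x * s + v_y * dt
    return x, y

# The two results should agree as N grows (x0 = 3 m, y0 = 4 m, v_y = 2 m/s, omega_y = 0.5 rad/s).
print(closed_form(0.01, 3.0, 4.0, 2.0, 0.5))
print(discrete_steps(0.01, 3.0, 4.0, 2.0, 0.5))
```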

Funding

R&D Center of Mobile Telecommunication Company of Iran (MCI) (RD-51-9911-0021).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. S. Kuutti, R. Bowden, and Y. Jin, “A survey of deep learning applications to autonomous vehicle control,” IEEE Trans. Intell. Transport. Syst. 22(2), 712–733 (2021). [CrossRef]  

2. B. Caiazzo, D. G. Lui, A. Petrillo, et al., “Distributed double-layer control for coordination of multiplatoons approaching road restriction in the presence of IoV communication delays,” IEEE Internet Things J. 9(6), 4090–4109 (2022). [CrossRef]  

3. J. He and Y. Liu, “Vehicle positioning scheme based on particle filter assisted single led visible light positioning and inertial fusion,” Opt. Express 31(5), 7742–7752 (2023). [CrossRef]  

4. K. G. Rallis, V. K. Papanikolaou, and P. D. Diamantoulakis, “Energy efficient cooperative communications in aggregated VLC/RF networks with NOMA,” IEEE Trans. Commun. 71(9), 5408–5419 (2023). [CrossRef]  

5. Y.-H. Chang, S.-Y. Tsai, and C.-W. Chow, “Unmanned-aerial-vehicle based optical camera communication system using light-diffusing fiber and rolling-shutter image-sensor,” Opt. Express 31(11), 18670–18679 (2023). [CrossRef]  

6. A. Celik, I. Romdhane, G. Kaddoum, et al., “A top-down survey on optical wireless communications for the internet of things,” IEEE Commun. Surv. Tutorials 25(1), 1–45 (2023). [CrossRef]  

7. S. A. H. Mohsan, A. Mazinani, H. B. Sadiq, et al., “A survey of optical wireless technologies: practical considerations, impairments, security issues and future research directions,” Opt. Quantum Electron. 54(3), 187 (2022). [CrossRef]  

8. K. Wang, T. Song, and Y. Wang, “Evolution of short-range optical wireless communications (tutorial),” J. Lightwave Technol. 41(4), 1019–1040 (2023). [CrossRef]  

9. J. He, K. Tang, J. He, et al., “Effective vehicle-to-vehicle positioning method using monocular camera based on vlc,” Opt. Express 28(4), 4433–4443 (2020). [CrossRef]  

10. N. Devulapalli, V. Matus, E. Eso, et al., “Lane-cross detection using optical camera-based road-to-vehicle communications,” in 2021 17th International Symposium on Wireless Communication Systems (ISWCS), (2021), pp. 1–5.

11. K. Ashraf, V. Varadarajan, and M. R. Rahman, “See-through a vehicle: Augmenting road safety information using visual perception and camera communication in vehicles,” IEEE Trans. Veh. Technol. 70(4), 3071–3086 (2021). [CrossRef]  

12. J. He and B. Zhou, “Vehicle positioning scheme based on visible light communication using a cmos camera,” Opt. Express 29(17), 27278–27290 (2021). [CrossRef]  

13. P. Singh, H. Jeon, and S. Yun, “Vehicle positioning based on optical camera communication in V2I environments,” Comput. Materials, & Continua 72(2), 2927–2945 (2022). [CrossRef]  

14. S. Halawi, E. Yaacoub, S. Kassir, et al., “Performance analysis of circular color shift keying in VLC systems with camera-based receivers,” IEEE Trans. Commun. 67(6), 4252–4266 (2019). [CrossRef]  

15. E. Eso, Z. Ghassemlooy, S. Zvanovec, et al., “Experimental demonstration of vehicle to road side infrastructure visible light communications,” in 2019 2nd West Asian Colloquium on Optical Wireless Communications (WACOWC), (2019), pp. 85–89.

16. H. Marina, I. Soto, J. Valerio, et al., “Automatic traffic light detection using AI for VLC,” in 2022 13th International Symposium on Communication Systems, Networks and Digital Signal Processing (CSNDSP), (2022), pp. 446–451.

17. D. N. Choi, S. Y. Jin, J. Lee, et al., “Deep learning technique for improving data reception in optical camera communication-based V2I,” in 2019 28th International Conference on Computer Communication and Networks (ICCCN), (2019), pp. 1–2.

18. H. Matsuda, H. Matsushita, and S. Arai, “Recognition using YOLO for degraded images on visible light communication,” in International Symposium on Nonlinear Theory and Its Applications, (2022), pp. 289–292.

19. T. Nguyen, N. T. Le, and Y. M. Jang, “Asynchronous scheme for optical camera communication-based infrastructure-to-vehicle communication,” Int. J. Distributed Sens. Networks 2015, 1–10 (2015). [CrossRef]  

20. H. B. Eldeeb, M. Elamassie, S. M. Sait, et al., “Infrastructure-to-vehicle visible light communications: Channel modelling and performance analysis,” IEEE Trans. Veh. Technol. 71(3), 2240–2250 (2022). [CrossRef]  

21. M. S. Demir, H. B. Eldeeb, and M. Uysal, “CoMP-based dynamic handover for vehicular VLC networks,” IEEE Commun. Lett. 24(9), 2024–2028 (2020). [CrossRef]  

22. P. A. Cheruvillil and D. Sriram Kumar, “Design and analysis of infrastructure to vehicle visible light communication channel modeling,” in 27th International Conference on Advanced Computing and Communications (ADCOM 2022), vol. 2023 (2023), pp. 12–16.

23. S. Caputo, L. Mucchi, and F. Cataliotti, “Measurement-based VLC channel characterization for I2V communications in a real urban scenario,” Veh. Commun. 28, 100305 (2021). [CrossRef]  

24. M. Seminara, T. Nawaz, and S. Caputo, “Characterization of field of view in visible light communication systems for intelligent transportation systems,” IEEE Photonics J. 12(4), 1–16 (2020). [CrossRef]  

25. V. Matus, V. Guerra, and C. Jurado-Verdu, “Wireless sensor networks using sub-pixel optical camera communications: Advances in experimental channel evaluation,” Sensors 21(8), 2739 (2021). [CrossRef]  

26. T. Nagura, T. Yamazato, M. Katayama, et al., “Improved decoding methods of visible light communication system for ITS using LED array and high-speed camera,” in 2010 IEEE 71st Vehicular Technology Conference, (2010), pp. 1–5.

27. M. S. Ifthekhar, M. A. Hossain, C. H. Hong, et al., “Radiometric and geometric camera model for optical camera communications,” in 2015 Seventh International Conference on Ubiquitous and Future Networks, (2015), pp. 53–57.

28. E. Torres Zapata, “VLC systems for smart cities: mobility management scheme for vehicular networks,” Ph.D. thesis, University of Las Palmas de Gran Canaria (2022).

29. A. Al-Kinani, C.-X. Wang, L. Zhou, et al., “Optical wireless communication channel measurements and models,” IEEE Commun. Surv. Tutorials 20(3), 1939–1962 (2018). [CrossRef]  

30. Z. Ghassemlooy, W. Popoola, and S. Rajbhandari, Optical wireless communications: system and channel modelling with Matlab® (CRC press, 2019).

31. M. Eghbal, F. S. Tabataba, and J. Abouei, “Investigating the effect of turbulence on IPI in a vehicular OCC system using PSF analysis,” Opt. Continuum 1(9), 2011–2029 (2022). [CrossRef]  

32. https://www.camaweigh.com/led-intelligent-traffic-signal-light.html.

33. J. Goodman, “Wave-optics analysis of coherent optical systems,” in Introduction to Fourier Optics, (Roberts and Company Publishers, 2005).

34. K. Khare, Fourier Optics and Computational Imaging (Wiley Online Library, 2015).

35. T. Yamazato, M. Kinoshita, and S. Arai, “Vehicle motion and pixel illumination modeling for image sensor based visible light communication,” IEEE J. Select. Areas Commun. 33(9), 1793–1805 (2015).

36. https://en.wikipedia.org/wiki/ISOCELL.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Figures (13)

Fig. 1. The OCC-based vehicular communication architecture. (a) Vehicle movement in the Y direction. (b) Coordinate system at time $t_{exp}$. (c) Coordinate system at any time $t$. (d) Vehicle movement in the X direction.

Fig. 2. Optical transmitter in the considered MIMO-OCC system.

Fig. 3. The front view of the OCC-based vehicular system at time $t$.

Fig. 4. Point source intensity distribution in the image sensor of a stationary and a moving camera. (a) Geometric analysis. (b) PSF analysis at $y(0)=4~$m. (c) PSF analysis at $y(0)=8~$m.

Fig. 5. Point source image in the image sensor of a stationary and a moving camera for $y(0)=4~$m. (a) Stationary camera. (b) Moving camera.

Fig. 6. Intensity distribution of an LED in the image sensor of the stationary and moving camera. (a) Stationary camera. (b) PSF analysis at $y(0)=4~$m. (c) PSF analysis at $y(0)=8~$m.

Fig. 7. Image of an LED in the PSF analysis for the stationary and moving camera at $y(0)=4~$m. (a) Stationary camera. (b) Moving camera.

Fig. 8. The SNR versus the vehicle’s speed. (a) $y(0)=4~$m. (b) $y(0)=8~$m.

Fig. 9. Interference of LED images due to vehicle movement with $v=2~\textrm{m/s}$ at $y(0)=4~$m and $p_0=(3,3)~$m. (a) Vehicle movement along the X-axis. (b) Vehicle movement along the Y-axis.

Fig. 10. The SINR versus the vehicle’s speed. (a) $y(0)=4~$m. (b) $y(0)=8~$m.

Fig. 11. The experimental setup designed to evaluate the PSF analysis.

Fig. 12. Comparison of the LED’s image in the video frames and the PSF analysis. (a) Video, $y(0)=4~$m, $v_x=0~\textrm{m/s}$. (b) Video, $y(0)=4~$m, $v_x=2~\textrm{m/s}$. (c) Video, $y(0)=8~$m, $v_x=0~\textrm{m/s}$. (d) Video, $y(0)=8~$m, $v_x=2~\textrm{m/s}$. (e) PSF, $y(0)=4~$m, $v_x=0~\textrm{m/s}$. (f) PSF, $y(0)=4~$m, $v_x=2~\textrm{m/s}$. (g) PSF, $y(0)=8~$m, $v_x=0~\textrm{m/s}$. (h) PSF, $y(0)=8~$m, $v_x=2~\textrm{m/s}$.

Fig. 13. Comparison of the LED’s image profile in the X direction in the recorded frames and the simulation results. (a) LED’s signal for $y(0)=4~$m and $v_x=0~\textrm{m/s}$. (b) LED’s signal for $y(0)=4~$m and $v_x=2~\textrm{m/s}$. (c) LED’s signal for $y(0)=8~$m and $v_x=0~\textrm{m/s}$. (d) LED’s signal for $y(0)=8~$m and $v_x=2~\textrm{m/s}$.

Tables (5)

Table 1. Simulation parameters used for Figs. 4 to 7.

Table 2. Simulation parameters used for the SNR and SINR computation.

Table 3. Experimental parameters.

Table 4. Comparison of the LED’s SNR in the recorded video, the PSF analysis, and the geometric analysis.

Table 5. Comparison of the LED’s SINR in the recorded video, the PSF analysis, and the geometric analysis.

Equations (24)

$$x(t)=\begin{cases} x(0), & \omega_y=0 \\ x(0)\cos(\omega_y t)-y(0)\sin(\omega_y t)-v_y t\left(\dfrac{1-\cos(\omega_y t)}{\omega_y t}\right), & \omega_y\neq 0 \end{cases}$$
$$y(t)=\begin{cases} y(0)+v_y t, & \omega_y=0 \\ y(0)\cos(\omega_y t)+x(0)\sin(\omega_y t)+v_y t\left(\dfrac{\sin(\omega_y t)}{\omega_y t}\right), & \omega_y\neq 0 \end{cases}$$
$$z(t)=z(0).$$

$$x(t)=\begin{cases} x(0)+v_x t, & \omega_x=0 \\ x(0)\cos(\omega_x t)+y(0)\sin(\omega_x t)+v_x t\left(\dfrac{\sin(\omega_x t)}{\omega_x t}\right), & \omega_x\neq 0 \end{cases}$$
$$y(t)=\begin{cases} y(0), & \omega_x=0 \\ y(0)\cos(\omega_x t)-x(0)\sin(\omega_x t)-v_x t\left(\dfrac{1-\cos(\omega_x t)}{\omega_x t}\right), & \omega_x\neq 0 \end{cases}$$
$$z(t)=z(0).$$
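To make the mobility model concrete, the following minimal Python sketch evaluates the transmitter coordinates in the camera frame for motion along the Y or X axis, switching between the $\omega=0$ and $\omega\neq 0$ branches of the two equations above. The function and variable names (transmitter_position, x0, v, omega) are illustrative placeholders, not code from the paper.

```python
import numpy as np

def transmitter_position(t, x0, y0, z0, v, omega, axis="y"):
    """Transmitter coordinates (x, y, z) in the camera frame at time t,
    following the piecewise mobility model: straight-line motion when the
    angular velocity is zero, combined rotation/translation otherwise."""
    if axis == "y":                      # vehicle moves along the Y axis
        if omega * t == 0:
            x, y = x0, y0 + v * t
        else:
            x = (x0 * np.cos(omega * t) - y0 * np.sin(omega * t)
                 - v * t * (1 - np.cos(omega * t)) / (omega * t))
            y = (y0 * np.cos(omega * t) + x0 * np.sin(omega * t)
                 + v * t * np.sin(omega * t) / (omega * t))
    else:                                # vehicle moves along the X axis
        if omega * t == 0:
            x, y = x0 + v * t, y0
        else:
            x = (x0 * np.cos(omega * t) + y0 * np.sin(omega * t)
                 + v * t * np.sin(omega * t) / (omega * t))
            y = (y0 * np.cos(omega * t) - x0 * np.sin(omega * t)
                 - v * t * (1 - np.cos(omega * t)) / (omega * t))
    return x, y, z0

# Illustrative values only: y(0) = 4 m, v = 2 m/s, small angular velocity
print(transmitter_position(t=1e-3, x0=3.0, y0=4.0, z0=3.0, v=2.0, omega=0.01))
```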
$$\textrm{SNR}=10\,\log_{10}\!\left(\textrm{mean}\left(\textrm{SNR}_{\textrm{ROI}}\right)\right).$$
$$\textrm{SINR}=10\,\log_{10}\!\left(\textrm{mean}\left(\textrm{SINR}_{\textrm{ROI}}\right)\right).$$
$$\textrm{SNR}_{\textrm{ROI}}=\frac{m_{LED}^{2}}{\sigma_{noise}^{2}}.$$
$$\textrm{SINR}_{\textrm{ROI}}=\frac{m_{LED}^{2}}{m_{I}^{2}+\sigma_{noise}^{2}}.$$
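As a small illustration of how the ROI metrics map to the dB values reported in the paper, the sketch below assumes $m_{LED}$, $m_I$, and $\sigma_{noise}$ are available as per-pixel arrays over the region of interest; the array and function names are placeholders for illustration only.

```python
import numpy as np

def snr_sinr_db(m_led, sigma_noise, m_interference=None):
    """SNR (and optionally SINR) in dB over a region of interest,
    computed per pixel and averaged before the 10*log10 conversion."""
    m_led = np.asarray(m_led, dtype=float)
    sigma_noise = np.asarray(sigma_noise, dtype=float)
    snr_db = 10 * np.log10(np.mean(m_led**2 / sigma_noise**2))
    if m_interference is None:
        return snr_db
    m_i = np.asarray(m_interference, dtype=float)
    sinr_db = 10 * np.log10(np.mean(m_led**2 / (m_i**2 + sigma_noise**2)))
    return snr_db, sinr_db

# Example with synthetic 10x10 ROIs (illustrative numbers only)
roi = np.full((10, 10), 120.0)      # mean LED signal per pixel
noise = np.full((10, 10), 5.0)      # noise standard deviation per pixel
interf = np.full((10, 10), 15.0)    # mean interference per pixel
print(snr_sinr_db(roi, noise, interf))
```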
$$h_l\left(x_l,z_l;x(t),y(t),z(t)\right)=\frac{e^{jky(t)}}{j\lambda y(t)}\,e^{\frac{jk}{2y(t)}\left[\left(x_l-x(t)\right)^{2}+\left(z_l-z(t)\right)^{2}\right]},$$

$$h'_l\left(x_l,z_l;x(t),y(t),z(t)\right)=h_l\left(x_l,z_l;x(t),y(t),z(t)\right)P\left(x_l,z_l\right)e^{-\frac{jk}{2f}\left(x_l^{2}+z_l^{2}\right)},$$

$$h_i\left(x_i,z_i;x(t),y(t),z(t)\right)=\frac{e^{jkd_i}}{j\lambda d_i}\iint h'_l\left(x_l,z_l;x(t),y(t),z(t)\right)\times e^{\frac{jk}{2d_i}\left[\left(x_i-x_l\right)^{2}+\left(z_i-z_l\right)^{2}\right]}dx_l\,dz_l,$$

$$h_i\left(x_i,z_i;x(t),y(t),z(t)\right)=\frac{1}{\lambda^{2}y(t)d_i}\iint P(x,z)\,e^{\frac{j\pi}{\lambda}\left[\frac{1}{y(t)}-\frac{1}{y(0)}\right]\left(x^{2}+z^{2}\right)}\times e^{-\frac{j2\pi}{\lambda d_i}\left[\left(x_i+M(t)x(t)\right)x+\left(z_i+M(t)z(t)\right)z\right]}dx\,dz.$$

$$\left|u_i\left(x_i,z_i,t\right)\right|^{2}=K_h(t)\iint_{\textrm{Area}_{LED}}\left|h_i\left(x_i,z_i;x(t),y(t),z(t)\right)\right|^{2}\left|u_o\left(x(t),z(t)\right)\right|^{2}dx(t)\,dz(t),$$

$$\left|u_i\left(x_i,z_i\right)\right|^{2}=\frac{1}{t_{exp}}\int_{0}^{t_{exp}}\left|u_i\left(x_i,z_i,t\right)\right|^{2}dt.$$

$$\left|u_i\left(x_i,z_i\right)\right|^{2}=K_h(0)\left(\frac{\pi R^{2}}{\lambda^{2}d_i\,y(0)}\right)^{2}\left|\frac{J_1(2S)}{S}\right|^{2}\left|u_o\left(x(0),z(0)\right)\right|^{2},$$

$$\left(x_i+M(0)x(0)\right)^{2}+\left(z_i+M(0)z(0)\right)^{2}\leq\left(\frac{1.9\lambda d_i}{\pi R}\right)^{2}.$$

$$\Delta_{mov}=\sqrt{\left(M(t_{exp})x(t_{exp})-M(0)x(0)\right)^{2}+\left(M(t_{exp})z(t_{exp})-M(0)z(0)\right)^{2}}.$$
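A short numerical sketch of the closed-form stationary-camera intensity, the Airy-disk (main-lobe) radius, and the image-plane displacement $\Delta_{mov}$ is given below. It assumes $S$ is the normalised radial coordinate $S=\pi R\,r/(\lambda d_i)$, with $r$ the distance from the ideal image point and $M=d_i/y$ the magnification; these definitions and all names are assumptions made for illustration, not taken verbatim from the paper.

```python
import numpy as np
from scipy.special import j1            # Bessel function of the first kind, order 1

def airy_intensity(xi, zi, x0, z0, y0, d_i, R, lam, K_h=1.0, u0_sq=1.0):
    """Image-plane intensity of an in-focus point source for a stationary
    camera, i.e. the |J1(2S)/S|^2 pattern with the prefactor of the
    closed-form expression above (assumed S = pi*R*r/(lambda*d_i))."""
    M0 = d_i / y0                                     # magnification at t = 0
    r = np.hypot(xi + M0 * x0, zi + M0 * z0)          # distance from the image point
    S = np.pi * R * r / (lam * d_i)
    S = np.where(S == 0, 1e-12, S)                    # avoid 0/0 at the pattern centre
    prefactor = K_h * (np.pi * R**2 / (lam**2 * d_i * y0))**2
    return prefactor * np.abs(j1(2 * S) / S)**2 * u0_sq

def airy_disk_radius(lam, d_i, R):
    """Main-lobe (first-null) radius on the sensor, per the inequality above."""
    return 1.9 * lam * d_i / (np.pi * R)

def image_displacement(x0, z0, xt, zt, y0, yt, d_i):
    """Delta_mov: shift of the image point over the exposure due to motion."""
    M0, Mt = d_i / y0, d_i / yt
    return np.hypot(Mt * xt - M0 * x0, Mt * zt - M0 * z0)
```

One intuitive use of these quantities is to compare image_displacement(...) with airy_disk_radius(...): when the shift accumulated during the exposure stays within the main lobe, the blur caused by the vehicle's motion is expected to be small.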
$$x_{\Delta t}^{1}=x(0)\cos(\Delta\alpha)-y(0)\sin(\Delta\alpha),\qquad y_{\Delta t}^{1}=y(0)\cos(\Delta\alpha)+x(0)\sin(\Delta\alpha)+v_y\Delta t.$$

$$x_{\Delta t}^{2}=x_{\Delta t}^{1}\cos(\Delta\alpha)-y_{\Delta t}^{1}\sin(\Delta\alpha),\qquad y_{\Delta t}^{2}=y_{\Delta t}^{1}\cos(\Delta\alpha)+x_{\Delta t}^{1}\sin(\Delta\alpha)+v_y\Delta t.$$

$$x_{\Delta t}^{2}=x(0)\cos(2\Delta\alpha)-y(0)\sin(2\Delta\alpha)-v_y\Delta t\sin(\Delta\alpha),\qquad y_{\Delta t}^{2}=y(0)\cos(2\Delta\alpha)+x(0)\sin(2\Delta\alpha)+v_y\Delta t\cos(\Delta\alpha)+v_y\Delta t.$$

$$x_{\Delta t}^{N}=x(0)\cos(N\Delta\alpha)-y(0)\sin(N\Delta\alpha)-v_y\Delta t\sum_{n=0}^{N-1}\sin(n\Delta\alpha),\qquad y_{\Delta t}^{N}=y(0)\cos(N\Delta\alpha)+x(0)\sin(N\Delta\alpha)+v_y\Delta t\sum_{n=0}^{N-1}\cos(n\Delta\alpha).$$

$$x(t)=x(0)\cos(\omega_y t)-y(0)\sin(\omega_y t)-v_y t\left(\lim_{N\to\infty}\frac{1}{N}\sum_{n=0}^{N-1}\sin\!\left(\frac{n\omega_y t}{N}\right)\right),$$
$$y(t)=y(0)\cos(\omega_y t)+x(0)\sin(\omega_y t)+v_y t\left(\lim_{N\to\infty}\frac{1}{N}\sum_{n=0}^{N-1}\cos\!\left(\frac{n\omega_y t}{N}\right)\right),$$
$$z(t)=z(0).$$

$$\lim_{N\to\infty}\frac{1}{N}\sum_{n=0}^{N-1}\sin\!\left(\frac{n\omega_y t}{N}\right)=\begin{cases}0, & \omega_y=0\\[4pt] \dfrac{1-\cos(\omega_y t)}{\omega_y t}, & \omega_y\neq 0\end{cases}$$

$$\lim_{N\to\infty}\frac{1}{N}\sum_{n=0}^{N-1}\cos\!\left(\frac{n\omega_y t}{N}\right)=\begin{cases}1, & \omega_y=0\\[4pt] \dfrac{\sin(\omega_y t)}{\omega_y t}, & \omega_y\neq 0\end{cases}$$

$$x(t)=x(0)\cos(\omega_y t)-y(0)\sin(\omega_y t)-\begin{cases}0, & \omega_y=0\\ v_y t\left(\dfrac{1-\cos(\omega_y t)}{\omega_y t}\right), & \omega_y\neq 0\end{cases}$$
$$y(t)=y(0)\cos(\omega_y t)+x(0)\sin(\omega_y t)+\begin{cases}v_y t, & \omega_y=0\\ v_y t\left(\dfrac{\sin(\omega_y t)}{\omega_y t}\right), & \omega_y\neq 0\end{cases}$$
$$z(t)=z(0).$$

$$x(t)=x(0)\cos(\omega_x t)+y(0)\sin(\omega_x t)+\begin{cases}v_x t, & \omega_x=0\\ v_x t\left(\dfrac{\sin(\omega_x t)}{\omega_x t}\right), & \omega_x\neq 0\end{cases}$$
$$y(t)=y(0)\cos(\omega_x t)-x(0)\sin(\omega_x t)-\begin{cases}0, & \omega_x=0\\ v_x t\left(\dfrac{1-\cos(\omega_x t)}{\omega_x t}\right), & \omega_x\neq 0\end{cases}$$
$$z(t)=z(0).$$
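The two limit identities above are easy to check numerically; the short Python sketch below compares the discrete averages with their closed forms for arbitrary illustrative values of $\omega_y$ and $t$ (not taken from the paper).

```python
import numpy as np

def avg_sin(omega, t, N=100_000):
    """(1/N) * sum_{n=0}^{N-1} sin(n*omega*t/N)."""
    n = np.arange(N)
    return np.sin(n * omega * t / N).mean()

def avg_cos(omega, t, N=100_000):
    """(1/N) * sum_{n=0}^{N-1} cos(n*omega*t/N)."""
    n = np.arange(N)
    return np.cos(n * omega * t / N).mean()

omega_y, t = 0.5, 0.02                   # rad/s and s, illustrative values only
print(avg_sin(omega_y, t), (1 - np.cos(omega_y * t)) / (omega_y * t))
print(avg_cos(omega_y, t), np.sin(omega_y * t) / (omega_y * t))
```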