Optica Publishing Group

Experimental demonstration of a free space optical wireless video transmission system based on image compression sensing algorithm

Open Access

Abstract

The wireless transmission of video data mainly entails addressing the massive video stream data and ensuring the quality of image frame transmission. To reduce the amount of data while ensuring an optimal data transmission rate and quality, we propose a free-space optical video transmission system that applies compressed sensing (CS) algorithms to a wireless optical communication system. Based on an Artix-7 series field programmable gate array (FPGA) chip, we completed the hardware design of the optical wireless video transceiver board; the CS image is transmitted online to the FPGA through Gigabit Ethernet, and the video data is encoded by a gigabit transceiver with low power (GTP) and converted into an optical signal, which is relayed to the atmospheric turbulence simulation channel through an attenuator and a collimating mirror. After the optical signal is photoelectrically converted and decoded at the receiving end, a Camera-Link frame grabber is used to collect the image, which is then reconstructed offline. Herein, the link transmission conditions under different algorithm sampling rates, receiving-end optical powers, and atmospheric coherence lengths are measured. The experimental results indicate that the encrypt-then-compress (ETC) type algorithms exhibit better image compression, transmission, and reconstruction performance, and that the 2D compressed sensing (2DCS) algorithm performs best. Under the condition that the optical power satisfies link connectivity, the peak signal-to-noise ratio (PSNR) of the reconstructed image is 3–7 dB higher than that of the comparison algorithms. In a strong atmospheric turbulence environment, the PSNR of the reconstructed image under different transmission rates at the receiving end can still exceed 30 dB, ensuring the complete reconstruction of the image.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Due to the development of the Internet of Things (IoT) and wireless communication technology, the demand for video data communication and transmission is increasing daily. The real-time transmission of high-definition (720p or 1080p) and ultra-high-definition (4 K and above) video data streams is in exceedingly high demand in both scientific research and daily life. According to Cisco's 2017 report, video traffic over the internet was estimated to triple within four years, reaching 82% of all internet data [1]. The transmission of video data streams has a wide range of application scenarios. Currently, driven by the development of the IoT and big-data artificial intelligence, the Internet of Video Things (IoVT) has emerged [2]. The IoVT can crucially facilitate sectors such as medical care, transportation, and robotics. Dae Yeol Lee et al. constructed a video database containing 437 videos and 15,000 subjective evaluations of video quality, combining techniques such as sub-sampling and video compression [3], which can be utilized for technical analysis and for addressing the commercial needs of subsequent research. In addition, video transmission also represents a common medium for communication work. By arranging underwater detector arrays to capture time-coded optical signals and reconstruct 3D and 4D video streams, the detection of underwater optical signals can be completed [4,5], and underwater optical communication can be realized. Similarly, with a laser or photodiode utilized as a light source in indoor free space and a complementary metal oxide semiconductor (CMOS) camera as a receiver, an optical camera communication system has been built whose bit error ratio (BER) successfully attained the < 7% forward error correction (FEC) limit of 3.8 × 10−3 [6,7]. Against this application background, managing massive video data while ensuring its transmission quality and real-time performance poses immense challenges to current data transmission systems.

In video transmission work, traditional H.264 coding and high efficiency video coding (HEVC) [8], which has gained immense popularity in recent years, are undoubtedly the most widely utilized coding technologies. These encoding methods work effectively; however, exploiting the temporal and spatial redundancy in the video sequence often entails immense computation [9]. Compressed sensing (CS) is an algorithm that integrates data sampling and compression [10,11]. First, the compressed-sensing-based algorithm mechanism can effectively reduce the total sampling rate of the algorithm and eliminate redundant data [12]. Second, when CS algorithms are applied to optical video data transmission, optical compression reduces the high peak-to-average power ratio, thereby reducing the phase noise occasioned by nonlinearity [13]; atmospheric channel noise is often composed of nonlinear noise [14]. Meanwhile, because compressed sensing combines undersampled data with the processes of encoding, sampling, and reconstruction, the security of image data is enhanced [15], and more possibilities are introduced for incorporating different encryption mappings into the algorithm [16–19]. Owing to these technological developments, compressed sensing algorithms have been applied in mobile video cloud and terminal transmission applications [20] to transmit large video data. Among the numerous key technologies of wireless video transmission, spectrum congestion (especially the spectrum congestion of medium- or short-distance mobile communication access [21]) immensely limits the development of wireless communication technology. Optical wireless communication (OWC), a revolutionary technology that exhibits a faster data transmission rate, a larger bandwidth capacity, and higher reliability, is gradually becoming a means of overcoming this limitation. Compared with traditional radio frequency communication, OWC covers the entire band from ultraviolet to near infrared, and 12-channel FSO transmission at a 150 Gbps rate has been demonstrated [22]. In addition, OWC technology provides better system flexibility, data security, and system integration; therefore, it will undoubtedly become a preferable approach for developing video data transmission technology [23,24].

Herein, a free-space optical video transmission system that utilizes the proposed CS algorithm sampling transmission is experimentally demonstrated. Based on Artix-7 series FPGA chips, we designed and developed an optical wireless video transceiver processing board. We utilize the Ethernet link on the PC side to relay the CS-processed image to the board online, encode the video signal through GTP, convert the electrical signal into signal light using the optical module, transmit it through optical fiber, and attenuate it using the attenuator. Moreover, we utilize an optical collimator to relay the signal into the channel of the atmospheric turbulence simulation pool for transmission experiments. The receiving end also utilizes an optical collimator to receive the signal light; the light is then coupled into optical fiber and relayed to the FPGA board at the receiving end, where the video signal is converted to the Camera-Link protocol; a Camera-Link frame grabber collects and stores the image files, and the reconstruction is completed offline. By controlling the experimental variables, including the algorithm sampling rate parameters, optical signal transmission rate, received signal optical power, and atmospheric coherence length, the performance of the optical wireless video transmission system is experimentally analyzed. The results indicate that the encrypt-then-compress (ETC) type compressed sensing 2DCS algorithm utilized in the experiment performs significantly better than the other algorithms in atmospheric channel transmission. Under the condition that the optical power marginally satisfies link connectivity, the PSNR value of the reconstructed image is 3-7 dB higher than that of the comparison algorithms. In a strong atmospheric turbulence environment, the PSNR of the reconstructed image under different transmission rates at the receiving end can still attain >30 dB, ensuring the complete reconstruction of the image. This study proposes the application of the CS algorithm to optical wireless video transmission and analyzes its working performance, which can effectively reduce the data transmission burden and enhance system performance.

2. Experimental principle of optical wireless video transmission

Figure 1 depicts the principle of the optical wireless video transmission experiment herein. CS algorithms are utilized at the input and output ends, including two typical block compressed sensing (BCS) algorithms, namely the smoothed projected Landweber (SPL) [25] and gradient projection for sparse reconstruction (GPSR) [26] algorithms, as comparison objects. Due to the wide application of big data, cloud computing, and distributed processing, it is usually necessary in such processing scenarios to encrypt the carrier signal before it is transmitted to the cloud; thus, the encrypt-then-compress (ETC) processing method is imperative. This method not only significantly enhances the security of data transmission, but also improves the quality of the reconstructed image. Based on the SPL algorithm, the 2D random permutation (2DRP) [27] and 2D compressed sensing (2DCS) [28] algorithms were developed. 2DRP adopts a two-dimensional random permutation strategy, and the block segmentation artifacts in BCS reconstruction are suppressed through its encoding-reconstruction process; compared with the SPL algorithm, 2DRP enhances the image reconstruction effect while optimizing the transmission security. 2DCS adopts global random permutation coding and simultaneously utilizes the two-dimensional Landweber projection method to facilitate the reconstruction calculation, which enhances the error correction ability against transmission errors and further optimizes the quality indices of the reconstructed image. Because these two ETC algorithms are theoretically more suitable for the complex atmospheric channel transmission environment than traditional algorithms, the 2DRP and 2DCS algorithms are selected for experimental comparison. The general concept of a typical block-based compressed sensing algorithm under different sampling rates entails dividing the original image into small blocks of size B × B, utilizing the measurement matrix to form a compressed image through sampling, and generating a reconstructed image by recombination after transmission and decoding [29]. Correspondingly, the ETC CS algorithm must first perform an algorithmic encryption operation on the original image and subsequently segment and compress the encrypted image; in the image reconstruction process, a decryption operation must be performed on the image reconstructed by the CS algorithm to obtain the final reconstructed image. After being processed by the CS algorithm, the image data sequence is transmitted into the optical wireless video transmission experimental system through Ethernet and through the atmospheric turbulence simulation channel; the receiving end reconstructs the collected images and recombines the reconstructed image sequence to generate the video.
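To make the block-based sampling and the ETC processing order concrete, the following is a minimal Python sketch; the Gaussian measurement matrix, block size, and keyed global permutation are illustrative assumptions rather than the exact operators of the SPL, 2DRP, or 2DCS implementations.

```python
import numpy as np

def bcs_sample(img, B=16, rate=0.5, seed=0):
    """Split img into B x B blocks and sample each block with a shared
    random Gaussian measurement matrix Phi (M x B^2, M = rate * B^2)."""
    rng = np.random.default_rng(seed)
    M = int(rate * B * B)
    Phi = rng.standard_normal((M, B * B)) / np.sqrt(M)
    h, w = img.shape
    blocks = (img.reshape(h // B, B, w // B, B)   # (block_row, y, block_col, x)
                 .swapaxes(1, 2)
                 .reshape(-1, B * B))             # one flattened row per block
    return Phi, blocks @ Phi.T                    # M measurements per block

def etc_sample(img, B=16, rate=0.5, key=42):
    """Encrypt-then-compress: globally permute the pixels with a keyed
    permutation first, then apply the same block sampling."""
    rng = np.random.default_rng(key)
    perm = rng.permutation(img.size)              # keyed encryption permutation
    encrypted = img.ravel()[perm].reshape(img.shape)
    return perm, bcs_sample(encrypted, B, rate)

img = np.random.rand(256, 256)   # stand-in for one 256 x 256 video frame
Phi, y = bcs_sample(img)
print(y.shape)                   # (256, 128): 256 blocks, 128 samples per block
```

At the receiver, reconstruction inverts the same chain: solve the per-block CS problem, reassemble the blocks, and, for the ETC variants, apply the inverse permutation with the shared key.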

Fig. 1. Schematic diagram of the optical wireless video transmission experiment.

Figure 2(a) depicts the overall design of the optical wireless video transmission experimental system. In this paper, we propose an optical wireless video transmission system to verify the principle of optical transmission of video in atmospheric turbulence channels. First, the video is intercepted frame by frame into a set of image sequences. The sending-end host computer sequentially uses the CS algorithm to perform compression processing offline. The compressed image data is converted into a BIN format file and saved under a sequentially sorted name; the host-computer software then downloads the files to the specified folder. The BIN files are sent in sequence from the sending PC (PC-TX) to the space optical video transmission board over Gigabit Ethernet. Then, the FPGA caches and encodes the compressed image data inside the chip and outputs the data stream through the small form-factor pluggable (SFP) interface, and the SFP photoelectric conversion module (SFP-GE-LX-SM1550) completes the conversion from electrical signal to optical signal and outputs to the LC-FC interface optical fiber. After the optical attenuator (SM-1550 NM) further attenuates the transmitted optical power, the optical signal is collimated by the collimator (F810FC-1550-THORLABS) and output. After the optical signal is transmitted through the atmospheric turbulence simulation channel, the receiving end utilizes an optical collimator to align and collect the spatial light. The signal light (i.e., the spatial light that contains the video data signal) is received by the collimating mirror and subsequently coupled into the multimode optical fiber, which is input to the SFP photoelectric conversion module on the optical wireless video transmission board at the receiving end. The conversion module converts the optical signal into an electrical signal and transmits it to the FPGA chip for decoding. Finally, we convert the decoded image data to the video protocol again and output it through the Camera-Link interface; the Camera-Link frame grabber (Dalsa Xcelera-CL PX4 full) on the receiving PC (PC-RX) then captures and stores the video, after which the compressed image data is reconstructed and the reconstructed images are recombined to generate the video. Based on the preceding design, we developed the experimental environment illustrated in Fig. 2(b).
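As a hedged illustration of the PC-TX side of this pipeline, the sketch below streams the numbered BIN files to the board; the board address, TCP transport, and length-prefixed framing are hypothetical stand-ins for the host-computer software and Gigabit Ethernet interface actually used.

```python
import pathlib
import socket
import struct

BOARD_ADDR = ("192.168.1.10", 5000)  # hypothetical IP/port of the transceiver board

def send_bin_files(folder):
    """Send every BIN file in name order over one connection, prefixing each
    file with its byte length so the receiver can split the stream into frames."""
    with socket.create_connection(BOARD_ADDR) as sock:
        for path in sorted(pathlib.Path(folder).glob("*.bin")):
            payload = path.read_bytes()
            sock.sendall(struct.pack(">I", len(payload)) + payload)

# send_bin_files("compressed_frames/")  # frames saved as 0001.bin, 0002.bin, ...
```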

Fig. 2. Design of the optical wireless video transmission experimental system: (a) Principle of an optical wireless video transmission experimental system, (b) construction of the optical wireless video transmission experimental system, (c) PCB diagram of the optical wireless video transceiver board, and (d) the optical wireless video transceiver physical diagram of the board.

We adjust the channel atmospheric turbulence level through the atmospheric turbulence simulator and experimentally analyze the impact of turbulence. The atmospheric turbulence simulator comprises a cooling surface and a heating surface, which are configured through the industrial computer of the turbulence control system. When a heating command is input on the turbulence control system, the heating system is digitally controlled; a temperature difference between the upper and lower sides of the atmospheric turbulence simulator is thereby induced, and the air in the airtight pool body moves rapidly, generating turbulence effects. According to the investigation and fitting results in [30], in the current atmospheric turbulence pool, the relationship between the temperature difference and the atmospheric coherence length ${r_0}$ is given by Eq. (1), and the change curve is shown in Fig. 3:

$${r_0} = 48 \times \Delta {T^{ - 0.81}}.$$

Fig. 3. Schematic diagram of atmospheric coherence length change curve [30].

Therefore, ${r_0}$ can be derived from the current temperature difference. Meanwhile, we choose three temperature differences, namely 20°C, 75°C, and 145°C. According to Eq. (1), for the atmospheric turbulence channel environment, the corresponding atmospheric coherence lengths are 4.24 cm, 1.45 cm, and 0.85 cm. In general, the higher the ${r_0}$ value, the better the atmospheric conditions.
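Eq. (1) and these three operating points can be checked in a few lines of Python:

```python
# r0 = 48 * dT^-0.81 (Eq. (1)), with dT in degrees Celsius and r0 in cm
for dT in (20, 75, 145):
    print(f"dT = {dT:3d} degC  ->  r0 = {48 * dT ** -0.81:.2f} cm")
# dT =  20 degC  ->  r0 = 4.24 cm
# dT =  75 degC  ->  r0 = 1.45 cm
# dT = 145 degC  ->  r0 = 0.85 cm
```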

Figures 2(c) and (d) depict the printed circuit board (PCB) design and the physical optical wireless video transmission board. The board is built around an Artix-7 series 7a100t-fgg484 FPGA as the main control chip. This chip provides a GTP high-speed data transceiver IP core, which supports a 0.8-6.6 Gbps transmission rate for video optical transceivers. Meanwhile, it exhibits low power consumption and a small package size, which facilitate the integration of hardware systems. In addition to the main control chip, the board is equipped with a MT41K128M16JT-125 DDR3 memory chip, an RTL8211FI-CG Gigabit Ethernet chip, DS90CR287/288 Camera-Link data conversion transceiver chips, an SFP photoelectric conversion module interface, an RJ45 Ethernet interface, and an SDR26 Camera-Link data interface. This hardware design supports GTP high-speed transceiving, Gigabit Ethernet, and Camera-Link video data transmission and reception, and it realizes the experimental requirements for Ethernet video transmission, spatial optical video transmission, and Camera-Link video acquisition. Under this premise, the transmission and reconstruction of compressed sensing images can be guaranteed.

Using the preceding design, we built a wireless spatial optical video transmission system, in which a computer processes images with a compressed sensing algorithm and transmits the data to the FPGA, and the data propagates through the simulated atmospheric turbulence channel in the form of signal light. After the receiving end collects the signal light, the video data stream is decoded and sampled by the FPGA, thereby realizing image reconstruction. The experiment realizes the complete working process of wireless spatial optical video transmission. From the reconstructed image quality at the receiving end, factors such as the transmission rate of the video data optical signal, the influence of atmospheric turbulence on transmission, and the performance of different compressed sensing algorithms during system operation can be effectively evaluated and analyzed, facilitating follow-up experimental work.

3. Optical wireless video transmission experiment and analysis

First, we further control the optical power at the receiving end by gradually increasing the amount of optical attenuation; thus, we can determine the limit optical power at which video data can still be completely received in the experimental environment. The measured limit optical power at the receiving end is -30.76 dBm, and the bit error rate of the link at this point is calibrated at 1.39 × 10−6. This step calibrates and quantifies the prerequisites for the successful establishment of an optical wireless transmission link. After calibrating the limit optical power of the link connection, we utilize the school badge image of the Beijing Institute of Technology (BIT), with a resolution of 256 × 256 pixels, to perform the video compression sensing, optical wireless video transmission, and reconstruction experiment. Figures 4 and 5 depict the reconstruction results of the transmission link in different connectivity states. Figure 4 indicates that under the weak-turbulence link, the quality of the image transmission and reconstruction results is good: the PSNR of the reconstruction result of each algorithm except GPSR exceeds 32 dB, and the 2DCS algorithm attains a maximum of 37.55 dB; only the GPSR algorithm yields 25.87 dB, which is considerably lower than its simulated performance. However, when the image data is transmitted in a high-turbulence atmospheric channel, the bit error rate fluctuates in this limit state, and a damaged image (Fig. 5) may be observed at the reconstruction end. The parameter indices of a damaged image are often relatively poor; the 2DCS algorithm, with the best reconstruction effect, corresponds to a 27.75 dB PSNR, and the PSNR of the reconstruction result of the GPSR algorithm is only 11.99 dB. It is worth noting that, due to the randomness of the atmospheric turbulence environment, not every frame transmitted in a high-turbulence atmospheric channel will be damaged; when the atmospheric coherence length decreases, the probability of a damaged image increases.

Fig. 4. Reconstructed images under the turbulent atmospheric channel with atmospheric coherence length ${r_0}$ = 4.24 cm: (a) image reconstructed by 2DCS (PSNR = 37.55 dB), (b) image reconstructed by 2DRP (PSNR = 32.62 dB), (c) image reconstructed by GPSR (PSNR = 25.87 dB), and (d) image reconstructed by SPL (PSNR = 32.34 dB).

Fig. 5. Damaged images reconstructed under the turbulent atmospheric channel with atmospheric coherence length ${r_0}$ = 0.85 cm: (a) image reconstructed by 2DCS (PSNR = 27.75 dB); (b) image reconstructed by 2DRP (PSNR = 18.42 dB); (c) image reconstructed by GPSR (PSNR = 11.99 dB); and (d) image reconstructed by SPL (PSNR = 18.66 dB).

Subsequently, to further quantify the image reconstruction quality, we selected four evaluation criteria and measured them at different algorithm sampling rates. The peak signal-to-noise ratio (PSNR) of the image represents the peak error between the reconstructed image data and the original image data, where $MA{X_I}$ represents the maximum gray value of a single pixel in the image and $MSE$ represents the mean squared error between the two images. The calculation formula is as follows [31]:

$$PSNR = 10 \cdot {\log _{10}}(\frac{{MAX_I^2}}{{MSE}}).$$
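A minimal implementation of Eq. (2), assuming 8-bit grayscale images ($MA{X_I}$ = 255) stored as NumPy arrays:

```python
import numpy as np

def psnr(ref, rec, max_i=255.0):
    """PSNR of Eq. (2): peak signal power over the mean squared error, in dB."""
    mse = np.mean((ref.astype(np.float64) - rec.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_i ** 2 / mse)
```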

Structural similarity index (SSIM) is an index that considers the similarity between the reconstructed image and the original image from three perspectives: luminance (l), contrast (c), and structure (s) [32]:

$$SSIM(x,y) = l{(x,y)^\alpha } \cdot c{(x,y)^\beta } \cdot s{(x,y)^\gamma }.$$
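In practice the three terms are rarely coded by hand; one common shortcut (an assumption here, not necessarily the implementation used in this work) is scikit-image's structural_similarity, which evaluates Eq. (3) with $\alpha = \beta = \gamma = 1$:

```python
from skimage.metrics import structural_similarity

def ssim(ref, rec):
    # data_range must match the pixel scale; 255 for 8-bit grayscale images
    return structural_similarity(ref, rec, data_range=255)
```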

The normalized mean-square error (NMSE) normalizes the mean-square error between the original image and the reconstructed image, where $nor(.)$ represents the normalization operation; the calculation method is as follows [33]:

$$NMSE = nor(\frac{1}{{mn}}\sum\limits_{i = 0}^{m - 1} {\sum\limits_{j = 0}^{n - 1} {{{[I(i,j) - K(i,j)]}^2}} } ).$$
$I(i,j)$ and $K(i,j)$ in Eq. (4) represent the corresponding pixels of the original image and the reconstructed image, respectively.
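A sketch of Eq. (4), taking the unspecified $nor(.)$ as division by the mean squared value of the original image, which is one common normalization convention:

```python
import numpy as np

def nmse(I, K):
    """Eq. (4): mean squared error normalized by the original image energy."""
    I = I.astype(np.float64)
    K = K.astype(np.float64)
    return np.mean((I - K) ** 2) / np.mean(I ** 2)
```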

Gradient magnitude similarity deviation (GMSD) is a full-reference image quality assessment (FR-IQA) method. It calculates the local gradient magnitude similarity to measure local image quality and then pools the local values into a global quality score. Equations (5)-(9) are quoted from [34]. The algorithm first uses the Prewitt operator to calculate the image gradients:

$${h_x} = \left[ \begin{array}{ccc} {1/3}&0&{ - 1/3}\\ {1/3}&0&{ - 1/3}\\ {1/3}&0&{ - 1/3} \end{array} \right],\quad {h_y} = \left[ \begin{array}{ccc} {1/3}&{1/3}&{1/3}\\ 0&0&0\\ { - 1/3}&{ - 1/3}&{ - 1/3} \end{array} \right].$$

Then, the gradient magnitudes of the reference image r and the distorted image d are calculated:

$${m_r}(i) = \sqrt {{{(r \otimes {h_x})}^2}(i) + {{(r \otimes {h_y})}^2}(i)} .$$
$${m_d}(i) = \sqrt {{{(d \otimes {h_x})}^2}(i) + {{(d \otimes {h_y})}^2}(i)} .$$

Gradient magnitude similarity (GMS) is as follows:

$$GMS(i) = \frac{{2{m_r}(i){m_d}(i) + c}}{{m_r^2(i) + m_d^2(i) + c}},$$
where c is a constant that prevents the denominator from being 0. Finally, the GMSD value is calculated from the obtained $GMS$ values, where $GMSM$ denotes the mean of the $GMS$ values over the N pixels:
$$GMSD = \sqrt {\frac{1}{N}\sum\limits_{i = 1}^N {{{(GMS(i) - GMSM)}^2}} } .$$
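The pipeline of Eqs. (5)-(9) can be sketched as follows; the constant c = 170 follows the value reported in [34] for 8-bit images (an assumption here), and the average-pooling pre-processing step of [34] is omitted for brevity:

```python
import numpy as np
from scipy.ndimage import convolve

# Prewitt kernels of Eq. (5); hy is the transpose of hx
hx = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]]) / 3.0
hy = hx.T

def gmsd(r, d, c=170.0):
    """GMSD of Eqs. (6)-(9): per-pixel gradient magnitude similarity,
    then the standard deviation around its mean (GMSM)."""
    def grad_mag(img):
        img = img.astype(np.float64)
        return np.sqrt(convolve(img, hx) ** 2 + convolve(img, hy) ** 2)  # Eqs. (6)-(7)
    mr, md = grad_mag(r), grad_mag(d)
    gms = (2 * mr * md + c) / (mr ** 2 + md ** 2 + c)                    # Eq. (8)
    return gms.std()                                                     # Eq. (9)
```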

In the experiment, data analysis was performed on the reconstructed image and the original image. The higher the values of the two parameters PSNR and SSIM, the better the image reconstruction effect. The lower the values of NMSE and GMSD, the better the quality of the reconstructed image.

First, we utilize the video optical wireless transmission system to cyclically send compressed sensing images of the BIT school badge image at 20 fps. We gradually modified the sampling rate utilized by the compressed sensing algorithm, collected ten images at each sampling rate, calculated the mean value of the evaluation indices, and evaluated the image quality indices of the reconstructed images at the receiving end. Figure 6 depicts statistics on the parameter indicators of the reconstructed images from the experimental results. We set the ${r_0}$ value of the atmospheric channel environment in the experiment to 4.24 cm and 0.85 cm. As shown in Fig. 6(a), considering the image reconstruction results of the school badge image at a sampling rate of 0.5, the PSNR value of the 2DCS algorithm attains 37.55 dB; the BCS-SPL algorithm performs best among the algorithms other than 2DCS, and the PSNR of the reconstruction result of the 2DRP algorithm is 32.62 dB. The SSIM values are shown in Fig. 6(b); the 2DCS algorithm is at least 5% higher than the other algorithms. Meanwhile, Fig. 6(c) indicates that the NMSE of 2DCS is on the order of 1 × 10−4, whereas those of all the other algorithms are on the order of 1 × 10−3. As shown in Fig. 6(d), the GMSD value of 2DCS is more than 3% lower than those of the other algorithms.

Fig. 6. Reconstruction results of optical wireless transmission of the school badge image, with atmospheric coherence length ${r_0}$ = 4.24 cm: (a) PSNR index statistics of the reconstructed image, (b) SSIM index statistics of the reconstructed image, (c) NMSE index statistics of the reconstructed image, and (d) GMSD index statistics of the reconstructed image.

Next, to further assess the performance of each algorithm for video transmission, we selected a set of natural videos, intercepted a total of 400 frames, and compressed them using the CS algorithms. We utilize the video optical wireless transmission system to sequentially send the compressed video data frames, and collect and reconstruct them at the receiving end. The PSNR and SSIM indicators of each reconstructed frame and their overall average values are compared across the different algorithms. The experimental results are shown in Figs. 7(a) and (b). For natural scene videos, the reconstruction results of 2DCS are generally more than 3 dB higher in PSNR and more than 3% higher in SSIM than those of the other algorithms.

Fig. 7. Optical wireless transmission reconstruction results of the video generated from a 400-frame image sequence, with atmospheric coherence length ${r_0}$ = 4.24 cm: (a) PSNR index statistics and average value of the reconstructed image for each frame (average values: 2DCS = 36.48 dB, 2DRP = 33.04 dB, SPL = 32.91 dB, GPSR = 26.2 dB), (b) SSIM index statistics and average value of the reconstructed image for each frame (average values: 2DCS = 0.9575, 2DRP = 0.9168, SPL = 0.9053, GPSR = 0.7363).

The preceding measurement results all indicate that the 2DCS method yields the reconstructed image quality closest to the original image before uncompressed transmission. In addition, the experimental results of the 2DRP and SPL algorithms are relatively close. Although the GPSR algorithm offers superior image compression and reconstruction quality in simulation, its performance is considerably degraded in spatial optical video transmission decoding and reconstruction, and its image quality is generally poor.

Regarding the experiments, we analyze the factors that lead to deviations in the quality of the reconstructed images. First, the FPGA chip exhibits low power consumption, high integration, and parallel processing in data transmission, making it well suited for high-speed, stable data transmission and reception in optical wireless video transmission. However, it cannot efficiently process floating-point numbers; therefore, utilizing the FPGA as a processor to transmit images processed by compressed sensing inevitably affects data accuracy. If the algorithm requires high data accuracy, the rounding errors will degrade the quality of the reconstructed image.

Considering the GPSR algorithm, according to [26], the solution of the CS problem can be regarded as a convex constrained optimization problem estimated from the observation $y = Ax + n$, where A denotes the measurement matrix, x denotes the signal to be estimated, and n represents Gaussian white noise. The GPSR algorithm formulates the unconstrained convex optimization problem as a bound-constrained quadratic programming (BCQP) problem by dividing the variable into positive and negative components $x = u - v$. Equations (10)-(12) are quoted from [26]:

$$\mathop {\min }\limits_{u,v} \frac{1}{2}\left\| {y - A(u - v)} \right\|_2^2 + \tau 1_n^Tu + \tau 1_n^Tv,\quad \mathrm{s.t.}\; u \ge 0,\; v \ge 0,$$
where ${u_i} = {({x_i})_ + }$ and ${v_i} = {( - {x_i})_ + }$, with ${( \cdot )_ + }$ denoting the positive-part operator, defined as ${(x)_ + } = \max (0,x)$; $\tau$ denotes a non-negative parameter in the convex-constrained optimization equation, and ${1_n} = {[1,1,\ldots ,1]^T}$ denotes a vector of n ones.

To express Eq. (10) as a BCQP problem, define $z = \left[ \begin{array}{c} u\\ v \end{array} \right]$, $b = {A^T}y$, $c = \tau {1_{2n}} + \left[ \begin{array}{c} { - b}\\ b \end{array} \right]$, and

$$B = \left[ \begin{array}{cc} {{A^T}A}&{ - {A^T}A}\\ { - {A^T}A}&{{A^T}A} \end{array} \right].$$

Based on the preceding definition, Eq. (10) can be rewritten as follows:

$$\mathop {\min }\limits_z {c^T}z + \frac{1}{2}{z^T}Bz \equiv F(z),\quad \mathrm{s.t.}\; z \ge 0.$$

Subsequently, the gradient projection (GP) iteration is applied: starting from an initial value $z^{(0)}$, the iteration yields the sparse solution of the linear inverse problem. Because the FPGA cannot handle floating-point numbers, rounding errors are often introduced in the process of transmitting the measurement matrix and observation values, and these errors exert an exceedingly strong impact on the iterative solution of Eq. (12).
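A minimal sketch of this gradient projection iteration on Eq. (12) follows; the fixed step size and iteration count are simplifications of the backtracking line search used by GPSR proper [26].

```python
import numpy as np

def gpsr_gp(A, y, tau, iters=200, alpha=1e-2):
    """Gradient projection on the BCQP of Eq. (12), with z = [u; v] >= 0."""
    n = A.shape[1]
    b = A.T @ y
    c = tau * np.ones(2 * n) + np.concatenate([-b, b])
    z = np.zeros(2 * n)                        # initial value z(0) = 0
    for _ in range(iters):
        u, v = z[:n], z[n:]
        g = A.T @ (A @ (u - v))                # B @ z computed without forming B
        grad = c + np.concatenate([g, -g])     # gradient of F(z) = c'z + z'Bz/2
        z = np.maximum(z - alpha * grad, 0.0)  # descent step, project onto z >= 0
    return z[:n] - z[n:]                       # recovered sparse signal x = u - v
```

Truncating the entries of A and y to lower precision before calling such a routine is a simple way to reproduce the rounding-error sensitivity discussed above.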

Second, there must be some bit errors in the transmission process of the spatial optical signal in the atmospheric turbulence channel; therefore, data transmission will be affected. When the data transmission exhibits bit errors, it will inevitably lead to a decrease in the quality of the reconstructed image at the receiving end. In the next step, we will analyze the impact of the bit error rate on spatial optical video transmission.

In video data transmission and communication work, the bit error rate is one of the most crucial transmission indicators and critically affects the quality of video transmission and reconstruction. Due to the presence of interference noise in the atmospheric channel, transmission errors are inevitable, and the bit error rate directly affects the video transmission quality. We keep the ${r_0}$ value unchanged at 4.24 cm. Subsequently, we utilize the bit error rate as the main measurement index and conduct experiments at three transmission rates (i.e., 1.25 Gbps, 2.5 Gbps, and 3.125 Gbps). First, the optical power is controlled by adjusting the optical attenuator during the video transmission experiment, and the experiment is performed to determine the change trend of the bit error rate under different transmission rates. Simultaneously, the eye diagram of the FPGA optical signal transmission is measured. The experimental results are depicted in Fig. 8.

Fig. 8. Optical video transmission experiments on the relationship between optical power and BER, with atmospheric coherence length ${r_0}$ = 4.24 cm: (a) statistics of optical power and BER at 1.25 Gbps, 2.5 Gbps, and 3.125 Gbps, (b) transmission eye diagram at BER = 5.7 × 10−5, (c) transmission eye diagram at BER = 6.645 × 10−9, and (d) transmission eye diagram at BER = 1.079 × 10−11.

Figure 8(a) indicates that at a 1.25 Gbps transmission rate, the bit error rate increases sharply when the optical power is < -30.79 dBm, and the bit error rate at a 2.5 Gbps transmission rate rises rapidly when the optical power is < -29.2 dBm. The 3.125 Gbps limit optical power is -26.1 dBm; once the optical power is lower than this value, the bit error rate of the link will increase rapidly. From the optical power and bit error rate experimental results, it can be observed that when the optical power decreases, different transmission rates exhibit their own inflection points. The preceding observation can be rationalized as follows: when the optical power decreases to a certain level, the link changes from a stable connection state to an unstable connection state, and the bit error rate increases rapidly in this range. When the selected optical transmission rate is higher, the impact of channel noise on the link is greater, and the low transmission rate is less affected by noise; therefore, the channel can be established at a lower optical power. Figures 8(b)-(d) depict the eye diagrams at different orders of magnitude of BER, which is convenient for observation of link transmission.

After evaluating the impact of the algorithm sampling rate, optical power at the receiving end, and bit error rate on video transmission, we utilized three optical transmission rates (i.e., 1.25 Gbps, 2.5 Gbps, and 3.125 Gbps) and three algorithms (i.e., SPL, 2DRP, and 2DCS) to conduct experiments, and we recorded the reconstructed image indicators of different transmission rates and algorithms; subsequently, we analyzed the relationship between the atmospheric coherence length and the performance of the CS reconstruction. The measurement results are as follows:

Figures 9(a), 10(a), and 11(a) depict the relationship between the atmospheric coherence length ${r_0}$ and the PSNR value. Meanwhile, Figs. 9(b), 10(b), and 11(b) depict the relationship between ${r_0}$ and the SSIM value. Figures 9 and 10 indicate that as the atmospheric coherence length increases, the reconstruction quality of each algorithm is gradually enhanced; when the degree of turbulence increases, the reconstruction quality of the video decreases. At the 1.25 Gbps and 2.5 Gbps data transmission rates, when the atmospheric coherence length exceeds 2 cm, the transmission reconstruction effect of the 2DCS algorithm is essentially saturated. The other algorithms can guarantee ideal video reconstruction quality only when the atmospheric coherence length attains 4 cm. The preceding observation is attributable to the global encryption code in the 2DCS algorithm; even if some data transmission errors occur, the algorithm can still exert a compensation effect.

Fig. 9. Experimental results of image reconstruction with different turbulence degrees at the 1.25 Gbps rate: (a) experimental results of the PSNR index, and (b) experimental results of the SSIM index.

Fig. 10. Experimental results of image reconstruction with different turbulence degrees at the 2.5 Gbps rate: (a) experimental results of the PSNR index, and (b) experimental results of the SSIM index.

Fig. 11. Experimental results of image reconstruction with different turbulence degrees at the 3.125 Gbps rate: (a) experimental results of the PSNR index, and (b) experimental results of the SSIM index.

Figure 11 indicates that when the 3.125 Gbps transmission rate is utilized for data transmission, the performance of each algorithm decreases. When the atmospheric coherence length is 4.24 cm, the PSNR index of the 2DCS compressed sensing video reconstruction decreases from 37.55 dB to 33.6 dB, whereas the SSIM index decreases from 0.9544 to 0.818. However, compared with the other algorithms, its performance is still superior.

In the specified channel, the PSNR and SSIM indicators of the videos reconstructed by the different algorithms are measured under different optical powers, which are controlled at the receiving end by adjusting the optical attenuation. In the experiment, due to the effect of atmospheric turbulence, the optical power at the receiving end is unsteady; it fluctuates within a range of approximately 2 dB. We record the midpoint of this range as the received optical power.

Figures 12(a), 13(a), and 14(a) depict the relationship between the optical power and the PSNR value. Meanwhile, Figs. 12(b), 13(b), and 14(b) depict the relationship between the optical power and the SSIM value. Figures 12-14 indicate that the video can be transmitted and received from the limit state of optical power up to the stable state of video transmission and reception (i.e., the corresponding 2DCS algorithm PSNR = 37.55 dB and SSIM = 0.9544; 2DRP algorithm PSNR = 33.73 dB and SSIM = 0.90; and SPL algorithm PSNR = 33.16 dB and SSIM = 0.9262). When the transmission reconstruction effect attains this index, it is assumed that the link transmission is completely stable and reaches the algorithm reconstruction limit.

Fig. 12. Experiments on the optical power and video quality of the 4.24 cm atmospheric coherence length channel: (a) statistics of the PSNR index experiment results, and (b) statistics of the SSIM index experiment results.

Fig. 13. Experiments on the optical power and video quality of the 1.45 cm atmospheric coherence length channel: (a) statistics of the PSNR index experiment results, and (b) statistics of the SSIM index experiment results.

Fig. 14. Experiments on the optical power and video quality of the 0.85 cm atmospheric coherence length channel: (a) statistics of the PSNR index experiment results, and (b) statistics of the SSIM index experiment results.

According to the experimental results, the higher the degree of atmospheric turbulence (i.e., the lower the atmospheric coherence length), the higher the optical power required for the complete and stable establishment of the link. When the atmospheric coherence length is 4.24 cm, an optical power of -29.1 dBm at the receiving end ensures link stability. When the atmospheric coherence length is reduced to 1.45 cm, the required optical power is -28.7 dBm. When the atmospheric coherence length decreases from 4.24 to 0.85 cm, an optical power of -28 dBm is required to maintain the complete stability of the link. All the aforementioned experiments demonstrate that the reconstructed video quality of the 2DCS algorithm over a stable link is significantly better than that of the other two algorithms. When the atmospheric coherence length decreases, the reconstructed video quality of each algorithm decreases; however, the 2DCS algorithm still exhibits the best video reconstruction performance.

In the experiments herein, four CS algorithms (i.e., SPL, GPSR, 2DRP, and 2DCS) are selected for evaluation. The influencing factors of spatial optical video transmission are analyzed and compared from the perspectives of the CS algorithm sampling rate parameters, the receiving-end optical power and its corresponding transmission bit error rate, the optical signal transmission rate, and the atmospheric coherence length of the turbulent channel. From the experiments, we identified methods of effectively utilizing the transmission link to enhance data transmission accuracy, enhancing the reconstructed video quality through algorithm optimization and encoding error correction, and matching the video pixel positions of the received data to optimize the reconstruction results. Thus, the study develops a novel research direction through which the processing performance of the system can be further enhanced.

4. Conclusion

This study proposes and experimentally analyzes a novel video transmission scheme based on a free-space optical wireless video transmission system. We successfully realized the optical wireless transmission and reconstruction of compressed video frame sequences at an optical power of -30.76 dBm and a link bit error rate of 1.39 × 10−6. First, an optical video transceiver board is designed based on the Artix-7 series FPGA chip, and the FPGA is utilized as the main processor for data processing and transmission; thus, system power consumption is reduced, and system integration is enhanced. Moreover, through experiments, we analyze the operating performance of each CS algorithm under different algorithm sampling rates, receiving-end optical powers, link bit error rates, and atmospheric turbulence conditions. We observed that the encrypt-then-compress (ETC) encoding yields a satisfactory enhancement in video reconstruction performance; for the 2DCS algorithm, the enhancement is attributable to its global coding characteristics. When the link transmission is relatively stable, its PSNR is 3-7 dB higher and its SSIM more than 5% higher than those of the other algorithms. Under strong turbulence with a 3.125 Gbps transmission rate and a 1.45 cm atmospheric coherence length, the PSNR value of the reconstructed single-frame image can still approach 30 dB when the optical power at the receiving end is -30 dBm. When the experimental variables are changed, its processing results also remain better than those of the other algorithms. These results indicate the feasibility of applying the CS algorithm to optical wireless video transmission, and they reveal the feasibility of free-space optical communication technology for wireless optical video transmission. The system utilizes the CS algorithm to reduce the amount of transmitted data and enhance the system's working efficiency. In addition, FPGA chips are relatively cheap compared with processors such as GPUs and DSPs, which provides reference value for the mass production and commercialization of the system.

Funding

Young Elite Scientists Sponsorship Program by CAST (YESS20220600); National Key Research and Development Program of the Ministry of Science and Technology (2021YFA0718804); National Natural Science Foundation of China (62105029, U2141231); State Key Laboratory Foundation of Applied Optics (SKLA02022001A11).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. A. Ahmad, S. Ahmad, M. H. Rehmani, et al., “A survey on radio resource allocation in cognitive radio sensor networks,” IEEE Commun. Surv. Tutorials 17(2), 888–917 (2015). [CrossRef]

2. C. Chen, “Internet of Video Things: Next-Generation IoT With Visual Sensors,” IEEE Internet Things J. 7(8), 6676–6685 (2020). [CrossRef]  

3. D. Y. Lee, S. Paul, C. G. Bampis, et al., “A Subjective and Objective Study of Space-Time Subsampled Video Quality,” IEEE Trans. on Image Process. 31, 934–948 (2022). [CrossRef]  

4. G. Krishnan, R. Joshi, T. O’Connor, et al., “Optical signal detection in turbid water using multidimensional integral imaging with deep learning,” Opt. Express 29(22), 35691–35701 (2021). [CrossRef]  

5. Y. Huang, G. Krishnan, T. O’Connor, et al., “End-to-end integrated pipeline for underwater optical signal detection using 1D integral imaging capture with a convolutional neural network,” Opt. Express 31(2), 1367–1385 (2023). [CrossRef]  

6. Q. Chen, W. Liu, and H. Wen, “An enhanced 2-D color space diversity using RGB-LED overlapping for optical camera communication,” Opt. Commun. 545, 129636 (2023). [CrossRef]  

7. T. Tsai, C. Chow, Y. Chang, et al., “130-m Image sensor based Visible Light Communication (VLC) using under-sample modulation and spatial modulation,” Opt. Commun. 519, 128405 (2022). [CrossRef]  

8. C. Ni, Y. Huang, and P. Chen, “A Hardware-Friendlyand High-Efficiency H.265/HEVC Encoder for Visual Sensor Networks,” Sensors 23(5), 2625 (2023). [CrossRef]  

9. D. K. J. B. Saini, S. D. Kamble, R. Shankar, et al., “Fractal video compression for IOT-based smart cities applications using motion vector estimation,” Measurement: Sensors. 26, 100698 (2023). [CrossRef]  

10. X. Ma, “High-resolution image compression algorithms in remote sensing imaging,” Displays 79, 102462 (2023). [CrossRef]  

11. S. Zhou, X. Deng, C. Li, et al., “Recognition-Oriented Image Compressive Sensing With Deep Learning,” IEEE Trans. Multimedia 25, 2022–2032 (2023). [CrossRef]  

12. K. Hong, K. Jin, A. Song, et al., “Low sampling rate digital dechirp for Inverse Synthetic Aperture Ladar imaging processing,” Opt. Commun. 540, 129482 (2023). [CrossRef]  

13. S. Zhao, Y. Jiang, and B. Jalali, “Enhanced OFDM communication using optical dynamic range compression,” Opt. Commun. 508, 127773 (2022). [CrossRef]  

14. H. Yao, X. Ni, Z. Liu, et al., “Experimental demonstration of 4-PAM for high-speed indoor free-space OW communication based on cascade FIR-LMS adaptive equalizer,” Opt. Commun. 426, 490–496 (2018). [CrossRef]

15. Y. Chen, C. Zhang, M. Cui, et al., “Joint compressed sensing and JPEG coding based secure compression scheme in OFDM-PON,” Opt. Commun. 510, 127901 (2022). [CrossRef]  

16. G. Ye, M. Liu, and M. Wu, “Double image encryption algorithm based on compressive sensing and elliptic curve,” Alexandria Eng. J. 61(9), 6785–6795 (2022). [CrossRef]  

17. S. Nan, X. Feng, Y. Wu, et al., “Remote sensing image compression and encryption based on block compressive sensing and 2D-LCCCM,” Nonlinear Dyn 108(3), 2705–2729 (2022). [CrossRef]  

18. X. Chai, J. Fu, Z. Gan, et al., “An image encryption scheme based on multi-objective optimization and block compressed sensing,” Nonlinear Dyn 108(3), 2671–2704 (2022). [CrossRef]  

19. B. Wu, D. Xie, F. Chen, et al., “A multi-party secure encryption-sharing hybrid scheme for image data base on compressed sensing,” Digital Signal Processing 123, 103391 (2022). [CrossRef]  

20. S. Zheng, X. Zhang, J. Chen, et al., “A High-Efficiency Compressed Sensing-Based Terminal-to-Cloud Video Transmission System,” IEEE Trans. Multimedia 21(8), 1905–1920 (2019). [CrossRef]  

21. J. F. Monserrat, D. Martin-Sacristan, F. Bouchmal, et al., “Key technologies for the advent of the 6G,” in Proceedings of the 2020 IEEE Wireless Communications and Networking Conference Workshops (WCNCW) (IEEE, 2020).

22. Y. Su, J. Meng, T. Wei, et al., “150 Gbps multi-wavelength FSO transmission with 25-GHz ITU-T grid in the mid-infrared region,” Opt. Express 31(9), 15156–15169 (2023). [CrossRef]  

23. H. Yao, X. Ni, C. Chen, et al., “Performance of M-PAM FSO communication systems in atmospheric turbulence based on APD detector,” Opt. Express 26(18), 23819–23830 (2018). [CrossRef]  

24. H. Yao, C. Chen, X. Ni, et al., “Analysis and evaluation of the performance between reciprocity and time delay in the atmospheric turbulence channel,” Opt. Express 27(18), 25000–25011 (2019). [CrossRef]  

25. S. Mun and J. E. Fowler, “Block compressed sensing of images using directional transforms,” in Proceedings of the 2009 16th IEEE International Conference on Image Processing (ICIP) (2009).

26. M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright, “Gradient Projection for Sparse Reconstruction: Application to Compressed Sensing and Other Inverse Problems,” IEEE J. Sel. Top. Signal Process. 1(4), 586–597 (2007). [CrossRef]  

27. B. Zhang, L. Yang, K. Wang, et al., “Block Compressed Sensing Using Two-Dimensional Random Permutation for Image Encryption-then-Compression Applications,” in 14th IEEE International Conference on Signal Processing (ICSP) (2018).

28. B. Zhang, D. Xiao, Z. Zhang, et al., “Compressing Encrypted Images by Using 2D Compressed Sensing,” in 2019 IEEE 21st International Conference on High Performance Computing and Communications; IEEE 17th International Conference on Smart City; IEEE 5th International Conference on Data Science and Systems (HPCC/SmartCity/DSS) (2019).

29. L. Gan, T. T. Do, and T. D. Tran, “Fast compressive imaging using scrambled block Hadamard ensemble,” in Proceedings of the European Signal Processing Conference, Lausanne, Switzerland (2008).

30. B. Li, H. Yao, L. Zhang, et al., “32 Gbps QPSK Optical Communication Technology Based on a New Equalizer in Atmospheric Turbulence,” IEEE Access 9, 130751 (2021). [CrossRef]  

31. D. Poobathy and R. M. Chezian, “Edge Detection Operators: Peak Signal to Noise Ratio Based Comparison,” I. J. Image, Graphics and Signal Processing 6, 55–61 (2014). [CrossRef]

32. Z. Wang, A. C. Bovik, H. R. Sheikh, et al., “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004). [CrossRef]  

33. J. Pęksiński and G. Mikołajczak, “The synchronization of the images based on normalized mean square error algorithm,” Advances in Multimedia and Network Information System Technologies 80, 15–24 (2010). [CrossRef]  

34. W. Xue, L. Zhang, X. Mou, et al., “Gradient magnitude similarity deviation: a highly efficient perceptual image quality index,” IEEE Trans. on Image Process. 23(2), 684–695 (2014). [CrossRef]  
