Abstract

Turbulence resistance is a significant research area for orbital angular momentum shift keying-based free-space optical communication (OAM-SK-FSO). We put forward a two-step combinational system to receive high-fidelity image data over atmospheric turbulence (AT) channels. First, an AT-detector-based multi-CNN (ATDM-CNN) demodulator is proposed, which differs substantially from the traditional single-CNN (S-CNN) demodulator: the AT detector estimates the AT strength, and the corresponding AT-specific CNN demodulator is then activated to recognize the incident OAM modes. Extensive numerical simulations compare the recognition rates of ATDM-CNN and S-CNN, and the results indicate a tremendous improvement owing to the ATDM-CNN demodulator. Based on ATDM-CNN's significant advantage in OAM recognition, a significant optimization of image-data quality becomes possible in the subsequent correction. As an option, the residual information errors are corrected by jointly using the rank-order adaptive median filter (RAMF) and the very-deep super-resolution (VDSR) network, with minor information loss even in severe ATs. The quality improvement resulting from RAMF-VDSR is tested. In conclusion, the proposed two-step system provides much higher quality of received image data in the OAM-SK-FSO link.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Free-space optical communication (FSO) has been proposed as the most promising approach to the ‘last mile’ communication problem [1,2], compared with traditional wireless communication [3], because of its large bandwidth, license-free operation, and easy deployment. Techniques based on orbital angular momentum (OAM) [4] allow us to increase the channel capacity in FSO because of the theoretically unlimited range of available states. The novel modulation format OAM-shift keying (OAM-SK) can modulate information into multiplexed vortex beams (VB) carrying diverse OAM modes. OAM-SK has the unique advantages of low equipment cost, high photon efficiency, and useful information security [5]. Demodulation plays a vital role in the OAM-SK FSO link: a qualified OAM-SK demodulator can accurately recognize the OAM modes carried in the VB. However, the wavefront aberrations caused by random atmospheric turbulence (AT) result in information mixture between adjacent modes, which introduces channel crosstalk, signal degradation, and even loss of information. These adverse effects make it difficult for the demodulator to demultiplex the OAM modes at the receiver and degrade the communication quality.

The pattern recognition (PR)-based OAM-SK demodulator is becoming an exciting research area for addressing the problem of AT since it can avoid inefficient and expensive coherent demodulation. A PR-based demodulator recognizes the OAM modes from their different diffractograms. For example, a self-organizing mapping (SOM) network has been applied to recognize 16-array OAM modes over a distance of 3.0 km [6]. Soon after, the authors implemented the SOM to test the effect of AT in a 143 km FSO link between two islands [7].

Convolutional neural networks (CNN) are the most attractive PR approach due to their excellent performance and simple implementation. CNNs are neural networks that use convolutions in place of general matrix multiplication in at least one of their layers [8]. CNN-based OAM-SK demodulation has been extensively studied [9–19]. A CNN-based demodulator was tested against traditional demultiplexing at different levels of simulated AT in [9]. In addition, the authors in [10] compared the performance of CNN-based demodulators with that of other machine-learning classifiers (KNN, NBC, and BP-ANN). The above findings laid the foundation for the CNN-based demodulator's advantage.

This type of research on CNN-based OAM-SK demodulators has been conducted extensively since then. Jin Li et al. [11] put forward a CNN-based joint technique of AT detection and adaptive demodulation, which estimates the strength of AT in addition to performing OAM-SK demodulation. A feed-forward neural network (FNN) combined with a 2-dimensional fork grating as the feature extractor is used to identify OAM modes in [12]. Later, the performance of different ResNets on OAM mode recognition was compared. In [13], a deep neural network with 18150 input neurons and 3 hidden layers is exploited to classify diversiform OAM modes with nearly 70% accuracy. In [14], Zhanwen Liu et al. propose the superhigh-resolution OAM recognition neural network (ORNN) to precisely separate modes into the sub-divisible space between adjacent eigenmodes. Besides, the far-field diffractograms are transformed into the radon cumulative distribution transform (R-CDT) space in [15] and then classified by a low-computation-burden ‘shallow’ network. In addition, the unique diffractive deep neural network (D2NN) is adopted as a more efficient optical-device-based classifier to recognize the OAM modes [16].

However, CNN-based demodulators perform poorly in severe ATs, because the OAM beams are so seriously damaged that they are far beyond a single CNN's recognition ability, which causes serious information errors. Systems equipped with an auxiliary unit for the OAM-SK demodulator have been proposed [17–19]. The adaptive optics system (AOS) is used to solve the problem of severe ATs in [18]. The AOS can reshape the wavefront before the OAM beam enters the CNN demodulator. In this two-step way, the negative influence of AT can be suppressed. Similarly, the joint scheme in [19] uses the Gerchberg-Saxton (GS) algorithm to improve the beam quality before demodulating. The drawbacks of AOS and GS are:

  • 1. They must depend on a wavefront-compensating device, such as the spatial light modulator (SLM) or the micro-deformable mirror (MDM), which is too costly for wide use;
  • 2. The probe beam (PB) is necessary for wavefront detection, which increases the operational difficulty in the long-distance FSO propagation.

Qinghua Tian [17] provides a scheme of turbo code combined with a CNN demodulator to reduce the information loss in severe ATs. However, the decoding of the turbo code depends on the channel model, which is hard to obtain in a practical OAM-SK-FSO link. Also, the output of the OAM demodulator is a digitized sequence, which prevents the turbo decoder from achieving its best effectiveness. The above works inspire us to design a combinational system to resist AT. However, a simple but effective data-receiving scheme for the OAM-SK link is desperately needed.

This study focuses on image data transmission. To avoid the complicated turbo code [17], the costly AOS [18] and the inconvenient PB [18,19], effective image-data self-correction is applied as post-processing in this study. However, its performance will still be hindered by the severe information errors caused by a CNN demodulator's low recognition rate, because too many data errors go beyond the correcting ability. The critical problem is to provide a high OAM mode recognition rate, after which the error corrector has hope of correcting the residual information errors. By making the best use of CNNs and considering the practical use of image transmission, a combinational system is designed to obtain high-fidelity data.

  • 1) A high enough OAM mode recognition rate is critical because too many information errors place a heavy burden on further information correction. Unfortunately, the traditional CNN demodulator cannot provide a high recognition rate (especially in severe ATs), which is the primary cause of unqualified communication. The AT-detector-based multi-CNN (ATDM-CNN) demodulator is proposed to achieve an excellently high recognition rate. Different from before, each demodulator member of ATDM is separately trained by the data distributed in a different AT range (rather than the whole dataset); the most appropriate demodulator member is activated based on the detected AT strength while the others stay dormant. With the help of the AT detector, each demodulator member masters more focused OAM features of a specific AT strength, by which it is expected to provide a higher recognition rate. Simple convolutional coding rather than complicated coding [20,21] is also adopted. No other types of information or devices are needed in ATDM-CNN. The AT detector serves to reapportion the data pool for training the demodulator members rather than working merely as a separate functional unit [11]. The significance of ATDM-CNN lies not only in better recognition but also in paving the way for further information-loss correction.
  • 2) The OAM-SK demodulator alone is insufficient to counteract severe ATs [17–19]. Fortunately, with the excellent performance of ATDM-CNN, the sporadic errors hidden in the received data can be thoroughly checked out and amended by the rank-order adaptive median filter (RAMF) [22]. RAMF degrades the image resolution because some detail information is lost. Inspired by the fast-developing technology of deep-learning vision, we adopt a very deep super-resolution (VDSR) network [23] to restore the detail information. VDSR can output a high-resolution (HR) image from a low-resolution (LR) image by extracting the marginal information; it uses residual learning and a high learning rate to optimize the very deep network quickly. To get the best performance from VDSR, we generate a unique training dataset containing images output by RAMF to match the concrete task. It is also an attempt to recover information loss with deep-learning technology in the OAM-SK-FSO link, as opposed to the complicated code [17].

The proposed idea helps the receiving system escape the technical confines of the S-CNN demodulator. Each part is improved according to the requirement. It is not a simple detachable combination; the two steps must work in tight coupling, and the simulation results show that the scheme does not work well without the excellent recognition provided by ATDM-CNN. By using the two-step receiving system, we can finally restore the image data with small loss in severe ATs. The costly AOS and PB are omitted, which helps drive the popularization of the OAM-SK-FSO link. Based on a large number of comparative simulations, we test the two-step receiving system against the traditional S-CNN demodulator in different ATs. At step one, compared with the S-CNN demodulator, we verify the high recognition rate of the ATDM-CNN demodulator. Then, at step two, we examine the RAMF-VDSR system, which reinforces the images with satisfying fidelity.

This paper is organized as follows. In part 2, we review the system principle of the OAM-SK-FSO link, offer the scheme to construct an ATDM-CNN demodulator in detail, and introduce the image-information recovery by the hybrid RAMF-VDSR scheme in severe ATs. In part 3, the ATDM-CNN's recognition rate is compared with that of the S-CNN, and the output image quality of the two-step receiving system is assessed. Finally, conclusions are drawn in part 4.

2. System principle

In this section, the model of the OAM-SK-FSO link combining the CNN-based demodulator is depicted in 2.1, and then the ATDM-CNN demodulator (2.2) and RAMF-VDSR system (2.3) will be illustrated respectively.

2.1 System model of OAM-SK-FSO system

The model of the OAM-SK-FSO system is illustrated as Fig. 1.

 

Fig. 1. Numerical model of the OAM-SK-FSO system, including the transmitter, the atmospheric channel, and the different receivers, i.e., the traditional receiver equipped with the S-CNN demodulator and the proposed receiver with the two-step system


2.1.1 Transmitter

At the transmitter, the bit stream is encoded into a sequence of OAM mode values. These values are then mapped into diverse phase masks loaded on the spatial light modulator (SLM), which modulates Gaussian light beams into the corresponding VBs. The required phase is created by a computer-generated hologram (CGH). The LG beam with radial index p (assumed to be 0) and topological charge l at the transmitter is [24]

$$u_{LG}^l(r,\theta ) = \frac{1}{{\sqrt 2 }}{(\frac{{2p!}}{{(|l|+ p)!}})^{\frac{1}{2}}}\frac{1}{{{w_0}}}{(\frac{r}{{{w_0}}})^{|l|}}L_p^{|l|}(\frac{{{r^2}}}{{w_0^2}})\textrm{exp} ( - \frac{{{r^2}}}{{2w_0^2}})\textrm{exp} (il\theta ),$$
$i = \sqrt { - 1}$ is the imaginary unit, $(r,\theta )$ is the polar coordinate, ${w_0}$ is the beam waist, $L_p^{|l|}$ is the generalized Laguerre polynomial, and $\textrm{exp} (il\theta )$ is the helical phase distribution. The selected OAM modes should be less sensitive to the effects of the AT. For the sake of convenience, the tested OAM states {1, −2, 3, −5}, which constitute ${2^4} = 16$ different modes with characteristic features, are chosen as the basic states in this paper [17]. Four sequential bits are encoded into an OAM mode in this condition; the multiplexed OAM beam consisting of different topological charges is
$${u_{multi - OAM}}(r,\theta ) = \sum\limits_{1, - 2,3, - 5} {{\beta _l}} u_{LG}^l(r,\theta ),\quad {\beta _l} = 0,1,$$
where ${\beta _l}$ is the encoded bit.
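For concreteness, Eqs. (1) and (2) can be sampled on a numerical grid as follows (a minimal numpy sketch with $p = 0$, so $L_p^{|l|} = 1$; the grid size, window and waist values here are illustrative and not the exact simulation parameters):

```python
import numpy as np
from math import factorial

def lg_mode(l, w0=0.02, n=128, extent=0.1, p=0):
    """Sample the LG field of Eq. (1) on an n x n grid (p = 0, so L_p^{|l|} = 1)."""
    x = np.linspace(-extent / 2, extent / 2, n)
    X, Y = np.meshgrid(x, x)
    r, theta = np.hypot(X, Y), np.arctan2(Y, X)
    amp = (1 / np.sqrt(2)) * (2 * factorial(p) / factorial(abs(l) + p)) ** 0.5 \
          * (1 / w0) * (r / w0) ** abs(l) * np.exp(-r ** 2 / (2 * w0 ** 2))
    return amp * np.exp(1j * l * theta)       # helical phase exp(il*theta)

def multiplex(bits, states=(1, -2, 3, -5), **kw):
    """Eq. (2): superpose the basic states whose encoded bit beta_l is 1."""
    return sum(b * lg_mode(l, **kw) for b, l in zip(bits, states))
```

For instance, the bit group 1010 activates the states l = 1 and l = 3, and the camera observes the intensity $|u|^2$ of their superposition.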

2.1.2 Atmospheric channel

In the atmospheric channel, the random refractive index profile distorts the wavefront of VBs and further induces the signal fading and crosstalk. The widely utilized Hill-Andrews (HA) model [25] is employed here. The AT is emulated by random phase screens loaded with the spectrum of fluctuation in the refractive index

$${\theta _n}({k_x},{k_y}) = 0.033C_n^2[1 + 1.802\sqrt {\frac{{k_x^2 + k_y^2}}{{k_l^2}}} - 0.254{(\frac{{k_x^2 + k_y^2}}{{k_l^2}})^{\frac{7}{{12}}}}] \times \frac{{\textrm{exp} ( - \frac{{k_x^2 + k_y^2}}{{k_l^2}})}}{{{{(k_x^2 + k_y^2 + k_0^2)}^{\frac{{11}}{6}}}}}$$
$({k_x},{k_y})$ denotes the spatial frequency (rad/m), ${k_l} = \frac{{3.3}}{{{l_0}}}$, ${k_0} = \frac{{2\pi }}{{{L_0}}}$, ${l_0}$ and ${L_0}$ are the inner scale and outer scale of turbulence, respectively. The phase spectrum $\Phi ({k_x},{k_y})$, which depends on the distance $\Delta z$, is given by
$$\Phi ({k_x},{k_y}) = 2\pi {k^2}\Delta z{\theta _n}({k_x},{k_y}),$$
$k = {{2\pi } / \lambda }$ and $\lambda$ is the wavelength. The phase screen $\phi (x,y)$ with the dimension of $N \times N$ is
$$\phi (x,y) = \Re \{ {{\cal F}^{ - 1}}[{C_{NN}}\sigma ({k_x},{k_y})]\} ,$$
where
$${\sigma ^2}({k_x},{k_y}) = {(\frac{{2\pi }}{{N\Delta x}})^2}\Phi ({k_x},{k_y}),$$
${{\cal F}^{ - 1}}({\bullet} )$ is the inverse Fourier transform, $\Re$ corresponds to taking the real part of the function, and ${C_{NN}}$ denotes the $N \times N$ dimensional complex random number array with zero mean and unit variance; the subscript “NN” represents the distribution over a sampling grid of size $N \times N$, and $\Delta x$ is the grid interval of the random phase screen. To better simulate the impact of AT, we construct a numerical simulation of laser beam propagation through a series of random phase screens [26]. Each phase screen is a thin sheet that adjusts the phase of the beam. The propagation of the beam between phase screens separated by the interval $\Delta z$ is simulated by Fresnel diffraction [27]
$${u_{multi - OAM}}(z + \Delta z) = {{\cal F}^{ - 1}}[{\cal F}({u_{multi - OAM}}(z) \times \textrm{exp} (i\phi (x,y))) \times {\rm H}(\Delta z)],$$
${\cal F}({\bullet} )$ is the Fourier transform and ${\rm H}({\bullet} )$ is the Fresnel transfer function (TF). After M random phase screens, we obtain the OAM beam transmitted through the whole distance $z = M\Delta z$. In the numerical model, we follow the parameters in Table 1.
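The split-step model of Eqs. (3)–(7) can be sketched as below (a hedged numpy sketch; the FFT scaling convention for the screen amplitude varies between implementations, and the default parameter values are placeholders rather than the exact values of Table 1):

```python
import numpy as np

def phase_screen(N=128, dx=3e-4, Cn2=5e-14, dz=100.0, wl=1.55e-6,
                 l0=0.01, L0=50.0, seed=None):
    """One random phase screen following Eqs. (3)-(6) (Hill-Andrews spectrum)."""
    rng = np.random.default_rng(seed)
    k = 2 * np.pi / wl
    kx = 2 * np.pi * np.fft.fftfreq(N, dx)          # spatial frequency grid (rad/m)
    KX, KY = np.meshgrid(kx, kx)
    k2 = KX ** 2 + KY ** 2
    kl, k0 = 3.3 / l0, 2 * np.pi / L0
    theta_n = 0.033 * Cn2 * (1 + 1.802 * np.sqrt(k2) / kl
                             - 0.254 * (k2 / kl ** 2) ** (7 / 12)) \
              * np.exp(-k2 / kl ** 2) / (k2 + k0 ** 2) ** (11 / 6)   # Eq. (3)
    # Eq. (4) and Eq. (6): sigma^2 = (2*pi/(N*dx))^2 * Phi, Phi = 2*pi*k^2*dz*theta_n
    sigma = np.sqrt((2 * np.pi / (N * dx)) ** 2 * 2 * np.pi * k ** 2 * dz * theta_n)
    C = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    return np.real(np.fft.ifft2(C * sigma)) * N ** 2   # Eq. (5); scaling is an assumption

def fresnel_step(u, dz, dx, wl):
    """Eq. (7): one Fresnel transfer-function propagation step."""
    N = u.shape[0]
    fx = np.fft.fftfreq(N, dx)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wl * dz * (FX ** 2 + FY ** 2))   # paraxial TF
    return np.fft.ifft2(np.fft.fft2(u) * H)

def propagate(u, M=10, dz=100.0, dx=3e-4, wl=1.55e-6, Cn2=5e-14, seed=0):
    """Alternate M phase screens and Fresnel steps over the distance z = M * dz."""
    for m in range(M):
        screen = phase_screen(u.shape[0], dx, Cn2, dz, wl, seed=seed + m)
        u = fresnel_step(u * np.exp(1j * screen), dz, dx, wl)
    return u
```

Since the transfer function has unit modulus, each Fresnel step conserves the total beam power, which is a convenient sanity check for the implementation.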


Table 1. Parameters used to simulate propagation through the setups

2.1.3 Receiver with S-CNN demodulator

At the traditional receiver, intensity images of OAM beams carrying diverse modes can be considered as diverse categories. The CCD camera captures the OAM light images, and the CNN-based demodulator transfers them into the original bits online.

The architecture of the S-CNN demodulator is illustrated as Fig. 2. The CNN consists of one input layer, several convolutional and max-pooling layers, and then some fully connected layers follow. Finally, a softmax layer is attached to output the recognizing results [8].

 

Fig. 2. Architecture of CNN demodulator


Each input OAM light image is resized to 128×128 and standardized before entering the network. The standardized profile of the OAM beam intensity

$${I^{\textrm{standardized}}}(x,y) = \frac{{I(x,y)}}{{255.0}} - 0.5$$
is fed into the CNN. $I(x,y) \in [0,255]$ is the detected grey-scale image. By using the 7×7@16 convolutional layer and the 2×2@16 max-pooling layer, the input image is convolved into 16 feature maps. The operations of convolution and max-pooling are repeated several times. The rectified linear unit (ReLU) is used as the activation function in the convolutional layers. The nodes in the fully connected layer are correlated with the nodes in the last pooling layer. The recognitions of the superposed OAM states are output by the softmax classifier
$${S_j} = \frac{{\textrm{exp} ({l_j})}}{{\sum\limits_{i = 1}^{num} {\textrm{exp} ({l_i})} }}\quad j = 1,2,\ldots ,num;$$
where ${l_j}$ is the jth output of the last fully connected layer, $\sum\limits_{i = 1}^{num} {{S_i}} = 1$ and $num$ is the number of categories. A regularizing dropout unit with keep probability 0.5 is adopted to avoid overfitting. Finally, the category with the largest ${S_j}$ is taken as the recognized result. All parameters are optimized by minimizing the cross-entropy cost function
$$y ={-} \sum\limits_{i = 1}^{num} {y_i^{label}log({S_i})} ,$$
where ${y^{label}}$ is the label corresponding to the jth category
$${y^{label}} = {[0,\ldots ,1,0\ldots ,0]^T},$$
where the jth element is one while the others are all zeros. We also use this well-performing CNN architecture in the AT detector.
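The pre-processing and output stages of Eqs. (8)–(11) reduce to a few lines (a numpy sketch; the convolutional layers themselves are omitted and only the scalar operations are shown):

```python
import numpy as np

def standardize(img):
    """Eq. (8): map a uint8 grey-scale image into [-0.5, 0.5]."""
    return img.astype(np.float64) / 255.0 - 0.5

def softmax(logits):
    """Eq. (9): softmax over the outputs of the last fully connected layer."""
    e = np.exp(logits - np.max(logits))   # shift for numerical stability
    return e / e.sum()

def cross_entropy(S, label_idx):
    """Eq. (10) with the one-hot label of Eq. (11)."""
    return -np.log(S[label_idx])
```

The recognized result is simply the index of the largest softmax output, and the cross-entropy is minimized over the training set.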

The OAM light images are distorted by the AT, which results in recognition errors, so the received image can be contaminated by error pixels. The S-CNN demodulator is insufficient to correctly recognize the OAM modes in severe ATs, and the number of error pixels in the received images soars. In the next part, we put forward the two-step image-receiving system to reduce these errors and illustrate the construction of the proposed system in detail.

2.2 Step one: ATDM-CNN demodulator

The ATDM-CNN demodulator is made up of a CNN-based AT detector [11] and multiple CNN demodulator members. However, different from [11], the AT detector is used as a vital part rather than a separate functional unit: it detects the current AT strength and activates the most appropriate demodulator member. The active demodulator recognizes the OAM modes while the others stay dormant in a time slot. With the help of the AT detector, each demodulator member can learn more focused features for a specific AT strength and outperform the simple S-CNN model. Also, widely distributed AT strengths (5e-16m−2/3 to 5e-13m−2/3), covering the majority of cases, are taken into account. This also avoids the problem of dataset reconstruction that arises if discrete AT strengths with different intervals are considered [11].

2.2.1 AT detector

After being transmitted over different AT channels, the wavefront phase of OAM light beams has experienced diverse perturbations, which further changes the distortion degree of the received beams. Meanwhile, the intensity patterns of the beams carrying distinct OAM modes are different from each other. Therefore the intensity images of OAM beams contain both relevant AT and OAM mode information [11].

Figure 3 displays the wavefront perturbation caused by 4 classic ATs where $C_n^2$ ranges from 5e-16m−2/3 to 5e-13m−2/3. With the increasing AT strength, the peak-valley value of the wavefront phase increases from ∼0.4rad to above 10rad, so that the VBs are distorted gradually, and the appearances of the intensity images of OAM beams carrying the same mode become more and more indistinct. As shown in Fig. 4, the encoded data stream is marked in blue while the carried mode is marked in red, each column displays the anamorphic intensity images of the OAM lights carrying different modes, which could be regarded as the different categories. On the other hand, for each row, the patterns of intensity images corresponding to varied OAM modes are distinct from each other and they also can be considered as diverse classes.

 

Fig. 3. Wavefront perturbations (rad) caused by random phase screens with $C_n^2$ valued at (a) 5e-16m−2/3, (b) 5e-15m−2/3, (c) 5e-14m−2/3, (d) 5e-13m−2/3.


 

Fig. 4. Intensity images of the received OAM beams carrying the 16 kinds of OAM modes (consist of the states of {1, −2, 3, −5}) over the simulated 1,000m AT channels with $C_n^2$ valued in (a) 5e-16m−2/3, (b) 5e-15m−2/3, (c) 5e-14m−2/3, (d) 5e-13m−2/3.


The AT detector benefits from the different distortion degrees of the received beams. For practical use, all possible AT strengths need to be considered rather than only the classic ones [11], and this makes AT detection much more difficult. In medium and severe ATs, similar wavefront perturbations cause analogous appearances of the OAM intensity images. Thus, those indistinguishable ATs having similar characteristics will be combined into a merged category. To train the AT detector, we created a sample pool consisting of 480,000 OAM intensity images. The $C_n^2$ evenly covers the range of $(5e - 16{m^{ - 2/3}},5e - 13{m^{ - 2/3}})$, which covers weak, medium and severe AT cases. As mentioned in 2.1, 16 different OAM modes are included in the sample pool, and each mode occupies 30,000 images. We divided the range of $C_n^2$ into 8 equal parts, each marked with a number (AT1-AT8). Each part can be regarded as an AT category; the AT detector with the architecture illustrated in Fig. 2 is trained using the whole sample pool, and the variable $num$ in Eq. (9) is 8.
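Labelling a sample with its AT category then amounts to binning $C_n^2$ (a small sketch; a linear split of the range into 8 equal parts is assumed here, since the split type is not stated):

```python
import numpy as np

def at_label(Cn2, low=5e-16, high=5e-13, K=8):
    """Assign a Cn^2 value to one of K categories AT1..ATK.
    A linear division of [low, high] into K equal parts is an assumption."""
    edges = np.linspace(low, high, K + 1)
    idx = int(np.searchsorted(edges, Cn2, side='right')) - 1
    return min(max(idx, 0), K - 1) + 1        # clamp, then convert to 1-based AT label
```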

After complete training, the crosstalk matrix $\boldsymbol{\chi }$ of the AT detector is shown in Fig. 5. The element $\boldsymbol{\chi }(m,n)$ denotes the probability of the real ATm being recognized as ATn, where $m,n \in [1,8]$ and $m,n \in {N^ + }$. The diagonal elements of $\boldsymbol{\chi }$ give the probability of correct AT detection while the others indicate erroneous recognitions. It can be deduced that the weak (AT1 and AT2) and severe (AT8) ATs are the most likely to be correctly recognized, for the elements $\boldsymbol{\chi }(1,1)$, $\boldsymbol{\chi }(2,2)$ and $\boldsymbol{\chi }(8,8)$ are close to 1.0. On the contrary, the medium ATs tend to be misclassified. Notably, the recognition rates of AT5, AT6, AT7 are lower while the crosstalks $\boldsymbol{\chi }(m,n)$ ($m \ne n,m,n = 5,6,7,8$) between AT5, AT6, AT7 and AT8 are considerably high. Because of the similar wavefront perturbations from AT5 to AT8, it is tough for the AT detector to extract distinct features of AT5-8 from the OAM intensity images. These indistinguishable ATs (for example, AT5-AT8) can be combined into a joint category featured by their common characteristics.

 

Fig. 5. Random crosstalk matrix of AT detector


Following the above suggestion, we summarize a method to erase the unnecessary boundaries between similar AT categories. Locating the smallest diagonal value of the crosstalk matrix $\boldsymbol{\chi }$, we integrate the corresponding AT category with its most appropriate neighbor. For example, in the beginning, the smallest diagonal element $\boldsymbol{\chi }(7,7)$ is 0.10, and the corresponding category is AT7. Then we must choose which neighbor (AT6 or AT8) should be combined with AT7. We follow the principle that AT7 will be assimilated with the neighbor that shares the larger crosstalk value with AT7. Comparing the left and right adjacent elements $\boldsymbol{\chi }(7,6)$ and $\boldsymbol{\chi }(7,8)$, one can find that AT8 shares the larger crosstalk value with AT7, so we combine AT8 and AT7 into a merged category, see Fig. 6(a). The merged crosstalk matrix after the first combination is shown in Fig. 6(b); integrating AT7 and AT8 generates a high-recognition-accuracy category AT7-8. Meanwhile, the error-prone category AT7 is cancelled. The concrete operation is: the newly generated row is computed by averaging the rows AT7 and AT8, and the newly generated column is obtained by summing the columns AT7 and AT8. After some repeated operations, the error-prone categories are gradually assimilated by their neighbors, and the off-diagonal crosstalk values approach 0. Finally, we get the integrated crosstalk matrix with large diagonal values $\boldsymbol{\chi }(i,i)$, which means high AT detecting accuracy. For the termination condition, we use the following criterion: each diagonal value in the crosstalk matrix must not be smaller than $\kappa$. The resulting matrices with $\kappa = 0.7$ and $\kappa = 0.8$ are respectively shown in Fig. 6(c) and (d). Taking the result with $\kappa = 0.8$ for example, the ATs recognized as AT1 are relabelled AT-I, the ATs falling into AT2 and AT3 are packaged as AT-II, and the rest are regarded as AT-III.
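The merging procedure described above can be sketched as follows (a hypothetical helper; `merge_categories` is our own name, with rows of `chi` indexing true categories and columns indexing detected ones, as in Fig. 5):

```python
import numpy as np

def merge_categories(chi, kappa=0.8):
    """Iteratively merge error-prone AT categories until every diagonal
    element of the crosstalk matrix is at least kappa."""
    chi = chi.astype(float).copy()
    groups = [[i] for i in range(chi.shape[0])]       # which original ATs each row holds
    while chi.shape[0] > 1 and np.diag(chi).min() < kappa:
        i = int(np.argmin(np.diag(chi)))              # most error-prone category
        # choose the neighbour that shares the larger crosstalk with category i
        left = chi[i, i - 1] if i > 0 else -np.inf
        right = chi[i, i + 1] if i < chi.shape[0] - 1 else -np.inf
        j = i - 1 if left >= right else i + 1
        a, b = min(i, j), max(i, j)
        chi[a, :] = (chi[a, :] + chi[b, :]) / 2       # new row: average of the two rows
        chi[:, a] = chi[:, a] + chi[:, b]             # new column: sum of the two columns
        chi = np.delete(np.delete(chi, b, axis=0), b, axis=1)
        groups[a] += groups[b]
        del groups[b]
    return chi, groups
```

Each entry of `groups` lists the original AT indices absorbed into one merged category, which directly yields the AT-I/AT-II/AT-III labelling for a given $\kappa$.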

 

Fig. 6. Crosstalk matrix of AT categories. (a) Original crosstalk matrix. The shadow area indicates the AT categories (AT7 and AT8) will be combined. (b) Crosstalk matrix after the first combination. The shadow area indicates the combined AT category (AT7-8). (c) Result crosstalk matrix with $\kappa = 0.7$. (d) Result crosstalk matrix with $\kappa = 0.8$.


2.2.2 Multiple CNN demodulator members

The S-CNN demodulator is trained by data evenly sampled from the data pool because it aims at learning general features covering all possible ATs. Different from the S-CNN, the objective of each CNN in ATDM-CNN is to extract the unique features of the corresponding integrated AT category, so a key problem is designing a reasonable training-data distribution for each CNN demodulator member. In the ATDM-CNN demodulator, we design a CNN member for each integrated AT category; for example, CNN demodulator X is prepared for the integrated AT category AT-X (X = I, II, III; $\kappa = 0.8$).

In this part, we recommend a simple method based on the posterior probability to construct the training data for each CNN demodulator. The only information available is the prior probability matrix $\boldsymbol{\chi }$ in Fig. 5, which describes the recognizing accuracy of the AT detector. However, the information we need is the corresponding posterior probability matrix ${\boldsymbol{\chi }_p}$, whose element ${\boldsymbol{\chi }_p}(m,n)$ means the probability of the real ATn given the recognized ATm. The posterior probability matrix ${\boldsymbol{\chi }_p}$ can easily be obtained by applying the Bayes formula to the prior probability matrix $\boldsymbol{\chi }$ (see Fig. 7-1).

$${\boldsymbol{\chi }_p}(n,m) = \frac{{p(m)\boldsymbol{\chi }(m,n)}}{{\sum\limits_{i = 1}^8 {p(i)\boldsymbol{\chi }(i,n)} }},\quad m,n = 1,\ldots ,8,$$
where $p({\bullet} )$ denotes the probability of the related real AT category.
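Eq. (12) amounts to one Bayes step on the crosstalk matrix (a sketch; a uniform prior over the 8 true AT categories is assumed when `prior` is not given):

```python
import numpy as np

def posterior_matrix(chi, prior=None):
    """Eq. (12): posterior[n, m] = P(true ATm | detected ATn),
    given chi[m, n] = P(detected ATn | true ATm)."""
    K = chi.shape[0]
    p = np.full(K, 1.0 / K) if prior is None else np.asarray(prior, dtype=float)
    joint = p[:, None] * chi                 # joint[m, n] = p(m) * chi(m, n)
    return (joint / joint.sum(axis=0)).T     # normalise per detected category, transpose
```

Each row of the posterior matrix sums to one, since conditioned on a detection the true category must be one of the eight.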

 

Fig. 7. Sampling data and training the ATDM-CNN demodulator.


Based on the conclusion of 2.2.1, we divide the posterior probability matrix ${\boldsymbol{\chi }_p}$ into three parts AT-I, AT-II and AT-III (see Fig. 7-2), concretely ${\boldsymbol{\chi }_p}(1,:)$ for AT-I, ${\boldsymbol{\chi }_p}(2:3,:)$ for AT-II and ${\boldsymbol{\chi }_p}(4:8,:)$ for AT-III. Subsequently, the sampling weight ${\boldsymbol{\omega }_{\textrm{AT - X}}}$ for the integrated category AT-X (X = I, II, III) can be obtained by summing the rows of each part (Fig. 7-3),

$$\begin{aligned} {\boldsymbol{\omega }_{\textrm{AT - I}}} &= sum({\boldsymbol{\chi }_p}(1,:)) = (0.93,0.09,0.00,0.00,0.00,0.00,0.00,0.00),\\ {\boldsymbol{\omega }_{\textrm{AT - II}}} &= sum({\boldsymbol{\chi }_p}(2:3,:)) = (0.04,0.81,0.87,0.26,0.02,0.00,0.00,0.00),\\ {\boldsymbol{\omega }_{\textrm{AT - III}}} &= sum({\boldsymbol{\chi }_p}(4:8,:)) = (0.05,0.05,0.19,0.73,1.08,1.23,0.98,0.71). \end{aligned}$$
The sampling weights ${\boldsymbol{\omega }_{\textrm{AT - I}}}$, ${\boldsymbol{\omega }_{\textrm{AT - II}}}$ and ${\boldsymbol{\omega }_{\textrm{AT - III}}}$ provide the guidelines to sample training data from the data pool for the integrated categories AT-I, AT-II, and AT-III, respectively. For example, the training data for the AT-II category is composed of 4% samples from AT1, 81% from AT2, 87% from AT3, 26% from AT4, and 2% from AT5 (Fig. 7-4). AT-I and AT-III are similar to AT-II, but the elements in ${\boldsymbol{\omega }_{\textrm{AT - X}}}$ larger than 1.0 should be truncated to 1.0. In this way, each CNN demodulator would be trained by the OAM light images in a certain distortion degree (Fig. 7-5), so that the CNN demodulator can focus on extracting the unique features of a certain AT category, while the samples in the other distortion degrees (other AT strength) will not cause interference to the training.
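Eq. (13) and the truncation rule can be expressed as follows (a sketch; `groups` is assumed to list, per merged category, the 0-based detected-AT rows of the posterior matrix that belong to it):

```python
import numpy as np

def sampling_weights(chi_p, groups):
    """Eq. (13): per-AT sampling weights for each merged category.
    chi_p rows index detected ATs, columns index true ATs;
    groups lists which detected rows form each merged category (0-based)."""
    weights = []
    for rows in groups:
        w = chi_p[rows, :].sum(axis=0)       # sum the rows belonging to the category
        weights.append(np.minimum(w, 1.0))   # truncate weights above 1.0 to 1.0
    return weights
```

For the $\kappa = 0.8$ grouping (AT-I = {AT1}, AT-II = {AT2, AT3}, AT-III = {AT4…AT8}), `groups` would be `[[0], [1, 2], [3, 4, 5, 6, 7]]` in 0-based indexing.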

The scheme is very different from the joint model [11] for the following reasons: 1. To significantly increase the practical value, the proposed system aims at covering all possible AT strengths rather than only some specified AT strengths [11]. Precisely predicting every AT strength is very difficult, so the outputs of the AT detector cannot simply be labeled with specified AT strengths but rather with ranges of AT strengths (see Fig. 6); 2. The recognition rate of the joint model [11] loses qualified performance in severe ATs. To achieve better results, the dataset is reapportioned into sub-datasets (datasets I, II, III in Fig. 7) that contain the OAM features in weak, moderate and severe ATs, respectively. The later simulations in 3.1 indicate that the trained OAM-SK demodulators perform much better in weak, moderate and severe ATs, respectively, which is also critical to the subsequent information-loss restoration (step two, section 2.3).

No other types of data or devices are needed except the OAM dataset to build the ATDM-CNN. By creatively using the AT detector's information as feedback, the original dataset is apportioned into sub-datasets distributed over different AT strengths. Each demodulator member is expected to learn more focused features of specific AT cases. Later simulation results verify that ATDM-CNN can provide a much higher recognition rate, which not only reduces the transmission errors but also provides great convenience for further correction. The idea is inspired by locally weighted learning, with modifications for straightforward implementation. Besides, the AT detector provides a ready-made baseline for dividing the local fields, which largely simplifies the training. In addition, when an OAM mode enters an OAM-SK demodulator member, the next OAM mode enters the AT detector simultaneously, so the time spent is reduced. The later simulations will sufficiently demonstrate the superiority of the proposed ATDM-CNN demodulator.

2.3 Step two: Image error elimination and resolution enhancement

We offer a scheme (RAMF-VDSR) to achieve the information-loss restoration once ATDM-CNN is introduced. With the improved recognition, the error data caused by the OAM-SK link is expected to be corrected completely. Influenced by the varying ATs, the error bits are dispersedly distributed [6,12,17]. RAMF [22] is an effective way to eliminate the dispersed errors with minimum distortion of the original information, while VDSR can restore the detailed information corroded by RAMF. Some alternative methods [28,29] claim to achieve the same effect but are not probed here.

2.3.1 Error pixels elimination by RAMF

RAMF can adjust the filtering window size adaptively according to the surrounding pixels. Each pixel P will be processed by the workflow illustrated in Fig. 8. The upper limit of the filtering window length is ${L_{\max }} = 7$. ${P_{\max }}$ and ${P_{\min }}$ are the maximum and minimum pixel values in the filtering window, while ${P_{median}}$ is the median pixel value in the filtering window.

 

Fig. 8. Workflow of RAMF

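The adaptive-window decision flow of Fig. 8 can be sketched in pure Python as follows. This follows the common formulation of the adaptive median filter [22]; the exact branch order is an assumption rather than a transcription of Fig. 8:

```python
# Sketch of the rank-order adaptive median filter (RAMF) step described
# above, with L_max = 7 as in the text. Branching follows the standard
# adaptive median filter of [22] (an assumption about Fig. 8's details).

def ramf_pixel(img, x, y, l_max=7):
    """Filter the pixel img[y][x]; img is a 2-D list of gray values."""
    h, w = len(img), len(img[0])
    p = img[y][x]
    size = 3
    while size <= l_max:
        half = size // 2
        window = [img[j][i]
                  for j in range(max(0, y - half), min(h, y + half + 1))
                  for i in range(max(0, x - half), min(w, x + half + 1))]
        p_min, p_max = min(window), max(window)
        p_med = sorted(window)[len(window) // 2]
        if p_min < p_med < p_max:           # the median is not an impulse
            # keep P if it is not an impulse itself, else replace by median
            return p if p_min < p < p_max else p_med
        size += 2                           # enlarge the window and retry
    return p_med                            # window length limit reached

def ramf(img, l_max=7):
    return [[ramf_pixel(img, x, y, l_max) for x in range(len(img[0]))]
            for y in range(len(img))]

# A 255-valued impulse surrounded by ~100-level pixels is removed:
noisy = [[100, 101, 102],
         [ 99, 255, 100],
         [101, 100,  98]]
print(ramf(noisy)[1][1])  # 100: the impulse is replaced by the local median
```

Note how non-impulse pixels pass through unchanged, which is why RAMF distorts the original information so little while erasing dispersed errors.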

However, the resolution degradation caused by RAMF has a slight negative effect on the quality of the received data, so VDSR is recommended to relieve this problem.

2.3.2 LR image enhancement by VDSR

VDSR addresses the problem of generating an HR image from a given LR image, commonly referred to as single-image super-resolution (SISR). VDSR uses a sizeable receptive field to take broad image context into account, and it resolves the issue of slow convergence with residual learning and gradient clipping. The structure of VDSR is illustrated in Fig. 9: a pair of layers (convolutional and nonlinear) is cascaded repeatedly. An LR image passes through the layers and is transformed into an HR image. The network predicts a residual image $\textbf{r}$; the addition of the LR input $\textbf{x}$ and the residual $\textbf{r}$ gives the desired output $\textbf{y}$. We use 64 filters for each convolutional layer. Most feature values after applying the rectified linear unit (ReLU) are zero [23].

 

Fig. 9. The network structure of VDSR; 19 pairs of layers are used.


The training data of the VDSR is generated as follows:

  • A large number of HR images are collected, covering a wide range of scenes (commonly animals, scenery, and figures);
  • The LR images are obtained by filtering the HR images, with filtering window lengths of 3, 5, and 7, respectively;
  • For each sample pair, the LR image is taken as the input x, while the HR image serves as the output label y.
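The pair construction above can be sketched as follows; a plain fixed-window median filter is used as a simplified stand-in for the RAMF-based filtering (an assumption, not the paper's exact pipeline):

```python
# Sketch of the VDSR training-pair construction: each HR image is
# filtered with window lengths 3, 5 and 7 to produce degraded LR inputs
# x, each paired with the original HR label y. median_filter here is a
# plain fixed-window median, simplified from the RAMF filtering.

def median_filter(img, size):
    h, w, half = len(img), len(img[0]), size // 2
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            window = sorted(img[j][i]
                            for j in range(max(0, y - half), min(h, y + half + 1))
                            for i in range(max(0, x - half), min(w, x + half + 1)))
            row.append(window[len(window) // 2])
        out.append(row)
    return out

def build_pairs(hr_images, sizes=(3, 5, 7)):
    """Return a list of (x, y) = (LR input, HR label) training samples."""
    return [(median_filter(hr, s), hr) for hr in hr_images for s in sizes]

hr = [[10, 200, 10], [10, 10, 10], [10, 10, 10]]
pairs = build_pairs([hr])
print(len(pairs))  # three window sizes -> three pairs per HR image
```

Each HR image thus yields one training pair per window length, so the network sees degradations of several severities, matching the varying blur that RAMF introduces.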

Given a training dataset $\{ {x^i},{y^i}\} _{i = 1}^N$, the goal is to learn a model f that predicts an estimate of the target HR image, $\hat{y} = f(x)$. The mean squared error $\frac{1}{2}||y - f(x)|{|^2}$, averaged over the training set, is minimized.

To achieve the best performance, RAMF and VDSR are used jointly. Moreover, the training dataset is specially constructed by filtering the HR images with RAMF, so that the VDSR network is trained to match our application. RAMF-VDSR is a beneficial complement to the ATDM-CNN demodulator, especially in severe ATs. RAMF-VDSR and ATDM-CNN must work in tight coupling: the excellent performance of the ATDM-CNN demodulator is the key to effective error correction by RAMF-VDSR. In other words, RAMF-VDSR loses its qualified performance without ATDM-CNN. The simulation results provide the related verification.

3. Numerical results and analysis

In this part, we simulate the effects of the ATDM-CNN demodulator and the RAMF-VDSR system, and compare the performance of the proposed two-step system with that of a traditional receiving system equipped with only the S-CNN demodulator. In 3.1, we test the recognition rate of the ATDM-CNN demodulator against that of the S-CNN demodulator. We then probe the performance of the RAMF-VDSR system on information-loss restoration in 3.2. Finally, some results obtained by combining S-CNN with the RAMF-VDSR system are also presented; they offer proof that the RAMF-VDSR system achieves qualified information restoration only in tight coupling with the ATDM-CNN demodulator.

3.1 Comparison between ATDM-CNN and S-CNN demodulators

We compare the recognition rate of the ATDM-CNN demodulator with that of the traditional S-CNN demodulator. The S-CNN demodulator is trained on the 480,000 images of the data pool, while ATDM-CNN demodulators with termination conditions $\kappa = 0.7$ and $\kappa = 0.8$ are both trained using the scheme described in 2.2. Six different values of $C_n^2$ (5.00e-16m−2/3, 1.115e-13m−2/3, 2.225e-13m−2/3, 3.335e-13m−2/3, 4.445e-13m−2/3, and 5.00e-13m−2/3) are tested, and 10,000 random experiments are conducted for each AT to calculate the mean recognition rate. The comparative results of OAM mode recognition are presented in Table 2.


Table 2. Recognition rates in different ATs

The ATDM-CNN demodulator substantially outperforms the S-CNN demodulator in recognition rate. Take the ATDM-CNN demodulator with $\kappa = 0.8$ for reference. In weak ATs (5.00e-16m−2/3), the recognition rate of S-CNN is much the same as that of ATDM-CNN, because the OAM light images are not severely distorted and the features of each mode remain obvious, so OAM mode recognition is relatively easy for both demodulators. In medium ATs, the ATDM-CNN demodulator gradually becomes superior to the S-CNN demodulator. For example, when $C_n^2$ is 2.23e-13m−2/3, the recognition rate of the ATDM-CNN demodulator is ∼0.95, which is ∼0.04 higher than that of the S-CNN demodulator (∼0.90).

The gaps in severe ATs (for example, 5.00e-13m−2/3) are apparent. The OAM light images suffer severe damage, and the mode-classification task may exceed S-CNN's capability, which severely degrades communication quality. As shown in Table 2, the recognition rate of the ATDM-CNN demodulator reaches 0.8484 in severe AT, which is 0.18 higher than that of S-CNN (only ∼0.66). On the whole, the results of S-CNN are consistent with those in [11], because the AT detector in [11] works as a separate functional unit rather than cooperating with the OAM demodulator to raise the recognition rate. Each CNN member of the ATDM-CNN demodulator is trained separately on a sub-dataset corresponding to a certain degree of AT aberration, extracted from the distorted OAM image dataset (as illustrated in Fig. 7). Each CNN member therefore extracts more features exclusive to a particular AT strength than the S-CNN demodulator does, effectively avoiding interference from other AT categories. Hence, in severe ATs, the activated CNN member provides a much higher recognition rate than the S-CNN demodulator.

In order to obtain a more direct view, Fig. 10 presents the random crosstalk matrices $\boldsymbol{\zeta }$ of OAM mode recognition when $C_n^2$ is 5.00e-13m−2/3. The element $\boldsymbol{\zeta }(i,j)$ (i, j = 1,…,16) indicates the probability that OAM mode j is recognized given that OAM mode i was transmitted. The diagonal elements $\boldsymbol{\zeta }(i,i)$ (i = 1,…,16) of the ATDM-CNN demodulator are closer to 1 than those of the S-CNN, meaning that many more OAM modes are correctly recognized when the ATDM-CNN demodulator is used. Meanwhile, the crosstalk elements $\boldsymbol{\zeta }(i,j)$ (i ≠ j) of the ATDM-CNN demodulator are much smaller than those of the S-CNN, indicating fewer recognition errors in severe ATs. We can conclude that the ATDM-CNN demodulator copes better with severe AT influence: its advantage is prominent, and the number of error bits is sharply reduced in this way.
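For illustration, the mean recognition rate and the total crosstalk can be read directly off such a matrix. The 3-mode matrix below is a toy example, not data from Fig. 10 (the paper's matrices are 16 × 16):

```python
# zeta[i][j] is the probability that transmitted OAM mode i is recognized
# as mode j; the mean recognition rate is the average diagonal element,
# and everything off the diagonal is crosstalk.

def mean_recognition_rate(zeta):
    n = len(zeta)
    return sum(zeta[i][i] for i in range(n)) / n

def total_crosstalk(zeta):
    """Sum of off-diagonal probabilities, i.e. all recognition errors."""
    return sum(zeta[i][j] for i in range(len(zeta))
               for j in range(len(zeta)) if i != j)

zeta = [[0.90, 0.06, 0.04],
        [0.05, 0.85, 0.10],
        [0.02, 0.08, 0.90]]
print(round(mean_recognition_rate(zeta), 4))  # 0.8833
```

A demodulator with a brighter diagonal in Fig. 10 therefore scores higher on this metric, which is exactly the gap between ATDM-CNN and S-CNN in severe ATs.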

 

Fig. 10. Random crosstalk matrices of transmitted data with superposed OAM states when $C_n^2$ is 5.00e-13m−2/3. (a) Random crosstalk matrix of the S-CNN demodulator. (b) Random crosstalk matrix of the ATDM-CNN demodulator when $\kappa = 0.8$.


Figures 11(a)-(e) offer groups of comparative received images; the received panorama images are placed in the bottom-right corner, and the regions of interest are magnified for convenient observation. Intuitively, the image information reconstructed by the ATDM-CNN demodulator has higher quality and less information loss than that of the S-CNN demodulator, since the former yields sparser errors in the received data. We further use the peak signal-to-noise ratio (PSNR) as an objective evaluation metric for the imaging, defined as [30]

$$MSE(a,b) = \frac{1}{{N \times M}}\sum\limits_{x = 0}^{N - 1} {\sum\limits_{y = 0}^{M - 1} {{{(a(x,y) - b(x,y))}^2}} } ,$$
and
$$PSNR = 10{\log _{10}}[\frac{{\max Va{l^2}}}{{MSE(a,b)}}],$$
where $a(x,y)$ and $b(x,y)$ are the intensity values of the original and the reconstructed images at position $(x,y)$, and $N \times M$ is the size of the image. The higher the PSNR, the better the image quality. Table 3 shows the corresponding PSNRs for Fig. 11.
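The two definitions above translate directly into code; a minimal sketch, where `max_val` is the peak intensity (255 for 8-bit images):

```python
# Direct implementation of the MSE and PSNR definitions above.
import math

def mse(a, b):
    """Mean squared error between two equal-sized 2-D lists of intensities."""
    n, m = len(a), len(a[0])
    return sum((a[x][y] - b[x][y]) ** 2
               for x in range(n) for y in range(m)) / (n * m)

def psnr(a, b, max_val=255):
    """PSNR in dB; undefined (division by zero) for identical images."""
    return 10 * math.log10(max_val ** 2 / mse(a, b))

orig = [[100, 100], [100, 100]]
recv = [[100, 110], [100, 100]]   # one pixel off by 10 -> MSE = 25
print(round(psnr(orig, recv), 2))
```

Fewer recognition errors mean smaller intensity differences, hence a smaller MSE and a higher PSNR, which is how the gains in Table 3 arise.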

 

Fig. 11. Six groups of received images using the S-CNN (top) and the ATDM-CNN (bottom) when $C_n^2$ is 3.34e-13m−2/3, 3.89e-13m−2/3, and 4.45e-13m−2/3, respectively (from left to right).



Table 3. PSNRs of received images by S-CNN and ATDM-CNN demodulator

The performance gap in PSNR between ATDM-CNN and S-CNN is entirely distinct. The PSNRs achieve average increments of 8.37dB when $C_n^2$ is 3.34e-13m−2/3, 10.78dB when $C_n^2$ is 3.89e-13m−2/3, and 10.27dB when $C_n^2$ is 4.45e-13m−2/3, which indicates fewer OAM mode recognition errors and a significant improvement in data quality.

However, PSNRs still tend to deteriorate in severe ATs even when the ATDM-CNN demodulator is in use. For example, the PSNR drops to its lowest value of 22.83dB when $C_n^2$ is 4.45e-13m−2/3 (group (e)); this is much better than that of S-CNN but may not yet be sufficient. The reason originates from the bottleneck of the CNN-based demodulator: the recognition rate cannot be further increased in extremely severe ATs. To relieve the pressure on the CNN-based demodulator, we later adopt RAMF-VDSR to improve the quality of the received data.

Here, we run the model on an E5-2620 CPU with GPU acceleration (RTX2080TI). Groups of tests are carried out, with 200 consecutive OAM modes processed in each group; we then calculate the average time per OAM mode. The result (∼7.61e-5s per OAM mode, see Table 4) is very close to that of [12] (∼7.5e-5s per OAM mode), which demonstrates that the proposed demodulator provides a feasible way to rapidly identify the OAM modes of VBs [12]; it shows great potential for optical OAM communication. Rapidly developing specialized processors, e.g., the tensor processing unit (TPU), promise to decrease the time spent even further.


Table 4. Time spent in OAM-SK demodulation by using GPU acceleration

3.2 Further correction of RAMF-VDSR

RAMF-VDSR would suffer a significant setback if used only with the traditional S-CNN demodulator, because the excessive transmission errors in severe ATs would exceed its correction capability. As shown in 3.1, the quality of the data received by ATDM-CNN is much improved, leaving room for significant further optimization. In this part, we use the RAMF-VDSR system to verify ATDM-CNN's performance, especially in severe ATs. The data output by the ATDM-CNN demodulator, RAMF, and VDSR under different ATs (3.34e-13m−2/3, 3.89e-13m−2/3, and 4.45e-13m−2/3) are tested.

For each group in Fig. 12, comparing the first and second columns, we observe that RAMF effectively eliminates the dispersed errors in the ATDM-CNN output images. However, some detailed information is lost to the filtering. For example, in group (a), when $C_n^2$ is 4.45e-13m−2/3 (the third row), the ATDM-CNN output still contains some errors; the error pixels on the butterfly wing degrade the data quality. After RAMF filtering, these residual errors are erased, but the resolution is degraded, so the edges and details of the butterfly wing are much blurrier than in the original image. We then use the VDSR network to recover the lost detail, and the effect is remarkable: comparing the second and third columns shows that the detailed information of the image is effectively restored, which greatly benefits from VDSR.

 

Fig. 12. Groups of images sequentially output by the ATDM-CNN demodulator, RAMF, and VDSR (from left to right) when $C_n^2$ is 3.34e-13m−2/3, 3.89e-13m−2/3, and 4.45e-13m−2/3, respectively (from top to bottom in each group).


Table 5 presents the corresponding PSNRs of the resulting data. The PSNRs of the VDSR output show a clear increment over those of the ATDM-CNN output, achieving average increments of 3.425dB when $C_n^2$ is 3.34e-13m−2/3, 5.315dB when $C_n^2$ is 3.89e-13m−2/3, and 9.777dB when $C_n^2$ is 4.45e-13m−2/3.


Table 5. PSNRs of images output by ATDM-CNN demodulator, RAMF, and VDSR

In severe ATs, RAMF contributes most of the increase in PSNR, because it erases almost all the information errors and recovers the outline of the data. VDSR provides a smaller improvement in PSNR, but it is also important because it focuses on restoring the detail information, an essential optimization of data quality. It is extremely difficult to recover image data affected by severe ATs precisely and without any loss, but the proposed method improves the received data to a qualified level. Although some losses are unavoidable, the data received by the combinational system in severe ATs is much better than when only the traditional S-CNN demodulator is equipped.

3.3 Using RAMF-VDSR combining S-CNN demodulator

It has been pointed out that the significant improvement brought by the ATDM-CNN demodulator plays a key role in the combinational scheme. To verify this point solidly, this part presents results when RAMF-VDSR is combined only with the S-CNN demodulator. The tests between the S-CNN and ATDM-CNN demodulators are conducted under the AT case of 4.45e-13m−2/3, with identical RAMF-VDSR systems for both OAM-SK demodulators. The comparative analysis covers two aspects: the objective metric (PSNR) and the image quality.

Table 6 shows the related PSNRs. When $C_n^2$ is 4.45e-13m−2/3, the PSNRs decrease by 6.17dB, 6.35dB, 8.08dB, 5.88dB, 5.45dB, and 7.32dB, respectively, if S-CNN is used instead of ATDM-CNN. The reason is that S-CNN's low recognition rate produces over-dense information errors, far beyond RAMF's correction capacity. Furthermore, the VDSR output images cannot get rid of the negative effects of the residual errors; on the contrary, VDSR makes the error details sharper, the opposite of our expectation.


Table 6. PSNRs of received images when $C_n^2$ is 4.45e-13m−2/3

The concrete image comparisons are illustrated in Fig. 13, using the images of a butterfly wing as an example. In severe ATs (4.45e-13m−2/3), the data received with the S-CNN demodulator is trapped by over-dense information errors. Obvious noise is visible in the S-CNN output image (Fig. 13(b1)); these errors are so dense that RAMF cannot erase them thoroughly (Fig. 13(c1)), and some visible residual errors remain. The following VDSR further sharpens these remaining errors, so VDSR finally outputs an image with many clear errors (Fig. 13(d1)). By contrast, the ATDM-CNN demodulator yields a much better output with fewer errors (Fig. 13(b2)); these sparse errors are easily erased by RAMF (Fig. 13(c2)), so no errors are enhanced by VDSR, and we successfully obtain a high-fidelity image. More details of (d1) and (d2) are magnified ((e1) ∼ (g1), (e2) ∼ (g2)) to show the gaps between S-CNN and ATDM-CNN. Almost no noise can be found when ATDM-CNN is used; if ATDM-CNN is replaced by S-CNN, the apparent errors (circled in red) are disastrous. In summary, we build a two-step combinational data-receiving scheme for the OAM-SK-FSO image link. For image data propagating through OAM-SK-FSO links, the ATDM-CNN demodulator is the core that achieves much less information loss. With a high OAM mode recognition rate, the collaborative RAMF-VDSR can significantly correct the remaining information loss. The idea is inspired by the combinational scheme for OAM-SK demodulation, but both complicated operations and difficult realizations are avoided.

 

Fig. 13. Comparison between a series of output images by using the S-CNN demodulator and the ATDM-CNN demodulator.


4. Conclusions

This study puts forward a high-fidelity image data receiving scheme for the OAM-SK-FSO link. First, the ATDM-CNN demodulator is introduced, in which an AT detector detects the AT strength and then one of the CNN members is activated to recognize the incident OAM modes. The AT strength determines which CNN member is activated, and the training data for each CNN is generated under the guidance of the AT detector. With the ATDM-CNN demodulator, the recognition rate of the OAM-SK demodulator increases by 15%, and the PSNRs increase by ∼8dB to ∼10dB in severe ATs. With the significant improvement in OAM recognition, a great reduction of transmission information loss becomes possible. Benefiting from the effective RAMF-VDSR information-loss restoration system, the remaining data errors are almost completely corrected with only minor loss, even in severe ATs. A specialized training set for VDSR is also designed to obtain the best performance. The results in 3.2 indicate a qualitative leap in PSNR when RAMF-VDSR is used in tight coupling with the ATDM-CNN demodulator; the increment reaches ∼9dB when $C_n^2$ is 4.45e-13m−2/3. Moreover, to emphasize that the whole system should function as one, the necessary tests are provided in 3.3: without the critical cooperation of ATDM-CNN, RAMF-VDSR cannot offer satisfactory performance. Overall, the two-step combinational image data receiving system far surpasses the traditional S-CNN-based receiving system, especially in OAM-SK-FSO links influenced by severe ATs.

Funding

China Postdoctoral Science Foundation (BR0300115); National Natural Science Foundation of China (52041502, 61533012, 91748120).

Disclosures

The authors declare no conflicts of interest.

References

1. K. P. Peppas and P. T. Mathiopoulos, “Free-Space Optical Communication With Spatial Modulation and Coherent Detection Over H-K Atmospheric Turbulence Channels,” J. Lightwave Technol. 33(20), 4221–4232 (2015). [CrossRef]  

2. S. Y. Cai, Z. G. Zhang, and X. Chen, “Turbulence-Resistant All Optical Relaying Based on Few-Mode EDFA in Free-Space Optical Systems,” J. Lightwave Technol. 37(9), 2042–2049 (2019). [CrossRef]  

3. G. Sahu and S. S. Pawar, “Wireless backhaul networks: centralised vs. distributed scenario,” International Journal of Systems, Control and Communications. 11(3), 261–271 (2020). [CrossRef]  

4. S. H. Fu, Y. W. Zhai, H. Zhou, J. Q. Zhang, T. L. Wang, C. Yin, and C. Q. Gao, “Demonstration of free-space one-to-many multicasting link from orbital angular momentum encoding,” Opt. Lett. 44(19), 4753–4756 (2019). [CrossRef]  

5. C. Kai, P. Huang, F. Shen, H. Zhou, and Z. Guo, “Orbital Angular Momentum Shift Keying Based Optical Communication System,” IEEE Photonics J. 9(2), 1–10 (2017). [CrossRef]  

6. M. Krenn, R. Fickler, M. Fink, J. Handsteiner, M. Malik, T. Scheidl, R. Ursin, and A. Zeilinger, “Communication with spatial modulated light through turbulent air across Vienna,” New J. Phys. 16(11), 113028 (2014). [CrossRef]  

7. M. Krenn, J. Handsteiner, M. Fink, R. Fickler, R. Ursin, M. Malik, and A. Zeilinger, “Twisted light transmission over 143 km,” Proc. Natl. Acad. Sci. U. S. A. 113(48), 13648–13653 (2016). [CrossRef]  

8. Y. Bengio and A. Courville, Deep Learning (MIT University, 2016).

9. T. Doster and A. T. Watnik, “Machine learning approach to OAM beam demultiplexing via convolutional neural networks,” Appl. Opt. 56(12), 3386–3396 (2017). [CrossRef]  

10. L. R. Hofer, L. W. Jones, J. L. Goedert, and R. V. Dragone, “Hermite–Gaussian mode detection via convolution neural networks,” J. Opt. Soc. Am. A 36(6), 936–943 (2019). [CrossRef]  

11. J. Li, M. Zhang, D. Wang, S. Wu, and Y. Zhan, “Joint atmospheric turbulence detection and adaptive demodulation technique using the CNN for the OAM-FSO communication,” Opt. Express 26(8), 10494–10508 (2018). [CrossRef]  

12. Y. L. He, J. M. Liu, P. P. Wang, W. X. X. Zhou, J. Xiong, Y. X. Wu, Y. Cheng, Y. X. Gao, Y. Li, S. Q. Chen, and D. Y. Fan, “Detecting orbital angular momentum modes of vortex beams using feed-forward neural network,” J. Lightwave Technol. 37(23), 5848–5855 (2019). [CrossRef]  

13. K. M. Iftekharuddin, A. A. S. Awwal, M. García Vázquez, A. Márquez, M. A. Matin, E. M. Knutson, S. Lohani, O. Danaci, S. D. Huver, and R. T. Glasser, “Deep learning as a tool to distinguish between high orbital angular momentum optical modes,” Proc. SPIE 9970, 997013 (2016). [CrossRef]  

14. Z. W. Liu, S. Yan, H. G. Liu, and X. F. Chen, “Superhigh-Resolution Recognition of Optical Vortex Modes Assisted by a Deep-Learning Method,” Phys. Rev. Lett. 123(18), 183902 (2019). [CrossRef]  

15. S. R. Park, L. Cattell, J. M. Nichols, A. Watnik, T. Doster, and G. K. Rohde, “De-multiplexing vortex modes in optical communications using transport-based pattern recognition,” Opt. Express 26(4), 4004–4022 (2018). [CrossRef]  

16. Q. S. Zhao, S. Q. Hao, Y. Wang, L. Fang, and C. L. Xu, “Orbital angular momentum detection based on diffractive deep neural network,” Opt. Commun. 443, 245–249 (2019). [CrossRef]  

17. Q. Tian, Z. Li, K. Hu, L. Zhu, X. Pan, Q. Zhang, Y. Wang, F. Tian, X. Yin, and X. Xin, “Turbo-coded 16-ary OAM shift keying FSO communication system combining the CNN based adaptive demodulator,” Opt. Express 26(21), 27849–27864 (2018). [CrossRef]  

18. Z. K. Li, J. B. Su, and X. H. Zhao, “Atmospheric turbulence compensation with sensorless AO in OAM-FSO combining the deep learning-based demodulator,” Opt. Commun. 460, 125111 (2020). [CrossRef]  

19. M. I. Dedo, Z. K. Wang, K. Guo, and Z. Y. Guo, “OAM mode recognition based on joint scheme of combining the Gerchberg Saxton (GS) algorithm and convolutional neural network (CNN),” Opt. Commun. 456, 124696 (2020). [CrossRef]  

20. C. Berrou, A. Glavieux, and P. Thitimajshima, “Near Shannon limit error-correcting coding and decoding: Turbo-codes. 1,” Proceedings of IEEE International Conference on Communications.2 (1993).

21. M. C. Davey and D. J. C. MacKay, “Low density parity check codes over GF(q),” Information Theory Workshop (1998).

22. H. Hwang and R. A. Haddad, “Adaptive median filters: new algorithms and results,” IEEE Trans. on Image Process. 4(4), 499–502 (1995). [CrossRef]  

23. J. Kim, J. K. Lee, and K. M. Lee, “Accurate Image Super-Resolution Using Very Deep Convolutional Networks,” IEEE Conference on Computer Vision and Pattern Recognition, 1646–1654 (2016).

24. L. Wang, X. H. Ge, R. Zi, and C. X. Wang, “Capacity Analysis of Orbital Angular Momentum Wireless Channels,” IEEE Access 5, 23069–23077 (2017). [CrossRef]  

25. L. C. Andrews, “Laser beam propagation through random media,” Bellingham, Washington: SPIE press, 57–74 (2005).

26. J. D. Strasburg and W. W. Harper, “Impact of atmospheric turbulence on beam propagation,” Proc. SPIE 5413, 93–102 (2004). [CrossRef]  

27. J. W. Goodman, “Introduction to Fourier Optics, Third Edition,” New York: Roberts and Company Publishers (2005).

28. K. Zhan, J. H. Shi, H. B. Wang, Y. G. Xie, and Q. Q. Li, “Computational Mechanisms of Networks: A Comprehensive Review,” Arch Computat. Methods Eng. 24(3), 573–588 (2017). [CrossRef]  

29. S. P. Xu, G. Z. Zhang, C. X. Li, T. Y. Liu, Y., and L. Tang, “A Fast Random-valued Impulse Noise Detection Algorithm Based on Deep Belief Network,” Journal of Electronics and Information Technology 41(5), 1130–1136 (2019).

30. S. M. Zhao, B. Wang, L. Y. Gong, Y. B. Sheng, W. W. Cheng, X. L. Dong, and B. Y. Zheng, “Improving the Atmosphere Turbulence Tolerance in Holographic Ghost Imaging System by Channel Coding,” J. Lightwave Technol. 31(17), 2823–2828 (2013). [CrossRef]  

References

  • View by:
  • |
  • |
  • |

  1. K. P. Peppas and P. T. Mathiopoulos, “Free-Space Optical Communication With Spatial Modulation and Coherent Detection Over H-K Atmospheric Turbulence Channels,” J. Lightwave Technol. 33(20), 4221–4232 (2015).
    [Crossref]
  2. S. Y. Cai, Z. G. Zhang, and X. Chen, “Turbulence-Resistant All Optical Relaying Based on Few-Mode EDFA in Free-Space Optical Systems,” J. Lightwave Technol. 37(9), 2042–2049 (2019).
    [Crossref]
  3. G. Sahu and S. S. Pawar, “Wireless backhaul networks: centralised vs. distributed scenario,” International Journal of Systems, Control and Communications. 11(3), 261–271 (2020).
    [Crossref]
  4. S. H. Fu, Y. W. Zhai, H. Zhou, J. Q. Zhang, T. L. Wang, C. Yin, and C. Q. Gao, “Demonstration of free-space one-to-many multicasting link from orbital angular momentum encoding,” Opt. Lett. 44(19), 4753–4756 (2019).
    [Crossref]
  5. C. Kai, P. Huang, F. Shen, H. Zhou, and Z. Guo, “Orbital Angular Momentum Shift Keying Based Optical Communication System,” IEEE Photonics J. 9(2), 1–10 (2017).
    [Crossref]
  6. M. Krenn, R. Fickler, M. Fink, J. Handsteiner, M. Malik, T. Scheidl, R. Ursin, and A. Zeilinger, “Communication with spatial modulated light through turbulent air across Vienna,” New J. Phys. 16(11), 113028 (2014).
    [Crossref]
  7. M. Krenn, J. Handsteiner, M. Fink, R. Fickler, R. Ursin, M. Malik, and A. Zeilinger, “Twisted light transmission over 143 km,” Proc. Natl. Acad. Sci. U. S. A. 113(48), 13648–13653 (2016).
    [Crossref]
  8. Y. Bengio and A. Courville, Deep Learning (MIT University, 2016).
  9. T. Doster and A. T. Watnik, “Machine learning approach to OAM beam demultiplexing via convolutional neural networks,” Appl. Opt. 56(12), 3386–3396 (2017).
    [Crossref]
  10. L. R. Hofer, L. W. Jones, J. L. Goedert, and R. V. Dragone, “Hermite–Gaussian mode detection via convolution neural networks,” J. Opt. Soc. Am. A 36(6), 936–943 (2019).
    [Crossref]
  11. J. Li, M. Zhang, D. Wang, S. Wu, and Y. Zhan, “Joint atmospheric turbulence detection and adaptive demodulation technique using the CNN for the OAM-FSO communication,” Opt. Express 26(8), 10494–10508 (2018).
    [Crossref]
  12. Y. L. He, J. M. Liu, P. P. Wang, W. X. X. Zhou, J. Xiong, Y. X. Wu, Y. Cheng, Y. X. Gao, Y. Li, S. Q. Chen, and D. Y. Fan, “Detecting orbital angular momentum modes of vortex beams using feed-forward neural network,” J. Lightwave Technol. 37(23), 5848–5855 (2019).
    [Crossref]
  13. K. M. Iftekharuddin, A. A. S. Awwal, M. García Vázquez, A. Márquez, M. A. Matin, E. M. Knutson, S. Lohani, O. Danaci, S. D. Huver, and R. T. Glasser, “Deep learning as a tool to distinguish between high orbital angular momentum optical modes,” Proc. SPIE 9970, 997013 (2016).
    [Crossref]
  14. Z. W. Liu, S. Yan, H. G. Liu, and X. F. Chen, “Superhigh-Resolution Recognition of Optical Vortex Modes Assisted by a Deep-Learning Method,” Phys. Rev. Lett. 123(18), 183902 (2019).
    [Crossref]
  15. S. R. Park, L. Cattell, J. M. Nichols, A. Watnik, T. Doster, and G. K. Rohde, “De-multiplexing vortex modes in optical communications using transport-based pattern recognition,” Opt. Express 26(4), 4004–4022 (2018).
    [Crossref]
  16. Q. S. Zhao, S. Q. Hao, Y. Wang, L. Fang, and C. L. Xu, “Orbital angular momentum detection based on diffractive deep neural network,” Opt. Commun. 443, 245–249 (2019).
    [Crossref]
  17. Q. Tian, Z. Li, K. Hu, L. Zhu, X. Pan, Q. Zhang, Y. Wang, F. Tian, X. Yin, and X. Xin, “Turbo-coded 16-ary OAM shift keying FSO communication system combining the CNN based adaptive demodulator,” Opt. Express 26(21), 27849–27864 (2018).
    [Crossref]
  18. Z. K. Li, J. B. Su, and X. H. Zhao, “Atmospheric turbulence compensation with sensorless AO in OAM-FSO combining the deep learning-based demodulator,” Opt. Commun. 460, 125111 (2020).
    [Crossref]
  19. M. I. Dedo, Z. K. Wang, K. Guo, and Z. Y. Guo, “OAM mode recognition based on joint scheme of combining the Gerchberg Saxton (GS) algorithm and convolutional neural network (CNN),” Opt. Commun. 456, 124696 (2020).
    [Crossref]
  20. C. Berrou, A. Glavieux, and P. Thitimajshima, “Near Shannon limit error-correcting coding and decoding: Turbo-codes. 1,” Proceedings of IEEE International Conference on Communications.2 (1993).
  21. M. C. Davey and D. J. C. MacKay, “Low density parity check codes over GF(q),” Information Theory Workshop (1998).
  22. H. Hwang and R. A. Haddad, “Adaptive median filters: new algorithms and results,” IEEE Trans. on Image Process. 4(4), 499–502 (1995).
    [Crossref]
  23. J. Kim, J. K. Lee, and K. M. Lee, “Accurate Image Super-Resolution Using Very Deep Convolutional Networks,” IEEE Conference on Computer Vision and Pattern Recognition, 1646–1654 (2016).
  24. L. Wang, X. H. Ge, R. Zi, and C. X. Wang, “Capacity Analysis of Orbital Angular Momentum Wireless Channels,” IEEE Access 5, 23069–23077 (2017).
    [Crossref]
  25. L. C. Andrews, “Laser beam propagation through random media,” Bellingham, Washington: SPIE press, 57–74 (2005).
  26. J. D. Strasburg and W. W. Harper, “Impact of atmospheric turbulence on beam propagation,” Proc. SPIE 5413, 93–102 (2004).
    [Crossref]
  27. J. W. Goodman, “Introduction to Fourier Optics, Third Edtion,” New York: Roberts and Company Publishers (2005).
  28. K. Zhan, J. H. Shi, H. B. Wang, Y. G. Xie, and Q. Q. Li, “Computational Mechanisms of Networks: A Comprehensive Review,” Arch Computat. Methods Eng. 24(3), 573–588 (2017).
    [Crossref]
  29. S. P. Xu, G. Z. Zhang, C. X. Li, T. Y. Liu, Y., and L. Tang, “A Fast Random-valued Impulse Noise Detection Algorithm Based on Deep Belief Network,” Journal of Electronics and Information Technology 41(5), 1130–1136 (2019).
  30. S. M. Zhao, B. Wang, L. Y. Gong, Y. B. Sheng, W. W. Cheng, X. L. Dong, and B. Y. Zheng, “Improving the Atmosphere Turbulence Tolerance in Holographic Ghost Imaging System by Channel Coding,” J. Lightwave Technol. 31(17), 2823–2828 (2013).
    [Crossref]




Figures (13)

Fig. 1. Numerical model of the OAM-SK-FSO system, including the transmitter, the atmospheric channel, and two receivers: the traditional receiver equipped with the S-CNN demodulator and the proposed receiver with the two-step system.

Fig. 2. Architecture of the CNN demodulator.

Fig. 3. Wavefront perturbations (rad) caused by random phase screens with $C_n^2$ valued at (a) 5e-16 m−2/3, (b) 5e-15 m−2/3, (c) 5e-14 m−2/3, (d) 5e-13 m−2/3.

Fig. 4. Intensity images of the received OAM beams carrying the 16 OAM superposition states (composed of the modes {1, −2, 3, −5}) over the simulated 1,000 m AT channels with $C_n^2$ valued at (a) 5e-16 m−2/3, (b) 5e-15 m−2/3, (c) 5e-14 m−2/3, (d) 5e-13 m−2/3.

Fig. 5. Random crosstalk matrix of the AT detector.

Fig. 6. Crosstalk matrices of AT categories. (a) Original crosstalk matrix; the shaded area indicates the AT categories (AT7 and AT8) to be combined. (b) Crosstalk matrix after the first combination; the shaded area indicates the combined AT category (AT7-8). (c) Resulting crosstalk matrix with $\kappa = 0.7$. (d) Resulting crosstalk matrix with $\kappa = 0.8$.

Fig. 7. Sampling data and training the ATDM-CNN demodulator.

Fig. 8. Workflow of RAMF.

Fig. 9. The network structure of VDSR; 19 pairs of layers are used.

Fig. 10. Random crosstalk matrices of transmitted data with superposed OAM states when $C_n^2$ is 5.00e-13 m−2/3. (a) Random crosstalk matrix of the S-CNN demodulator. (b) Random crosstalk matrix of the ATDM-CNN demodulator when $\kappa = 0.8$.

Fig. 11. Six groups of received images obtained with the S-CNN (top) and the ATDM-CNN (bottom) when $C_n^2$ is 3.34e-13 m−2/3, 3.89e-13 m−2/3, and 4.45e-13 m−2/3, respectively (from left to right).

Fig. 12. Groups of images sequentially output by the ATDM-CNN demodulator, RAMF, and VDSR (from left to right) when $C_n^2$ is 3.34e-13 m−2/3, 3.89e-13 m−2/3, and 4.45e-13 m−2/3, respectively (from top to bottom for each group).

Fig. 13. Comparison between a series of output images produced by the S-CNN demodulator and the ATDM-CNN demodulator.

Tables (6)

Table 1. Parameters used to simulate propagation through the setups

Table 2. Recognition rates in different ATs

Table 3. PSNRs of received images by S-CNN and ATDM-CNN demodulator

Table 4. Time spent in OAM-SK demodulation by using GPU acceleration

Table 5. PSNRs of images output by ATDM-CNN demodulator, RAMF, and VDSR

Table 6. PSNRs of received images when $C_n^2$ is 4.45e-13 m−2/3

Equations (15)


$$u_{LG}^{l}(r,\theta)=\frac{1}{\sqrt{2}}\left(\frac{2\,p!}{(|l|+p)!}\right)^{\frac{1}{2}}\frac{1}{w_0}\left(\frac{r}{w_0}\right)^{|l|}L_p^{|l|}\!\left(\frac{r^2}{w_0^2}\right)\exp\!\left(-\frac{r^2}{2w_0^2}\right)\exp(il\theta),\tag{1}$$
$$u_{multi\text{-}OAM}(r,\theta)=\sum_{l\in\{1,-2,3,-5\}}\beta_l\,u_{LG}^{l}(r,\theta),\quad\beta_l=0,1,\tag{2}$$
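The LG-mode expression and the OAM-SK superposition above can be sketched numerically. The following is a minimal NumPy/SciPy illustration, not the authors' code; the grid extent, beam waist, and the particular on/off symbol `beta` are illustrative assumptions.

```python
import numpy as np
from scipy.special import eval_genlaguerre, factorial

def lg_mode(r, theta, l, p=0, w0=1.0):
    """Laguerre-Gaussian field with azimuthal index l and radial index p."""
    norm = (1.0 / np.sqrt(2.0)) * np.sqrt(2.0 * factorial(p) / factorial(abs(l) + p)) / w0
    radial = (r / w0) ** abs(l) * eval_genlaguerre(p, abs(l), r**2 / w0**2)
    return norm * radial * np.exp(-(r**2) / (2 * w0**2)) * np.exp(1j * l * theta)

# Polar coordinates on a 128 x 128 grid (grid extent is an arbitrary choice here)
n = 128
x = np.linspace(-3.0, 3.0, n)
xx, yy = np.meshgrid(x, x)
r, theta = np.hypot(xx, yy), np.arctan2(yy, xx)

# One OAM-SK symbol: beta_l in {0, 1} selects which of the states {1, -2, 3, -5} are on
beta = {1: 1, -2: 1, 3: 0, -5: 1}
field = sum(b * lg_mode(r, theta, l) for l, b in beta.items())
intensity = np.abs(field) ** 2   # the intensity pattern a camera would record
```

Each distinct 4-bit symbol produces a distinct interference pattern, which is what the CNN demodulator later classifies.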
$$\theta_n(k_x,k_y)=0.033\,C_n^2\left[1+1.802\,\frac{\sqrt{k_x^2+k_y^2}}{k_l}-0.254\left(\frac{k_x^2+k_y^2}{k_l^2}\right)^{\frac{7}{12}}\right]\times\frac{\exp\!\left(-\dfrac{k_x^2+k_y^2}{k_l^2}\right)}{\left(k_x^2+k_y^2+k_0^2\right)^{\frac{11}{6}}}\tag{3}$$
$$\Phi(k_x,k_y)=2\pi k^2\,\Delta z\,\theta_n(k_x,k_y),\tag{4}$$
$$\phi(x,y)=\Re\left\{\mathcal{F}^{-1}\left[C_{NN}\,\sigma(k_x,k_y)\right]\right\},\tag{5}$$
$$\sigma^2(k_x,k_y)=\left(\frac{2\pi}{N\Delta x}\right)^2\Phi(k_x,k_y),\tag{6}$$
$$u_{multi\text{-}OAM}(z+\Delta z)=\mathcal{F}^{-1}\left[\mathcal{F}\left(u_{multi\text{-}OAM}(z)\times\exp(i\phi)\right)\times H(\Delta z)\right],\tag{7}$$
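A compact way to realize the spectrum-based phase screen and the split-step propagation described above is sketched below. The wavelength, inner/outer scales, and FFT scaling convention are assumptions for illustration, not values taken from the paper.

```python
import numpy as np

def phase_screen(n, dx, cn2, dz, k=2*np.pi/1.55e-6, L0=50.0, l0=0.01, rng=None):
    """One random turbulence phase screen phi(x, y) from the modified spectrum."""
    rng = np.random.default_rng() if rng is None else rng
    fx = np.fft.fftfreq(n, dx) * 2 * np.pi          # spatial angular frequencies
    kx, ky = np.meshgrid(fx, fx)
    k2 = kx**2 + ky**2
    kl, k0 = 3.3 / l0, 2 * np.pi / L0               # inner/outer-scale wavenumbers
    theta_n = (0.033 * cn2
               * (1 + 1.802*np.sqrt(k2)/kl - 0.254*(k2/kl**2)**(7/12))
               * np.exp(-k2 / kl**2) / (k2 + k0**2) ** (11 / 6))
    phi_spec = 2 * np.pi * k**2 * dz * theta_n      # phase power spectrum
    sigma = (2 * np.pi / (n * dx)) * np.sqrt(phi_spec)
    c = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    # one common scaling choice: undo numpy's 1/n^2 ifft2 normalization
    return np.real(np.fft.ifft2(c * sigma)) * n**2

def split_step(u, dx, dz, phi, k=2*np.pi/1.55e-6):
    """One split-step hop: apply the phase screen, then the free-space transfer H."""
    n = u.shape[0]
    fx = np.fft.fftfreq(n, dx) * 2 * np.pi
    kx, ky = np.meshgrid(fx, fx)
    H = np.exp(-1j * (kx**2 + ky**2) * dz / (2 * k))  # paraxial transfer function
    return np.fft.ifft2(np.fft.fft2(u * np.exp(1j * phi)) * H)
```

Chaining several such hops, each with a fresh screen, emulates a long turbulent path; both steps are pure phase operations, so total beam power is conserved numerically.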
$$I_{\text{standardized}}(x,y)=\frac{I(x,y)}{255.0}-0.5\tag{8}$$
$$S_j=\frac{\exp(l_j)}{\sum_{i=1}^{num}\exp(l_i)},\quad j=1,2,\ldots,num;\tag{9}$$
$$y=-\sum_{i=1}^{num}y_i^{label}\log(S_i),\tag{10}$$
$$y^{label}=[0,\ldots,1,0,\ldots,0]^{T},\tag{11}$$
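The input standardization, softmax output, and one-hot cross-entropy loss above translate directly into NumPy. This is a generic sketch of those three formulas, not the paper's training code; the 16-class count and the random test frame are illustrative.

```python
import numpy as np

def softmax(logits):
    """S_j = exp(l_j) / sum_i exp(l_i), with the usual max-shift for stability."""
    z = np.exp(logits - logits.max())
    return z / z.sum()

def cross_entropy(probs, label_index, num=16):
    """y = -sum_i y_i^label * log(S_i) with a one-hot label vector."""
    y_label = np.zeros(num)
    y_label[label_index] = 1.0          # one-hot label
    return -np.sum(y_label * np.log(probs + 1e-12))  # small epsilon avoids log(0)

# Standardize an 8-bit camera frame into [-0.5, 0.5] before feeding the CNN
frame = np.random.default_rng(1).integers(0, 256, (128, 128)).astype(float)
standardized = frame / 255.0 - 0.5
```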
$$\underline{\chi}_p(n,m)=\frac{p(m)\,\chi(m,n)}{\sum_{i=1}^{8}p(i)\,\chi(i,n)},\quad m,n=1,\ldots,8,\tag{12}$$
$$\begin{aligned}\omega_{\text{AT-I}}&=sum(\underline{\chi}_p(1,:))=(0.93,0.09,0.00,0.00,0.00,0.00,0.00,0.00),\\ \omega_{\text{AT-II}}&=sum(\underline{\chi}_p(2{:}3,:))=(0.04,0.81,0.87,0.26,0.02,0.00,0.00,0.00),\\ \omega_{\text{AT-III}}&=sum(\underline{\chi}_p(4{:}8,:))=(0.05,0.05,0.19,0.73,1.08,1.23,0.98,0.71).\end{aligned}\tag{13}$$
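The prior-weighted normalization of the detector's crosstalk matrix and the category-weight vectors ω can be written directly in NumPy. The χ matrix below is a random, strictly positive stand-in for the measured crosstalk, and the uniform prior p(i) is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
chi = rng.random((8, 8)) + 0.05      # stand-in crosstalk matrix chi(m, n)
p = np.full(8, 1.0 / 8.0)            # assumed uniform prior over the 8 AT strengths

# chi_p(n, m) = p(m) * chi(m, n) / sum_i p(i) * chi(i, n)
num = p[:, None] * chi               # num[m, n] = p(m) * chi(m, n)
chi_p = (num / num.sum(axis=0)).T    # normalize each column over m, index as (n, m)

# Category weights: row 1, rows 2-3 summed, rows 4-8 summed (1-based as in the text)
omega_I = chi_p[0, :]
omega_II = chi_p[1:3, :].sum(axis=0)
omega_III = chi_p[3:8, :].sum(axis=0)
```

By construction each row of `chi_p` sums to one, and the three ω vectors together account for all eight detected AT classes.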
$$MSE(a,b)=\frac{1}{N\times M}\sum_{x=0}^{N-1}\sum_{y=0}^{M-1}\left(a(x,y)-b(x,y)\right)^2,\tag{14}$$
$$PSNR=10\log_{10}\left[\frac{\max Val^{2}}{MSE(a,b)}\right],\tag{15}$$
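The MSE/PSNR quality metric reported in Tables 3, 5, and 6 is a standard computation; a minimal implementation for 8-bit images (max value 255 assumed) is:

```python
import numpy as np

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio between images a and b, in dB."""
    mse = np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2)
    if mse == 0:
        return float("inf")              # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```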
