768-ary Laguerre-Gaussian-mode shift keying free-space optical communication based on convolutional neural networks

Open Access

Abstract

Beyond the orbital angular momentum of Laguerre-Gaussian (LG) modes, the radial index can also be exploited as an information channel in free-space optical (FSO) communication to extend the communication capacity, resulting in LG shift keying (LG-SK) FSO communication. However, recognizing the radial index is critical yet challenging when the superposed high-order LG modes are disturbed by atmospheric turbulence (AT). In this paper, a convolutional neural network (CNN) is utilized to recognize both the azimuthal and radial indices of superposed LG modes. We experimentally demonstrate the application of the CNN model in a 10-meter 768-ary LG-SK FSO communication system at an AT of $C_n^2$ = 1e-14 m−2/3. Based on the high recognition accuracy of the CNN model (>95%) in this scheme, a colorful image can be transmitted, and the peak signal-to-noise ratio of the received image can exceed 35 dB. We anticipate that our results will stimulate further research on potential applications of LG modes with non-zero radial index in artificial-intelligence-enhanced optoelectronic systems.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

To meet the ever-increasing demand for high-capacity optical devices in information optics, various physically orthogonal dimensions of light (e.g., amplitude, phase, wavelength, and polarization) have been employed as information carriers. A special case is the physical dimension of orbital angular momentum (OAM), carried by a light beam featuring a helical phase term exp(ilφ), where l is the azimuthal index and φ is the azimuthal angle [1]. Owing to its theoretically unlimited set of orthogonal states, OAM has been utilized to boost the information capacity in quantum information optics [2], holography [3,4] and optical communication [5,6].

Current OAM-based optical communication systems can be divided into two categories: OAM division multiplexing (OAM-DM) systems, where separate OAM beams are treated as individual signal carriers [7–10], and OAM shift keying (OAM-SK) systems, where each OAM state represents a data symbol [11–13]. Indeed, OAM spans only a subspace of the well-known Laguerre-Gaussian modes, which possess two degrees of freedom: an azimuthal index l and a radial index p [14]. More recent works have reported that the radial degree of freedom can also increase the bandwidth capacity in optical communication, resulting in LG-mode free-space optical (FSO) communication systems [15,16].

The OAM demodulation technique plays a key role in OAM FSO communication systems. Common OAM detection methods include mode converters combined with modified Mach-Zehnder interferometers (MZIs) [17], inverse mode conversion based on specially designed computer-generated holograms (CGHs) [18], surface-plasmon-polariton-based OAM-sensitive nanophotonic structures [19], and coordinate transformation schemes based on multi-plane phase modulation [20,21]. The recognition methods for the radial index of LG modes, implemented either by more complicated CGHs [22] or by bulky optical MZI systems based on the fractional Fourier transform [23,24], are similar in principle to the former methods [17,18]. However, the mode detection range and diffraction efficiency of the above-mentioned schemes are quite limited, and their performance degrades severely due to misalignments of the optical setups and the phase distortion caused by environmental perturbations. As such, the ubiquitous atmospheric turbulence (AT) becomes a tough issue in practical FSO communications involving hybrid complex spatial modes [25].

Adaptive optics (AO) with intelligent algorithms, such as the Gerchberg-Saxton (GS) [26] and stochastic-parallel-gradient-descent (SPGD) [27] algorithms, is a potential solution to cope with the effects of turbulence [28]. However, the requirements of wavefront-compensating devices and probe beams increase the cost and system complexity. Moreover, multiple iterations are needed in the common GS and SPGD algorithms, resulting in a long processing time, and the lack of learning ability in these algorithms can even lead to a failure of convergence during the iterative calculations. In the past several years, benefiting from its capability to extract and recognize the intrinsic features of raw input images, the convolutional neural network (CNN), as a deep learning model [29], has been applied in OAM-based optical communication systems [30–39]. The intensity patterns of integer OAM modes [38], fractional OAM modes [40], Hermite-Gaussian modes [36] and other hybrid complex spatial modes [39,41] have been successfully categorized and demodulated with high accuracy. In particular, a CNN can be used to extract and compensate AT from the received intensity patterns [42], showing great demodulation capability especially in AT-involved FSO communication systems. Up to now, such CNN-based OAM-SK FSO communication systems have developed from 8-ary [43] and 16-ary [37,44,45] to 32-ary [46]. In this paper, we experimentally demonstrate LG shift keying (LG-SK) FSO communication to transmit a colorful image. Based on the remarkable spatial-mode recognition ability of the CNN, the alphabet in our scheme can be as large as 768, consisting of 24 different LG modes. Meanwhile, the influence of AT has also been considered.

2. Conceptual illustration of CNN-based LG-SK FSO communication

The conceptual illustration of CNN-based LG-SK FSO communication is shown in Fig. 1. Beyond the azimuthal degree of freedom (l) used in conventional OAM optical communications, the radial index (p) of the LG modes is also implemented as an information carrier in our scheme. In particular, the information is encoded into the radial index (p) and azimuthal index (l) of coherently superposed LG modes, which propagate in free space under a certain AT in a time sequence. In the demodulation process, the CNN is utilized to recognize the hybrid spatial modes. The training data are obtained from the charge-coupled device (CCD) camera in Fig. 1(a). As such, the efficient performance of the CNN improves the recognition accuracy. More notably, since no rigorous alignment is required during data acquisition, the robustness is improved compared with detection methods based on mode conversion. Finally, as shown in Fig. 1(b), the target information can be decoded from the radial and azimuthal indices of the hybrid spatial modes obtained by the CNN.


Fig. 1. The conceptual illustration of CNN-based LG-SK FSO communication. (a) A coherent superposition of multiple LG modes with different radial indices (p) and azimuthal indices (l) propagates in free space with AT, and its intensity is captured by a CCD camera. (b) The radial and azimuthal indices are obtained by the CNN.


3. Experimental setup for the generation of the complex-amplitude field

As described below, the LG mode, a common type of OAM beam, is implemented as the information carrier in our optical communication scheme. In cylindrical coordinates (r, θ, z), the complex field of an LG mode at the waist plane (z = 0) can be described as:

$$u_{\textrm{LG}}(r,\theta) = \sqrt{\frac{2p!}{\pi w_0^2 (p + |l|)!}} \left(\frac{\sqrt{2}\,r}{w_0}\right)^{|l|} L_p^{|l|}\!\left(\frac{2r^2}{w_0^2}\right) \exp\!\left(-\frac{r^2}{w_0^2}\right) \exp(il\theta),$$
where ${L_p}^{|l|}$ is the generalized Laguerre polynomial, $\lambda$ is the wavelength, ${w_0}$ represents the beam waist, $k = 2\pi/\lambda$ is the wave number, and l and p are the topological charge (azimuthal index) and the radial index of the LG mode, respectively.
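As a quick numerical reference, the short Python sketch below evaluates Eq. (1) on a Cartesian grid with scipy.special.genlaguerre. The grid size, beam waist, and the (p, l) values chosen here are illustrative assumptions, not the experimental parameters.

```python
import numpy as np
from math import factorial
from scipy.special import genlaguerre

def lg_mode(p, l, w0, x, y):
    """Complex LG field of Eq. (1) at the waist plane z = 0 on a Cartesian grid."""
    r = np.sqrt(x**2 + y**2)
    theta = np.arctan2(y, x)
    norm = np.sqrt(2.0 * factorial(p) / (np.pi * w0**2 * factorial(p + abs(l))))
    radial = (np.sqrt(2.0) * r / w0) ** abs(l) * genlaguerre(p, abs(l))(2.0 * r**2 / w0**2)
    return norm * radial * np.exp(-(r / w0) ** 2) * np.exp(1j * l * theta)

# Illustrative example: intensity of the LG mode with p = 2, l = 1
N, w0 = 256, 1.0e-3                      # grid size (pixels) and beam waist (m), assumed values
coords = np.linspace(-4 * w0, 4 * w0, N)
X, Y = np.meshgrid(coords, coords)
intensity = np.abs(lg_mode(p=2, l=1, w0=w0, x=X, y=Y)) ** 2
```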

The schematic diagram of the experimental setup is illustrated in Fig. 2(a). A 532 nm beam from a continuous-wave laser passes through a half-wave plate (HWP) in combination with a polarization beam splitter (PBS), which is used to coarsely adjust the laser power. After a beam expansion system consisting of two convex lenses with 50-mm and 300-mm focal lengths, the horizontally polarized beam illuminates the phase-only holograms imprinted on the spatial light modulator (SLM, Holoeye Pluto-2-NIR-011). To generate the complex-amplitude single or superposed LG modes, we adopt the double-Fourier-transform optical setup shown in Fig. 2(b) [47]. Notably, to spatially isolate the encoded complex-amplitude field in the back focal plane of L3 (the u-v plane in Fig. 2(b)), a carrier phase 2π(u0x + v0y) with spatial frequencies (u0, v0) is added to the phase-only computer-generated hologram (CGH) loaded on the SLM. This separates the diffraction orders in the Fourier spectrum of the CGH. As such, a spatial filter is placed in the focal plane of L3 to eliminate the noise from the other diffraction orders. Finally, the target complex field is recovered from the first order of the Fourier series of the phase-only CGH. The intensity of the target field after a 10-meter propagation distance is eventually collected by a CCD camera (Thorlabs DCU224C).


Fig. 2. Experimental setup. (a) Overview of the optical setup for the generation and propagation of the complex-amplitude field. (b) The double-Fourier-transform optical setup implemented to generate the complex-amplitude field.


Notably, the impact of AT is critical in the study of optical communication. In our experiment, the modified von Kármán turbulence spectrum model, which contains both large-scale and small-scale influences, is adopted to analyze the turbulence-induced fluctuations of the spatial modes. The atmospheric refractive index power spectrum is described as:

$${\Phi _n}(k) = 0.033C_n^2{({k^2} + 1/{L_0}^2)^{ - 11/6}}\exp ( - {k^2}/{k_l}^2), $$
where ${k_l} = 5.92/{l_0}$, with $l_0$ and $L_0$ the inner and outer scales of turbulence, and the range 0 ≤ k < ∞ means that this spectral model is applicable at all wavenumbers. $C_n^2$, the refractive index structure parameter, has typical values from 10−17 to 10−13 m−2/3, corresponding to weak and strong turbulence, respectively. As such, additional fluctuation phases embedded into the original hologram can represent the AT. As shown in the first column of Fig. 3, a larger $C_n^2$ results in a disturbed phase term with higher spatial frequency. The intensity distributions of LG modes with azimuthal index varying from 1 to 3 (l = 1, 2, 3) and radial index from 1 to 8 (p = 1-8) after a 10-meter propagation distance are shown in the third column of Fig. 3. While a weak AT ($C_n^2$ = 1e-16 m−2/3) has a negligible influence on the intensity distribution of a single LG mode, the detection of the radial and azimuthal indices under a stronger AT ($C_n^2$ = 1e-14 m−2/3) calls for a robust method.
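The paper does not state how the fluctuation phases were synthesised from Eq. (2); a common approach is Fourier-domain filtering of white noise by the square root of the phase power spectrum. The sketch below follows this approach under assumed values for the wavelength, layer thickness, and inner/outer scales, and its overall scaling follows one standard discrete-FT convention.

```python
import numpy as np

def von_karman_phase_screen(N, dx, Cn2, L0=10.0, l0=5e-3,
                            wavelength=532e-9, dz=10.0, seed=0):
    """One random turbulence phase screen (rad) on an N x N grid with pixel pitch dx (m)."""
    rng = np.random.default_rng(seed)
    k_opt = 2 * np.pi / wavelength                    # optical wavenumber
    kx = 2 * np.pi * np.fft.fftfreq(N, d=dx)          # angular spatial frequencies (rad/m)
    KX, KY = np.meshgrid(kx, kx)
    kappa2 = KX**2 + KY**2
    km = 5.92 / l0                                    # inner-scale cut-off of Eq. (2)
    phi_n = 0.033 * Cn2 * (kappa2 + 1.0 / L0**2) ** (-11.0 / 6.0) * np.exp(-kappa2 / km**2)
    phi_phase = 2 * np.pi * k_opt**2 * dz * phi_n     # phase spectrum of a layer of thickness dz
    dk = 2 * np.pi / (N * dx)                         # spectral grid spacing
    noise = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    c = noise * np.sqrt(phi_phase) * dk               # random spectral amplitudes
    c[0, 0] = 0.0                                     # drop the piston (constant-phase) term
    # The real part gives one screen realisation; the imaginary part is an independent second one.
    return np.real(np.fft.ifft2(c)) * N**2

screen = von_karman_phase_screen(N=256, dx=2e-4, Cn2=1e-14)
```

In the experiment, such a fluctuation phase is simply added to the phase of the CGH before it is displayed on the SLM, as described above.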


Fig. 3. The influence of AT on the intensity distributions of single LG modes. (a), (d), (g) The fluctuation phase distributions caused by AT with different refractive index structure parameters in the von Kármán model. (b), (e), (h) The hologram of a single LG mode combined with the disturbed phase terms of different refractive index structure parameters yields the final holograms. (c), (f), (i) Experimental intensity distributions of different LG modes influenced by AT with different refractive index structure parameters.


Next, in order to improve the efficiency of information transmission, a superposition of multiple LG modes can be generated simultaneously to implement LG-SK FSO communication. The complex electric field of such a coherent superposition can be expressed as

$$u_{\textrm{LG}}^{s}(r,\theta) = \sum\limits_{l = 0}^{n}\sum\limits_{p = 1}^{n} \sqrt{\frac{2p!}{\pi w_0^2 (p + |l|)!}} \left(\frac{\sqrt{2}\,r}{w_0}\right)^{|l|} L_p^{|l|}\!\left(\frac{2r^2}{w_0^2}\right) \exp\!\left(-\frac{r^2}{w_0^2}\right) \exp(il\theta)$$

To generate the complex-amplitude field $u_{\textrm{LG}}^{s}({r,\theta } )= u_{\textrm{LG}}^{s}\left( {\sqrt {{x^2} + {y^2}} ,\arctan \frac{y}{x}} \right)$ $= a({x,y} )\exp [{i\phi ({x,y} )} ]$, where $a(x,y)$ and $\phi (x,y)$ represent the amplitude and phase, respectively, the phase-only CGH can be expressed as

$$h(x,y) = \exp[i\psi(a,\phi)],$$

The Fourier series in the domain of $\phi $ can be expressed as

$$h(x,y) = \sum\limits_{q ={-} \infty }^\infty {{h_q}(x,y)} , $$
where
$${h_q}(x,y) = c_q^a\exp (iq\phi ), $$
$$c_q^a = (2\pi)^{ - 1}\int_{ - \pi }^\pi {\exp [i\psi (\phi ,a)} ]\exp ( - iq\phi )d\phi. $$

Finally, the complex-amplitude field can be recovered from the first-order term (q = 1) provided that the identity $\textrm{c}_1^a = Aa$ holds. Here, A is a positive constant.
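The following sketch illustrates Eqs. (4)-(7) in simplified form: a target complex field (for instance, a superposition built by summing calls to the lg_mode helper sketched earlier) is encoded into a phase-only hologram, the carrier phase 2π(u0x + v0y) shifts the useful term off-axis, and a circular filter in the simulated Fourier plane isolates the first order. For brevity, ψ(a, φ) is replaced by the plain kinoform ψ = φ, which is not the exact encoding of Ref. [47]; the parameters of the example run are assumptions.

```python
import numpy as np

def phase_only_cgh(target_field, dx, u0, v0):
    """Phase-only hologram h(x, y) = exp[i(psi + 2*pi*(u0*x + v0*y))] with psi = arg(target)."""
    N = target_field.shape[0]
    coords = (np.arange(N) - N // 2) * dx
    X, Y = np.meshgrid(coords, coords)
    psi = np.angle(target_field)                      # simplified psi(a, phi): amplitude dropped
    return np.exp(1j * (psi + 2 * np.pi * (u0 * X + v0 * Y)))

def first_order_filter(hologram, dx, u0, v0, radius):
    """Simulate the filter in the focal plane of L3: keep only the spectrum near (u0, v0)."""
    N = hologram.shape[0]
    fx = np.fft.fftfreq(N, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    mask = (FX - u0) ** 2 + (FY - v0) ** 2 < radius**2
    spectrum = np.fft.fft2(hologram) * mask           # pass only the first diffraction order
    # (Recentring the passed spectrum on zero frequency would also remove the carrier tilt.)
    return np.fft.ifft2(spectrum)

# Illustrative run with a vortex-carrying Gaussian as the target field
N, dx = 512, 10e-6
c = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(c, c)
target = np.exp(-(X**2 + Y**2) / (0.5e-3) ** 2) * np.exp(1j * np.arctan2(Y, X))
recovered = first_order_filter(phase_only_cgh(target, dx, 2e4, 2e4), dx, 2e4, 2e4, radius=8e3)
```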

As an example, illustrated in Fig. 4(a), five LG modes (LG01, LG10, LG20, LG21, LG22) are randomly selected and coherently superposed. The intensity patterns of the superposed modes influenced by ATs with different refractive index structure parameters are listed there. In contrast to the negligible influence of weak AT on the transmission of a single LG mode, the intensity pattern of the superposed LG modes already exhibits an obvious distortion at an AT of $C_n^2$ = 1e-16 m−2/3. As illustrated in Fig. 4(b), the center of the intensity pattern keeps shifting as $C_n^2$ increases (from 1e-16 m−2/3 to 1e-14 m−2/3), indicating the difficulty of mode detection in LG multiplexing optical communication based on the traditional inverse mode-conversion method.


Fig. 4. The influence of AT on the intensity distribution of superposed LG modes. (a) The intensity pattern of the five superposed LG modes (LG01, LG10, LG20, LG21, LG22) is shown in the first row, and the intensity patterns influenced by ATs with different refractive index structure parameters are shown in the second row. (b) The intensity distributions of the superposed LG modes in (a) along the central axis.


4. CNN model for the recognition of superposed LG modes

Inspired by the remarkable performance of CNNs in image classification tasks, we have designed a CNN model to identify hybrid spatial modes consisting of superposed coaxial LG modes. Here, the radial and azimuthal indices of the constituent LG modes range from 0 to 2, resulting in a base spatial mode set (LG00, LG01, LG02, LG10, LG11, LG12, LG20, LG21, LG22). Thus, the possible hybrid spatial modes comprise $2^9 = 512$ categories. Using the experimental setup in Fig. 2(a), two data sets of intensity distributions of the hybrid spatial modes were acquired at different ATs ($C_n^2$ = 1e-16 m−2/3 and $C_n^2$ = 1e-14 m−2/3). To improve the fidelity of the CNN, a large amount of data should be collected to construct the data set. As such, we synchronized the loading frequency of the holograms on the SLM with the automatic exposure interval of the CCD camera. Finally, for each data set at a specific AT, the total number of patterns is 512×120 = 61440, split into ∼80% for training and ∼20% for validation. As a set of examples, the 120 intensity patterns of the superposed modes (LG01, LG10, LG20, LG21, LG22) at an AT of $C_n^2$ = 1e-14 m−2/3 are shown in Fig. 5(a).
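For clarity, the sketch below shows one way the $2^9$ superposition states can be indexed as class labels for training: each of the nine base modes is either absent or present, giving a 9-bit number. The particular bit ordering here is an assumption for illustration, not necessarily the labelling used in the experiment.

```python
BASE_MODES = ["LG00", "LG01", "LG02", "LG10", "LG11", "LG12", "LG20", "LG21", "LG22"]

def modes_to_label(mode_set):
    """Map a set of base-mode names to a class index in [0, 511]."""
    return sum(1 << i for i, m in enumerate(BASE_MODES) if m in mode_set)

def label_to_modes(label):
    """Recover the set of base-mode names from a class index."""
    return {m for i, m in enumerate(BASE_MODES) if (label >> i) & 1}

# The example superposition of Fig. 5(a)
example = {"LG01", "LG10", "LG20", "LG21", "LG22"}
assert label_to_modes(modes_to_label(example)) == example
```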


Fig. 5. The CNN model for the recognition of 512-ary superposed LG modes. (a) Intensity patterns of the superposed modes (LG01, LG10, LG20, LG21, LG22) at an AT of $C_n^2$ = 1e-14 m−2/3. (b) The architecture of the CNN model. (c) The accuracy and loss curves as functions of epoch in the training process.


The proposed CNN architecture is illustrated in Fig. 5(b); it consists of an input layer, an output layer, and 5 hidden layers. Firstly, images with a size of 256×256 are fed into the first layer, which features 16 convolutional filters of 5×5 dimension. The next four layers use convolutional filters of 3×3 dimension, with 32, 64, 126, and 256 filters, respectively. After passing through these 5 convolutional layers, the feature map size reaches 256×5×5. Then, the output values of the feature extraction layers are flattened and serve as input to the fully connected classification layers. The ReLU activation function is used, and dropout is added in the fully connected layers to prevent overfitting. The last layer provides an output with 512 nodes, each corresponding to one state of the multiplexed LG modes. Finally, the Softmax activation function guarantees that only one of these nodes is activated for a given input after training of the model.
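A hedged PyTorch sketch of this architecture is given below. The filter counts and kernel sizes follow the text; the pooling positions, padding, the hidden width of the fully connected layer, and the dropout rate are not specified in the paper and are assumptions of this sketch (an adaptive pooling stage enforces the stated 256×5×5 feature size).

```python
import torch
import torch.nn as nn

class LGModeNet(nn.Module):
    """Five convolutional layers (16@5x5, then 32/64/126/256@3x3) and a 512-way classifier."""
    def __init__(self, num_classes=512, dropout=0.5):
        super().__init__()
        channels, layers, in_ch = [16, 32, 64, 126, 256], [], 1   # grayscale intensity input
        for i, out_ch in enumerate(channels):
            k = 5 if i == 0 else 3
            layers += [nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=k // 2),
                       nn.ReLU(inplace=True),
                       nn.MaxPool2d(2)]                           # assumed pooling scheme
            in_ch = out_ch
        self.features = nn.Sequential(*layers, nn.AdaptiveMaxPool2d((5, 5)))
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256 * 5 * 5, 1024), nn.ReLU(inplace=True),  # assumed hidden width
            nn.Dropout(dropout),
            nn.Linear(1024, num_classes))

    def forward(self, x):                                         # softmax is applied at inference
        return self.classifier(self.features(x))

model = LGModeNet()
logits = model(torch.randn(2, 1, 256, 256))                       # two dummy 256x256 patterns
probs = torch.softmax(logits, dim=1)                              # probabilities over 512 classes
```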

The left panel of Fig. 5(c) shows the accuracy as a function of epoch in the training process. Following a dramatic increase within the first 200 epochs, the mode-recognition accuracies remain stable and reach approximately 100% and 95% after multiple iterations under ATs of $C_n^2$ = 1e-16 m−2/3 and $C_n^2$ = 1e-14 m−2/3, respectively. Here, the mean time required to test one pattern is only 2.7 ms using an Intel Xeon Platinum 8160 CPU and an NVIDIA Titan RTX GPU, indicating that the CNN model can identify superposed LG modes accurately and rapidly. Besides, the right panel shows that the loss continuously declines with the epoch. Under an AT of $C_n^2$ = 1e-16 m−2/3, the loss function stabilizes and reaches around 0 when the training reaches 400 iterations. Notably, with the increase of the AT to $C_n^2$ = 1e-14 m−2/3, more iterations are needed to reduce the loss value to around 0.

5. CNN-based LG-SK FSO communication

To further demonstrate the applicability of the CNN-based method for exploiting superposed LG modes in optical communication, we have experimentally constructed a 10-m free-space communication link to transmit an 8-bit colorful image. The encoding process is illustrated in Fig. 6(a). The colorful image is divided into three color channels (R, G, and B), denoted by the radial index of the LG modes (p = 1, 2, and 3). Additionally, the pixel values ranging from 0 to 255 are mapped to octets, which are represented by the azimuthal indices of the LG modes (l = 1-8). For example, the pixel value 125 in the red channel can be denoted by an octet (10010011), resulting in the superposed LG mode set (LG11, LG41, LG71, LG81). Moreover, a green pixel with a value of 193 and a blue pixel with a value of 52 can be represented by the LG mode sets (LG12, LG32, LG52, LG62, LG82) and (LG13, LG43, LG73), respectively. The holograms are then loaded on the SLM in a time sequence. As such, to transmit a colorful image with a size of 128×128 in our scheme, the number of holograms is 128×128×3 and the total transmission time is ∼13.7 min. After a propagation distance of 10 meters in our lab, the images captured by the CCD camera are input into the CNN. The colorful picture can eventually be reconstructed from the color-channel and pixel-value information decoded by the CNN.
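The sketch below mirrors this encoding step under one explicit assumption: the octet is taken as the 8-bit binary representation of the pixel value, with the k-th bit (MSB first) switching on azimuthal index l = k while the colour channel fixes the radial index p. The actual bit-to-mode convention of the experiment may differ, so the mapping here is purely illustrative.

```python
CHANNEL_TO_P = {"R": 1, "G": 2, "B": 3}                 # colour channel -> radial index

def pixel_to_modes(value, channel):
    """Encode one 8-bit pixel of a given colour channel into a set of (l, p) mode indices."""
    p = CHANNEL_TO_P[channel]
    bits = format(value, "08b")                         # e.g. 125 -> '01111101' (MSB first)
    return {(l, p) for l, bit in enumerate(bits, start=1) if bit == "1"}

def modes_to_pixel(mode_set):
    """Invert the mapping (the empty set, i.e. pixel value 0, is not handled here)."""
    p = next(iter(mode_set))[1]
    channel = {v: k for k, v in CHANNEL_TO_P.items()}[p]
    value = sum(1 << (8 - l) for l, _ in mode_set)      # undo the MSB-first bit ordering
    return value, channel

assert modes_to_pixel(pixel_to_modes(125, "R")) == (125, "R")
```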


Fig. 6. The scheme of LG-mode multiplexing optical communication for a colourful image. (a) The encoding mechanism of the colourful image with superposed LG modes. (b) Illustration of the decoding process based on the CNN.


Notably, the hybrid spatial mode set for the current architecture has $3\times2^8 = 768$ categories. At a specific AT ($C_n^2$ = 1e-14 m−2/3), the total number of patterns is 768×120 = 92160. As shown in Fig. 7(a), when the number of patterns per superposed spatial mode used for training (training number) increases from 20 to 100, the decoded image gradually approaches the ground truth. Specifically, the peak signal-to-noise ratio (PSNR) [37,48] is used as a criterion to evaluate the quality of the image in Fig. 7(b). When the training number is 60, the PSNR reaches 34.16 dB, which is comparable to the typical values for image compression [49]. Moreover, the accuracy and loss of the CNN model are also analyzed. Similar to the PSNR curve, the accuracy improves dramatically (from 48.84% to 85.12%) as the training number increases from 20 to 60, and it finally reaches 95.12% for a training number of 100. Furthermore, the loss function can reach around 0 whenever the training number is larger than 60, whereas more training iterations are needed for smaller training numbers (Fig. 7(c)). To illustrate the remarkable performance of the CNN model, the confusion matrix of the CNN model with a training number of 100 is given in Fig. 7(d).
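For reference, the PSNR criterion of Fig. 7(b) can be computed as follows for 8-bit images (MAX = 255); both images are assumed to be arrays of identical shape.

```python
import numpy as np

def psnr(reference, decoded, max_value=255.0):
    """Peak signal-to-noise ratio (dB) between a reference and a decoded image."""
    mse = np.mean((reference.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(max_value**2 / mse)
```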


Fig. 7. Experimental performance of the CNN-based free-space LG-mode multiplexing optical communication. (a) The colourful images decoded by different CNN models. (b) PSNR of the decoded images in (a). (c) The accuracy and loss curves as functions of epoch. (d) The confusion matrix of the CNN model with a training number of 100.


In our experiment, the size of the receiving aperture, determined by the CCD camera, limits the propagation distance to 10 meters [16]. To analyze the influence of the propagation distance on the CNN model, we obtain the training (76800 patterns) and test data (15360 patterns) by numerically calculating the intensity patterns of the superposed LG modes at different propagation distances (from 10 m to 100 m). As shown in Fig. 8(a), the distortion of the intensity patterns increases with the propagation distance under the same AT ($C_n^2$ = 1e-14 m−2/3), resulting in a decrease of the CNN model accuracy from 95.26% to 70.92% (Fig. 8(b)). We also study the effect of AT on the accuracy at a propagation distance of 10 m. As mentioned in the previous section (Fig. 4(a)), the additional random phase distributions representing the ATs weaken the differences among the intensity distributions of the different superposed LG modes, especially for stronger ATs. As a result, the curve in Fig. 8(c) shows that the accuracies for $C_n^2$ varying from 1e-16 m−2/3 to 1e-14 m−2/3 can exceed 95%, but with a further increase of $C_n^2$ to 5e-14 m−2/3 and 1e-13 m−2/3, the accuracies decrease to 87.62% and 80.50%, respectively.
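The propagated patterns used here can be computed with a standard angular-spectrum routine such as the one sketched below; the paper does not state which propagation method was used, and the sampling parameters are left to the caller. A turbulent link can then be emulated by alternating short propagation steps with phase screens like the one sketched in Section 3.

```python
import numpy as np

def angular_spectrum_propagate(field, dx, wavelength, z):
    """Propagate a complex field sampled with pixel pitch dx (m) over a free-space distance z (m)."""
    N = field.shape[0]
    fx = np.fft.fftfreq(N, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    kz2 = (1.0 / wavelength) ** 2 - FX**2 - FY**2      # squared longitudinal spatial frequency
    kz = 2 * np.pi * np.sqrt(np.maximum(kz2, 0.0))
    transfer = np.exp(1j * kz * z) * (kz2 > 0)         # evanescent components are discarded
    return np.fft.ifft2(np.fft.fft2(field) * transfer)
```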


Fig. 8. Influences of propagation distance and AT on the performance of the CNN model. (a) Intensity patterns of the superposed LG modes (LG12, LG22, LG32, LG42, LG62, LG72) at propagation distances from 10 m to 100 m. The accuracies of the CNN model (b) at different propagation distances and (c) under different ATs.


6. Conclusion and discussion

In this paper, a CNN-based method has been proposed to recognize both the radial and azimuthal indices of superposed LG modes, and we have experimentally demonstrated a 10-meter 768-ary LG-mode multiplexing optical communication system to transmit a colorful image. Even at an AT of $C_n^2$ = 1e-14 m−2/3, the CNN-based method exhibits the advantages of high accuracy (>95%), fast speed (∼2.7 ms per pattern), and no strict alignment requirements in the process of mode detection, resulting in a received image with a high PSNR (>35 dB). To further improve the capacity of such a system, LG modes with negative azimuthal indices could also be exploited as information channels without increasing the intensity area. The input LG modes can be converted into Hermite-Gaussian modes by placing a π/2 mode converter [50] before the CCD camera, breaking the limitation caused by the identical intensity distributions of LG modes with opposite azimuthal indices and the same radial index. In addition, phase compensation methods, such as the GS algorithm [51–54] and self-adaptive methods [44,55], should be adopted under stronger AT. Last but not least, we note that while the decoding process is fast, the data collection and training for the current CNN model are time-consuming (∼2 days), which is caused by the slow response speed of digital devices such as the CCD camera and the computer. In the future, all-optical diffractive neural networks (DNNs) [56–58] could be introduced to solve this problem in practical applications.

Funding

Shanghai Rising-Star Program (20QA1404100); National Natural Science Foundation of China (62005164); Zhangjiang National Innovation Demonstration Zone (ZJ2019-ZD-005).

Disclosures

The authors declare no conflicts of interest.

References

1. L. Allen, M. W. Beijersbergen, R. Spreeuw, and J. Woerdman, “Orbital angular momentum of light and the transformation of laguerre-gaussian laser modes,” Phys. Rev. A 45(11), 8185–8189 (1992). [CrossRef]  

2. A. Mair, A. Vaziri, G. Weihs, and A. Zeilinger, “Entanglement of the orbital angular momentum states of photons,” Nature 412(6844), 313–316 (2001). [CrossRef]  

3. X. Fang, H. Ren, and M. Gu, “Orbital angular momentum holography for high-security encryption,” Nat. Photonics 14(2), 102–108 (2020). [CrossRef]  

4. X. Fang, H. Wang, H. Yang, Z. Ye, Y. Wang, Y. Zhang, X. Hu, S. Zhu, and M. Xiao, “Multichannel nonlinear holography in a two-dimensional nonlinear photonic crystal,” Phys. Rev. A 102(4), 043506 (2020). [CrossRef]  

5. N. Bozinovic, Y. Yue, Y. Ren, M. Tur, P. Kristensen, H. Huang, A. E. Willner, and S. Ramachandran, “Terabit-scale orbital angular momentum mode division multiplexing in fibers,” Science 340(6140), 1545–1548 (2013). [CrossRef]  

6. J. Wang, J.-Y. Yang, I. M. Fazal, N. Ahmed, Y. Yan, H. Huang, Y. Ren, Y. Yue, S. Dolinar, and M. Tur, “Terabit free-space data transmission employing orbital angular momentum multiplexing,” Nat. Photonics 6(7), 488–496 (2012). [CrossRef]  

7. T. Lei, M. Zhang, Y. Li, P. Jia, G. N. Liu, X. Xu, Z. Li, C. Min, J. Lin, and C. Yu, “Massive individual orbital angular momentum channels for multiplexing enabled by dammann gratings,” Light: Sci. Appl. 4(3), e257 (2015). [CrossRef]  

8. K. Pang, H. Song, Z. Zhao, R. Zhang, H. Song, G. Xie, L. Li, C. Liu, J. Du, and A. F. Molisch, “400-gbit/s qpsk free-space optical communication link based on four-fold multiplexing of hermite–gaussian or laguerre–gaussian modes by varying both modal indices,” Opt. Lett. 43(16), 3889–3892 (2018). [CrossRef]  

9. Y. Yue, Y. Yan, N. Ahmed, J.-Y. Yang, L. Zhang, Y. Ren, H. Huang, K. M. Birnbaum, B. I. Erkmen, and S. Dolinar, “Mode properties and propagation effects of optical orbital angular momentum (oam) modes in a ring fiber,” IEEE Photonics J. 4(2), 535–543 (2012). [CrossRef]  

10. A. Wang, L. Zhu, Y. Zhao, S. Li, W. Lv, J. Xu, and J. Wang, “Adaptive water-air-water data information transfer using orbital angular momentum,” Opt. Express 26(7), 8669–8678 (2018). [CrossRef]  

11. G. Gibson, J. Courtial, M. J. Padgett, M. Vasnetsov, V. Pas’ko, S. M. Barnett, and S. Franke-Arnold, “Free-space information transfer using light beams carrying orbital angular momentum,” Opt. Express 12(22), 5448–5456 (2004). [CrossRef]  

12. M. Krenn, J. Handsteiner, M. Fink, R. Fickler, R. Ursin, M. Malik, and A. Zeilinger, “Twisted light transmission over 143 km,” Proc. Natl. Acad. Sci. 113(48), 13648–13653 (2016). [CrossRef]  

13. A. Trichili, A. B. Salem, A. Dudley, M. Zghal, and A. Forbes, “Encoding information using laguerre gaussian modes over free space turbulence media,” Opt. Lett. 41(13), 3086–3089 (2016). [CrossRef]  

14. A. M. Yao and M. J. Padgett, “Orbital angular momentum: Origins, behavior and applications,” Adv. Opt. Photonics 3(2), 161–204 (2011). [CrossRef]  

15. A. Trichili, C. Rosales-Guzmán, A. Dudley, B. Ndagano, A. Ben Salem, M. Zghal, and A. Forbes, “Optical communication beyond orbital angular momentum,” Sci. Rep. 6(1), 27674 (2016). [CrossRef]  

16. L. Li, G. Xie, Y. Yan, Y. Ren, P. Liao, Z. Zhao, N. Ahmed, Z. Wang, C. Bao, and A. J. Willner, “Power loss mitigation of orbital-angular-momentum-multiplexed free-space optical links using nonzero radial index laguerre–gaussian beams,” J. Opt. Soc. Am. B 34(1), 1–6 (2017). [CrossRef]  

17. J. Leach, M. J. Padgett, S. M. Barnett, S. Franke-Arnold, and J. Courtial, “Measuring the orbital angular momentum of a single photon,” Phys. Rev. Lett. 88(25), 257901 (2002). [CrossRef]  

18. L. Zhu, M. Sun, M. Zhu, J. Chen, X. Gao, W. Ma, and D. Zhang, “Three-dimensional shape-controllable focal spot array created by focusing vortex beams modulated by multi-value pure-phase grating,” Opt. Express 22(18), 21354–21367 (2014). [CrossRef]  

19. Z. Yue, H. Ren, S. Wei, J. Lin, and M. Gu, “Angular-momentum nanometrology in an ultrathin plasmonic topological insulator film,” Nat. Commun. 9(1), 4413 (2018). [CrossRef]  

20. Y. Wen, I. Chremmos, Y. Chen, G. Zhu, J. Zhang, J. Zhu, Y. Zhang, J. Liu, and S. Yu, “Compact and high-performance vortex mode sorter for multi-dimensional multiplexed fiber communication systems,” Optica 7(3), 254–262 (2020). [CrossRef]  

21. Y. Wen, I. Chremmos, Y. Chen, J. Zhu, Y. Zhang, and S. Yu, “Spiral transformation for high-resolution and efficient sorting of optical vortex modes,” Phys. Rev. Lett. 120(19), 193904 (2018). [CrossRef]  

22. S. Pachava, A. Dixit, and B. Srinivasan, “Modal decomposition of laguerre gaussian beams with different radial orders using optical correlation technique,” Opt. Express 27(9), 13182–13193 (2019). [CrossRef]  

23. X. Gu, M. Krenn, M. Erhard, and A. Zeilinger, “Gouy phase radial mode sorter for light: Concepts and experiments,” Phys. Rev. Lett. 120(10), 103601 (2018). [CrossRef]  

24. Y. Zhou, M. Mirhosseini, D. Fu, J. Zhao, S. M. H. Rafsanjani, A. E. Willner, and R. W. Boyd, “Sorting photons by radial quantum number,” Phys. Rev. Lett. 119(26), 263602 (2017). [CrossRef]  

25. A. E. Willner, H. Huang, Y. Yan, Y. Ren, N. Ahmed, G. Xie, C. Bao, L. Li, Y. Cao, and Z. Zhao, “Optical communications using orbital angular momentum beams,” Adv. Opt. Photonics 7(1), 66–106 (2015). [CrossRef]  

26. S. Fu, S. Zhang, T. Wang, and C. Gao, “Pre-turbulence compensation of orbital angular momentum beams based on a probe and the gerchberg–saxton algorithm,” Opt. Lett. 41(14), 3185–3188 (2016). [CrossRef]  

27. G. Xie, Y. Ren, H. Huang, M. P. Lavery, N. Ahmed, Y. Yan, C. Bao, L. Li, Z. Zhao, and Y. Cao, “Phase correction for a distorted orbital angular momentum beam using a zernike polynomials-based stochastic-parallel-gradient-descent algorithm,” Opt. Lett. 40(7), 1197–1200 (2015). [CrossRef]  

28. Y. Ren, G. Xie, H. Huang, N. Ahmed, Y. Yan, L. Li, C. Bao, M. P. Lavery, M. Tur, and M. A. Neifeld, “Adaptive-optics-based simultaneous pre-and post-turbulence compensation of multiple orbital-angular-momentum beams in a bidirectional free-space optical link,” Optica 1(6), 376–382 (2014). [CrossRef]  

29. Y. Lecun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521(7553), 436–444 (2015). [CrossRef]  

30. Z. Wang and Z. Guo, “Adaptive demodulation technique for efficiently detecting orbital angular momentum (oam) modes based on the improved convolutional neural network,” IEEE Access 7, 163633–163643 (2019). [CrossRef]  

31. Z. Li, J. Su, and X. Zhao, “Atmospheric turbulence compensation with sensorless ao in oam-fso combining the deep learning-based demodulator,” Opt. Commun. 460, 125111 (2020). [CrossRef]  

32. J. Liu, P. Wang, X. Zhang, Y. He, X. Zhou, H. Ye, Y. Li, S. Xu, S. Chen, and D. Fan, “Deep learning based atmospheric turbulence compensation for orbital angular momentum beam distortion and communication,” Opt. Express 27(12), 16671–16688 (2019). [CrossRef]  

33. Z. Liu, S. Yan, H. Liu, and X. Chen, “Superhigh-resolution recognition of optical vortex modes assisted by a deep-learning method,” Phys. Rev. Lett. 123(18), 183902 (2019). [CrossRef]  

34. Z. Li, J. Su, and X. Zhao, “Two-step system for image receiving in oam-sk-fso link,” Opt. Express 28(21), 30520–30541 (2020). [CrossRef]  

35. M. I. Dedo, Z. Wang, K. Guo, and Z. Guo, “Oam mode recognition based on joint scheme of combining the gerchberg–saxton (gs) algorithm and convolutional neural network (cnn),” Opt. Commun. 456, 124696 (2020). [CrossRef]  

36. L. Hofer, L. Jones, J. Goedert, and R. Dragone, “Hermite–gaussian mode detection via convolution neural networks,” J. Opt. Soc. Am. A 36(6), 936–943 (2019). [CrossRef]  

37. S. A. El-Meadawy, H. M. Shalaby, N. A. Ismail, F. E. Abd El-Samie, and A. E. Farghal, “Free-space 16-ary orbital angular momentum coded optical communication system based on chaotic interleaving and convolutional neural networks,” Appl. Opt. 59(23), 6966–6976 (2020). [CrossRef]  

38. P. Wang, J. Liu, L. Sheng, Y. He, W. Xiong, Z. Huang, X. Zhou, Y. Li, S. Chen, and X. Zhang, “Convolutional neural network-assisted optical orbital angular momentum recognition and communication,” IEEE Access 7, 162025–162035 (2019). [CrossRef]  

39. Z. Wang, M. I. Dedo, K. Guo, K. Zhou, F. Shen, Y. Sun, S. Liu, and Z. Guo, “Efficient recognition of the propagated orbital angular momentum modes in turbulences with the convolutional neural network,” IEEE Photonics J. 11(3), 1–14 (2019). [CrossRef]  

40. Y. Na and D.-K. Ko, “Deep-learning-based high-resolution recognition of fractional-spatial-mode-encoded data for free-space optical communications,” Sci. Rep. 11(1), 1–11 (2021). [CrossRef]  

41. T. Doster and A. T. Watnik, “Machine learning approach to oam beam demultiplexing via convolutional neural networks,” Appl. Opt. 56(12), 3386–3396 (2017). [CrossRef]  

42. S. Lohani and R. T. Glasser, “Turbulence correction with artificial neural networks,” Opt. Lett. 43(11), 2611–2614 (2018). [CrossRef]  

43. J. Li, M. Zhang, and D. Wang, “Adaptive demodulator using machine learning for orbital angular momentum shift keying,” IEEE Photonics Technol. Lett. 29(17), 1455–1458 (2017). [CrossRef]  

44. J. Li, M. Zhang, D. Wang, S. Wu, and Y. Zhan, “Joint atmospheric turbulence detection and adaptive demodulation technique using the cnn for the oam-fso communication,” Opt. Express 26(8), 10494–10508 (2018). [CrossRef]  

45. Q. Tian, Z. Li, K. Hu, L. Zhu, X. Pan, Q. Zhang, Y. Wang, F. Tian, X. Yin, and X. Xin, “Turbo-coded 16-ary oam shift keying fso communication system combining the cnn-based adaptive demodulator,” Opt. Express 26(21), 27849–27864 (2018). [CrossRef]  

46. J. Du and J. Wang, “High-dimensional structured light coding/decoding for free-space optical communications free of obstructions,” Opt. Lett. 40(21), 4827–4830 (2015). [CrossRef]  

47. V. Arrizón, U. Ruiz, R. Carrada, and L. A. González, “Pixelated phase computer holograms for the accurate encoding of scalar complex fields,” J. Opt. Soc. Am. A 24(11), 3500–3507 (2007). [CrossRef]  

48. Z. Li and J. Su, “Performance comparison of two oam shift keying cnn demodulating schemes,” in International Conference on Optoelectronic and Microelectronic Technology and Application, (International Society for Optics and Photonics, 2020), 1161703.

49. R. Dubolia, R. Singh, S. S. Bhadoria, and R. Gupta, “Digital image watermarking by using discrete wavelet transform and discrete cosine transform and comparison based on psnr,” in 2011 International Conference on Communication Systems and Network Technologies, (IEEE, 2011), 593–596.

50. X. Fang, Z. Kuang, P. Chen, H. Yang, Q. Li, W. Hu, Y. Lu, Y. Zhang, and M. Xiao, “Examining second-harmonic generation of high-order laguerre–gaussian modes through a single cylindrical lens,” Opt. Lett. 42(21), 4387–4390 (2017). [CrossRef]  

51. H. Chang, X.-L. Yin, X.-Z. Cui, Z.-C. Zhang, J.-X. Ma, G.-H. Wu, L.-J. Zhang, and X.-J. Xin, “Adaptive optics compensation of orbital angular momentum beams with a modified gerchberg–saxton-based phase retrieval algorithm,” Opt. Commun. 405, 271–275 (2017). [CrossRef]  

52. I. B. Djordjevic, J. A. Anguita, and B. Vasic, “Error-correction coded orbital-angular-momentum modulation for fso channels affected by turbulence,” J. Lightwave Technol. 30(17), 2846–2852 (2012). [CrossRef]  

53. A. Jesacher, A. Schwaighofer, S. Fürhapter, C. Maurer, S. Bernet, and M. Ritsch-Marte, “Wavefront correction of spatial light modulators using an optical vortex image,” Opt. Express 15(9), 5801–5808 (2007). [CrossRef]  

54. M. Li, M. Cvijetic, Y. Takashima, and Z. Yu, “Evaluation of channel capacities of oam-based fso link with real-time wavefront correction by adaptive optics,” Opt. Express 22(25), 31337–31346 (2014). [CrossRef]  

55. T. Rakia, H.-C. Yang, M.-S. Alouini, and F. Gebali, “Outage analysis of practical fso/rf hybrid system with adaptive combining,” IEEE Commun. Lett. 19(8), 1366–1369 (2015). [CrossRef]  

56. X. Lin, Y. Rivenson, N. T. Yardimci, M. Veli, Y. Luo, M. Jarrahi, and A. Ozcan, “All-optical machine learning using diffractive deep neural networks,” Science 361(6406), 1004–1008 (2018). [CrossRef]  

57. M. Gu, X. Fang, H. Ren, and E. Goi, “Optically digitalized holography: A perspective for all-optical machine learning,” Engineering 5(3), 363–365 (2019). [CrossRef]  

58. E. Goi, X. Chen, Q. Zhang, B. P. Cumming, S. Schoenhardt, H. Luan, and M. Gu, “Nanoprinted high-neuron-density optical linear perceptrons performing near-infrared inference on a cmos chip,” Light: Sci. Appl. 10(1), 1–11 (2021). [CrossRef]  
