
Towards fine recognition of orbital angular momentum modes through smoke

Open Access

Abstract

Light beams carrying orbital angular momentum (OAM) have been continuously developed for free-space optical (FSO) communications. However, perturbations in the free-space link, such as rain, fog, and atmospheric turbulence, may reduce the transmission efficiency of this technique. If the FSO link passes through low-visibility smoke, the communication efficiency degrades further. Here, we use deep learning methods to recognize OAM eigenstates and superposition states under thick smoke conditions. In a smoke transmission link with a visibility of about 5 m to 6 m, the experimental recognition accuracy reaches 99.73% for OAM eigenstates and 99.21% for superposition states whose Bures distance is 0.05. Two 6 bit/pixel pictures were also successfully transmitted under these extreme smoke conditions. This work offers a robust and generalized proposal for FSO communications based on OAM modes and allows an increase of the communication capacity under low-visibility smoke conditions.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Owing to their large communication capacity and high transmission speed, free-space optical (FSO) communications, which use a laser beam as the carrier, have been widely shown to offer great prospects and compelling advantages [1,2]. FSO requires no optical fiber, which makes it eminently suitable for brown-field situations where installing fiber is impossible or impractical. Recently, there have been many works on FSO communications with structured beams [3,4], as they provide a new degree of freedom to encode information, thereby greatly increasing the system capacity and spectral efficiency within a finite spatial-bandwidth optical channel. In 1992, Allen et al. discovered the orbital angular momentum (OAM) of photons [5]. Since the topological charge $l$ of OAM can be any integer, OAM-carrying beams have countless orthogonal eigenstates, which gives them higher-dimensional characteristics. Therefore, OAM can be regarded as a new degree of freedom, which can be used as a data information carrier or encoder together with wavelength, polarization, and other multiplexing methods.

With the higher-dimensional characteristics of OAM states, the applications of FSO communications with OAM of photons (FSO-OAM) have been widely studied in the laboratory and in real urban environments [6–12]. In 2004, Gibson et al. first demonstrated the transmission of information encoded on OAM states of a light beam in free space [6]. Awaji et al. implemented OAM mode-division multiplexing at telecommunication wavelengths [8]. The total data rate of FSO communication links demonstrated in laboratory experiments has reached terabit-per-second levels and beyond by using OAM multiplexing [9]. With the development of FSO-OAM, how to identify OAM modes quickly and effectively has become an important problem [13,14]. Interferometry [15], mirror interference [16], and diffraction [17–21] are the commonly used methods for OAM identification. However, beam divergence, atmospheric turbulence, and beam misalignment may greatly reduce the identification efficiency and data rate [10].

With the development of computing power, deep learning methods have emerged as a versatile toolbox for tackling a variety of tasks [22,23]. In 2014, Krenn et al. implemented OAM multiplexing with 16 OAM modes over 3 km of urban atmosphere [24], using an artificial neural network as the OAM mode classifier with a recognition error of 1.7%. The authors later conducted another FSO-OAM experiment over a 143 km sea surface (oceanic atmospheric channel) [25], where an artificial neural network was again used to distinguish the OAM modes. Deep learning methods for OAM identification and multiplexing have also been applied to precisely recognizing OAM modes with fractional topological charges [26], classifying vector vortex beams [27], and recognizing OAM modes with reference misalignment [28]. Because the performance of FSO systems is very sensitive to the features of the communication link [29], the experiments above were carried out in clean communication links. A poor communication link leads to crosstalk between different modes, which greatly increases the error rate and degrades the correctness of the transmitted information.

To date, many proposals have been studied to deal with the issues of complex communication links, such as atmospheric turbulence [30–35], extreme weather with rain [36–38], fog [39–41], and desert conditions [42,43]. Smoke is also one of the complex FSO links encountered in urban and forest areas [44]. Smoke generally forms in the outdoor atmosphere from the combustion of substances such as carbon, glycerol, and household emissions, and it can be considered a visible gaseous suspension in which small dry solid particles remain dispersed in the atmosphere for a long time. In FSO communications under smoke conditions, beams are scattered by the tiny particles, resulting in serious degradation of image quality and serious attenuation of the beam power [45–47]. Therefore, research on OAM mode recognition in smoke communication links is important and valuable for FSO communications. In addition, FSO-OAM provides a reliable temporary communication method in unexpected situations such as fires and explosions.

To the best of our knowledge, no work has been reported in the literature on FSO communications with OAM modes in a smoke transmission link. In this work, we established a laboratory smoke chamber to investigate OAM multiplexing in a low-visibility environment. In the experiments, we quantitatively characterized the visibility of the smoke in real time. A deep learning method was then used to identify OAM eigenstates and superposition states whose Bures distance is close to 0.05, with accuracies of 99.73% and 99.21%, respectively. After recognizing the OAM modes, we successfully transmitted two pictures through the smoke chamber with a transmission accuracy higher than 97.88%. We then carefully analyzed the accuracy of the results under different smoke visibility sub-ranges. Our proposal is useful for conditions where smoke hinders FSO communications.

2. Method

2.1 OAM modes

Laguerre-Gaussian (LG) beams, which carry OAM, are used as the light source. In cylindrical coordinates $(r,\varphi,z)$, the complex field of an LG beam [5] can be written as:

$$\begin{aligned} \mathrm{LG}_{p,l}(r,\varphi ,z) = & \sqrt {\frac{2p!}{\pi (p + |l|)!}} \frac{1}{\omega(z)}\left[\frac{\sqrt 2 r}{\omega(z)}\right]^{|l|}L_p^{|l|}\left[\frac{2r^2}{\omega^2(z)}\right]e^{ - \frac{r^2}{\omega^2(z)}}\\ & \times e^{ - \frac{ikr^2}{2R(z)}}e^{{-}i(2p + |l| + 1)\mathrm{arctan}(\frac{z}{z_R})}e^{{-}il\varphi} \end{aligned}$$
where $p$ and $l$ are the radial and azimuthal indices of the LG beam, respectively. $k$ is the wave number, and $\omega (z)$ is defined as the radius at which the intensity of the Gaussian term falls to $1/{e^2}$ of its on-axis value. $R(z)$ denotes the radius of wavefront curvature. The Rayleigh range $z_R=k\omega _0^2/2$ is the propagation distance from the waist at which the beam area has doubled. $L_p^{|l|}(\cdot )$ stands for the associated Laguerre polynomial, and $(2p + |l| + 1)\mathrm {arctan}(z/{z_R})$ is the Gouy phase. $e^{-il\varphi }$ is the helical phase factor, which enables LG beams to carry OAM.
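The field in Eq. (1) can be evaluated numerically. Below is a minimal sketch, assuming an illustrative beam waist $w_0$ and grid size (neither is specified in the text; the 532 nm wavelength is the experimental value):

```python
import numpy as np
from scipy.special import genlaguerre, factorial

def lg_mode(p, l, r, phi, z, w0, wavelength=532e-9):
    """Complex field of an LG_{p,l} beam following Eq. (1); w0 is an illustrative waist."""
    k = 2 * np.pi / wavelength
    z_R = k * w0**2 / 2                                  # Rayleigh range
    w = w0 * np.sqrt(1 + (z / z_R)**2)                   # beam radius w(z)
    R = np.inf if z == 0 else z * (1 + (z_R / z)**2)     # wavefront curvature radius
    gouy = (2 * p + abs(l) + 1) * np.arctan2(z, z_R)     # Gouy phase
    amp = (np.sqrt(2 * factorial(p) / (np.pi * factorial(p + abs(l))))
           / w * (np.sqrt(2) * r / w)**abs(l)
           * genlaguerre(p, abs(l))(2 * r**2 / w**2)
           * np.exp(-r**2 / w**2))
    phase = np.exp(-1j * k * r**2 / (2 * R)) * np.exp(-1j * gouy) * np.exp(-1j * l * phi)
    return amp * phase

# Intensity pattern of LG_{1,1} at the waist on a small grid
x = np.linspace(-2e-3, 2e-3, 224)
X, Y = np.meshgrid(x, x)
field = lg_mode(p=1, l=1, r=np.hypot(X, Y), phi=np.arctan2(Y, X), z=0, w0=0.5e-3)
intensity = np.abs(field)**2
```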

To demonstrate the recognition accuracy of our method under smoke conditions, OAM superposition states are generated and studied in the experiments. As shown in Fig. 1(a), two mutually orthogonal LG modes $|l = \pm 1\rangle$ with radial index $p=1$ are chosen to construct a Bloch sphere. Each point on the surface of the sphere represents a superposition state $|\psi \rangle$ constructed from this set of bases:

$$|\psi\rangle = \cos \frac{\theta }{2}|1\rangle+ \sin \frac{\theta }{2}{e^{i\phi }}| - 1\rangle$$
where $\theta$ and $\phi$ are the polar and azimuthal angles on the Bloch sphere. We take 20 and 40 points uniformly from $[0,\pi ]$ and $[0,2\pi ]$ for $\theta$ and $\phi$, respectively. In this way, 800 points with an interval of 0.05$\pi$ evenly distributed on the spherical surface are obtained. Without loss of generality, we randomly select an area on the Bloch sphere containing nine superposition states. Figure 1(b) shows the intensity patterns of the nine OAM superposition states with $\theta =\{0.515\pi, 0.539\pi, 0.563\pi \}$ and $\phi =\{0.541\pi, 0.565\pi, 0.589\pi \}$. These intensity patterns are so similar that they are difficult to distinguish visually.
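As a sketch, the superposition fields of Eq. (2) can be built directly from the two basis modes; this reuses the lg_mode helper and the illustrative grid from the sketch above:

```python
import numpy as np

def superposition_field(theta, phi, X, Y, z=0.0, w0=0.5e-3):
    """|psi> = cos(theta/2)|l=+1> + sin(theta/2) e^{i phi} |l=-1>, Eq. (2),
    with p = 1 for both basis modes, built from the lg_mode helper above."""
    r, az = np.hypot(X, Y), np.arctan2(Y, X)
    plus = lg_mode(p=1, l=+1, r=r, phi=az, z=z, w0=w0)
    minus = lg_mode(p=1, l=-1, r=r, phi=az, z=z, w0=w0)
    return np.cos(theta / 2) * plus + np.sin(theta / 2) * np.exp(1j * phi) * minus

# The nine neighbouring states shown in Fig. 1(b)
thetas = np.array([0.515, 0.539, 0.563]) * np.pi
phis = np.array([0.541, 0.565, 0.589]) * np.pi
x = np.linspace(-2e-3, 2e-3, 224)
X, Y = np.meshgrid(x, x)
patterns = [np.abs(superposition_field(t, f, X, Y))**2 for t in thetas for f in phis]
```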


Fig. 1. (a) Schematic diagram of a Bloch sphere which is constructed with OAM states $| l = \pm 1\rangle$ and radial index $p=1$. The gray box schematically indicates the position distribution of the nine selected superposition states. (b) Intensity patterns of the nine OAM superposition states.


2.2 Experimental setup

The experiments are carried out in the laboratory, where smoke generated in a smoke chamber simulates extreme smoke situations in the real environment. Working under laboratory conditions brings several advantages. First, the experimental conditions are more controllable: the smoke concentration in the chamber can be adjusted through the switching time of the smoke generator, so the visibility range of the transmission channel can be accurately controlled. Second, an accurate visibility value for each recorded intensity pattern can be obtained by real-time visibility monitoring. Third, there are fewer uncertainties under laboratory conditions, which avoids many extraneous effects such as physical obstructions, scintillation, and geometric losses.

The experimental setup shown in Fig. 2 consists of two parts. The first is the main light beam part, in which the light beam carries OAM. The second is the subsidiary light beam part, which is used to accurately measure the visibility of the smoke. In the main light beam part, a laser (Laser 1) with wavelength 532 nm is used as the light source to generate OAM modes. After the beam from Laser 1 is collimated and expanded by lenses L1 and L2, it is projected onto the plane of a DMD (Digital Micromirror Device, DLP4500, 1140 $\times$ 912 diamond pixel array of $7.6\times 7.6\ {\mathrm{\mu} \mathrm{m}}^2$ mirrors). The DMD is an amplitude-only spatial light modulator, so we encode binary amplitude holograms [48] on the DMD to generate the desired OAM modes. The light beam carrying OAM then propagates through the smoke chamber. To study the impact of a smoke link on the quality of the transmitted OAM modes, we designed and implemented a $200\times 50\times 50\ \mathrm {cm}^3$ smoke chamber. After leaving the smoke chamber, the intensity profile of the beam is recorded by a CMOS (Complementary Metal Oxide Semiconductor, DaHeng, MER-132-43U3M/C-L, camera resolution: 1292 $\times$ 964 pixels) camera.
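The text does not reproduce the hologram computation. One common binary-amplitude encoding compatible with a DMD is a Lee-type hologram, in the spirit of Ref. [48]; the sketch below is such an encoding with a hypothetical grating period, and the exact scheme used in the experiment may differ:

```python
import numpy as np

def lee_binary_hologram(target_field, grating_period_px=8):
    """Lee-type binary amplitude hologram for a target complex field (sketch only;
    the encoding of Ref. [48] may differ in detail)."""
    amp = np.abs(target_field) / np.abs(target_field).max()   # normalized amplitude
    phase = np.angle(target_field)
    ny, nx = target_field.shape
    carrier = 2 * np.pi * np.arange(nx)[None, :] / grating_period_px  # carrier along one DMD axis
    # Bias the carrier with the target phase; gate its duty cycle with the amplitude
    w = np.arcsin(amp) / np.pi
    hologram = 0.5 * (1 + np.sign(np.cos(carrier + phase) - np.cos(np.pi * w)))
    return hologram.astype(np.uint8)                           # 1 = mirror "on", 0 = "off"

# Example: hologram for the LG_{1,1} field computed in the earlier sketch
holo = lee_binary_hologram(field)
```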


Fig. 2. Experimental setup. Laser 1: semiconductor laser with 532 nm wavelength; L1: 30 mm lens; L2: 250 mm lens; M1: mirror; L3: 250 mm lens; L4: 250 mm lens; DMD: digital micromirror device; CMOS: CMOS camera; Laser 2: semiconductor laser with 532 nm wavelength; BS: beam splitter; D1, D2: detectors; P: PC oscilloscopes; PC: computer.


In the subsidiary light beam part, another laser (Laser 2, $\lambda =532$ nm) is used as the light source of the smoke monitoring device. The beam from Laser 2 is first divided into two parts with a beam splitter (BS). One beam is directly received by a detector (D1, ThorLabs, PDA36A-EC) without passing through the smoke chamber. The other beam, which passes through the smoke chamber, is received by detector D2. Both recorded signals are sampled with a PC oscilloscope (P, PicoScope 3205D) and analyzed on the computer. With this set of devices, we can obtain the optical transmittance $T=I_2/I_1$ of the smoke chamber in real time. Here, $I_2$ and $I_1$ are the output and input beam intensities of the smoke chamber, obtained by detectors D2 and D1, respectively.

In the experiments, we use a smoke generator (LingDing, YWQ-180) to generate smoke as follows: (1) the smoke generator is placed inside the smoke chamber and powered on by remote control; (2) the smoke agent in the generator is atomized into micron-sized particles, which form a thick white smoke throughout the chamber. The smoke particles are small and remain suspended in the air, and the smoke concentration decreases with time once the generator is stopped. The smoke may degrade the incident light mode because of scattering from the suspended particulates.

To quantitatively characterize the visibility of the smoke in the experiments, we use the beam from Laser 2 in Fig. 2 to measure the varying smoke concentration in real time. The visibility of the smoke is defined from the signal attenuation of a light beam with wavelength 532 nm through the following relationship [40]:

$$V ={-} \frac{{10\log {T_{th}}}}{\alpha }\ \ \mathrm{(km)},$$
$$\alpha ={-} \frac{{10\log T}}{{4.343L}}\ \ \mathrm{(dB/km)},$$
where ${T_{th}}$ is the visual threshold of the atmospheric propagation path; according to Koschmieder's law, ${T_{th}} = 2\%$. $\alpha$ is the signal attenuation coefficient, $T$ is the transmittance of the optical signal at 532 nm, and $L$ is the link length, with $L = 0.002$ km in our experiment. Figure 3 shows the experimentally measured visibility and signal attenuation coefficient of the smoke as functions of time. Over a period of about half an hour, the smoke inside the chamber gradually dissipates; the visibility and attenuation coefficient remain stable only during the first few minutes.
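Equations (3) and (4) translate the two detector readings into a visibility value. A minimal sketch, using $T_{th}=2\%$ and $L=0.002$ km from the text (the example transmittance is illustrative):

```python
import numpy as np

def smoke_visibility(I1, I2, L_km=0.002, T_th=0.02):
    """Visibility from Eqs. (3)-(4): T = I2/I1 measured by detectors D1 and D2,
    link length L = 0.002 km, Koschmieder threshold T_th = 2%."""
    T = I2 / I1                                   # optical transmittance of the chamber
    alpha = -10 * np.log10(T) / (4.343 * L_km)    # attenuation coefficient, Eq. (4)
    V_km = -10 * np.log10(T_th) / alpha           # visibility in km, Eq. (3)
    return V_km * 1000, alpha                     # visibility returned in metres

# Illustrative example: a transmittance of ~0.2% over the 2 m chamber gives V of about 5.5 m
V_m, alpha = smoke_visibility(I1=1.0, I2=0.002)
```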


Fig. 3. The visibility and attenuation as a function of time in the smoke chamber.


3. Recognition of OAM modes

3.1 Dataset generation and processing

In the dataset generation process, the visibility of the smoke is always kept in the range $V \in [5,6]$ m, which corresponds to an extremely low-visibility smoke condition. In the experiments, the accurate visibility is obtained from the subsidiary light beam part, and the smoke generator is operated until the visibility reaches the desired range. Once the visibility monitored by the subsidiary beam falls into the set interval, the CMOS camera in the main light beam part begins to record the intensity patterns of the OAM modes. First, we measure the intensity patterns of the OAM eigenstates. Figures 4(a1)-(a8) show some examples of these intensity patterns. Due to the influence of the smoke, the intensity patterns at the end of the smoke chamber become visually degraded. We obtain 4000 pictures in total, with 500 different intensity patterns for each OAM eigenstate $l$. The dataset is divided into a training set and a testing set in a 4:1 ratio for further processing.


Fig. 4. The intensity patterns recorded by CMOS camera. All these patterns are sampled when the visibility of the smoke is limited in range $V \in [5,6]$ m. (a1)-(a8) Examples of the experimental intensity patterns of OAM eigenstates $p = 1$, ${l} \in \{1, 2, \ldots, 8\}$. (b1)-(b8) Examples of the experimental intensity patterns for OAM superposition states $\{\mathrm {mode1}, \mathrm {mode2}, \ldots, \mathrm {mode8}\}$ defined in Fig. 1(b).


The OAM superposition states used in the experiments are $\{\mathrm {mode1}, \mathrm {mode2}, \ldots, \mathrm {mode8}\}$, shown in Fig. 1(b). Here, we use the Bures distance [49] to quantitatively measure the distance between these eight OAM modes. The distance between adjacent OAM modes is 0.05, which means their intensity patterns are very similar and difficult to recognize. Figures 4(b1)-(b8) show some intensity patterns of the eight OAM superposition states; they cannot easily be distinguished visually. In total, 2800 intensity patterns of the OAM superposition states are collected as the dataset: 2400 images are used as the training set and 400 images as the testing set.
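For reference, the Bures distance between two pure states of Eq. (2) can be computed from their overlap. The sketch below uses one common convention, $D_B=\sqrt{2(1-|\langle\psi_1|\psi_2\rangle|)}$; since the text does not spell out the convention or which neighbouring pair yields the quoted 0.05, the numerical output of this sketch is only illustrative:

```python
import numpy as np

def bures_distance(theta1, phi1, theta2, phi2):
    """Bures distance between two pure states of Eq. (2), using the convention
    D_B = sqrt(2(1 - |<psi1|psi2>|)); other conventions give slightly different values."""
    overlap = (np.cos(theta1 / 2) * np.cos(theta2 / 2)
               + np.sin(theta1 / 2) * np.sin(theta2 / 2) * np.exp(1j * (phi2 - phi1)))
    return np.sqrt(2 * (1 - np.abs(overlap)))

# Distance between two neighbouring states of the grid in Fig. 1(b)
d = bures_distance(0.515 * np.pi, 0.541 * np.pi, 0.539 * np.pi, 0.541 * np.pi)
```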

To classify different OAM modes in a robust way, we use an end-to-end neural network (DenseNet-121) to realize OAM recognition under the extremely low-visibility smoke conditions. The DenseNet-121 comprises convolutional layers, pooling layers, a fully connected layer, and four dense blocks, as shown in Fig. 5. The intensity patterns obtained by the camera are cropped and resized to $224\times 224$ pixels and used as the input of the neural network, with their topological charges as labels. The initial convolution has a $7\times 7$ kernel with stride 2, followed by a $3\times 3$ max pooling layer with stride 2. After that, four dense blocks with 6, 12, 24, and 16 convolution units, respectively, are implemented. Each of the first three dense blocks is followed by a transition layer comprising a $1\times 1$ convolution layer and $2\times 2$ max pooling. After the last dense block, a global average pooling layer and a fully connected layer are implemented. The output of the DenseNet-121 is the predicted category of the input pattern.
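A minimal sketch of such a classifier, built from the stock DenseNet-121 implementation in Keras (the framework named in Section 3.2); the single input channel, optimizer, and learning rate are assumptions rather than values given in the paper:

```python
import tensorflow as tf

num_classes = 8  # 8 eigenstates (l = 1..8) or 8 superposition states

# Stock DenseNet-121 backbone, trained from scratch on the recorded intensity patterns
model = tf.keras.applications.DenseNet121(
    weights=None,                  # no pre-trained weights (assumption)
    input_shape=(224, 224, 1),     # cropped/resized grayscale intensity patterns (assumption)
    classes=num_classes,
)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),   # illustrative optimizer settings
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```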


Fig. 5. The structure of DenseNet-121 for OAM modes recognition.


3.2 Classification accuracy

For the OAM eigenstates, a total of 4000 experimental intensity patterns, labeled with the values of their corresponding topological charge $l$, are used as the dataset. All 4000 samples are randomly shuffled; the first 3200 samples are used as the training set, and the remaining 800 samples never participate in the training process. We obtain an accuracy of $\eta _e=99.73\%$ for the smoke visibility range of 5 m to 6 m, which means that our method can effectively classify different OAM eigenstates in such extreme smoke conditions. For the OAM superposition states, 2400 and 400 samples are used for training and testing, respectively, and we obtain a classification accuracy of $\eta _s=99.21\%$. Figure 6 shows the confusion matrices for the OAM eigenstates and superposition states. The program was implemented with the Keras framework on Python 3.6 and accelerated by a pair of GPUs (NVIDIA GTX 1080 Ti).
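A sketch of this training and evaluation procedure, reusing the model from the previous sketch; the arrays `images` and `labels` are placeholders, and the epoch count and batch size are assumptions, not the experimental settings:

```python
import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Placeholders for the 4000 recorded eigenstate patterns (500 per topological charge l)
images = np.random.rand(4000, 224, 224, 1).astype("float32")
labels = np.repeat(np.arange(8), 500)

# Random shuffle and 4:1 split: 3200 training samples, 800 held-out testing samples
x_train, x_test, y_train, y_test = train_test_split(images, labels, test_size=0.2, shuffle=True)

model.fit(x_train, y_train, epochs=50, batch_size=32,   # assumed hyper-parameters
          validation_data=(x_test, y_test))

y_pred = np.argmax(model.predict(x_test), axis=1)
eta_e = np.mean(y_pred == y_test)                       # reported as 99.73% in the experiment
cm = confusion_matrix(y_test, y_pred)                   # Fig. 6(a)-style confusion matrix
```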


Fig. 6. (a) Confusion matrix for the recognition of OAM eigenstates ${l} \in \{1, 2, \ldots, 8\}$. (b) Confusion matrix for the recognition of OAM superposition states: $\{\mathrm {mode1}, \mathrm {mode2}, \ldots, \mathrm {mode8}\}$ defined in Fig. 1(b).


4. OAM multiplexing under smoke conditions

4.1 Image transmission under smoke conditions

To further demonstrate the performance of the proposed method, we experimentally implement an OAM multiplexing system. Using the same experimental setup discussed above, the system simulates a 2 m FSO-OAM communication environment in which the transmission link is filled with thick smoke whose visibility varies in the range of 5 m to 6 m. Information is encoded and decoded by combining two elements of the OAM mode set $\{\mathrm {mode1}, \mathrm {mode2}, \ldots, \mathrm {mode8}\}$; in the following OAM multiplexing experiments, these 8 OAM modes are the first 8 OAM superposition states in Fig. 1(b). Figure 7 shows the details of the encoding and decoding rules for the 64 grayscale settings. Representing the gray value of each pixel in the image by a combination of two OAM modes yields a 6-bit image with 64 gray levels.
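A sketch of this encoding and decoding rule; the particular ordering of the mode pair within a gray value (row-major with respect to Fig. 7) is an assumption:

```python
import numpy as np

def encode_pixel(gray):          # gray value in [0, 63]
    """Map a 6-bit gray value to an ordered pair of mode indices, each in 0..7."""
    return gray // 8, gray % 8

def decode_pixel(mode_a, mode_b):
    """Inverse mapping: ordered mode pair back to the 6-bit gray value."""
    return 8 * mode_a + mode_b

image = np.random.randint(0, 64, size=(64, 64))            # stand-in for a 6-bit picture
symbol_stream = [encode_pixel(g) for g in image.ravel()]   # 2 x 4096 = 8192 OAM patterns
recovered = np.array([decode_pixel(a, b) for a, b in symbol_stream]).reshape(64, 64)
assert np.array_equal(image, recovered)
```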


Fig. 7. The encoding and decoding scheme with pairwise combined OAM modes. The horizontal and vertical axes are the 8 OAM modes. The pairwise combination of the 8 OAM modes results in 64 pixel values, corresponding to 64 grayscale settings.


According to the gray value of each pixel in the picture to be transmitted, we choose the pairwise combined OAM modes following the encoding scheme in Fig. 7. The two OAM states representing the gray value of a pixel are generated consecutively at the transmitting end. After the OAM modes pass through the smoke chamber, the degraded OAM modes are detected at the receiving end. The recorded intensity patterns are then fed into the trained neural network to predict the mode information, and the pairwise combined OAM modes are decoded back into pixel gray values.

To demonstrate the feasibility and robustness of our proposed scheme for FSO-OAM, we selected two typical pictures for the OAM multiplexing experiments. Figures 8(a1) and (b1) are the original pictures of a landmark building in Xi’an and a portrait (Ludwig Boltzmann), respectively. Both pictures are $64\times 64$ pixels with 64 gray levels per pixel; therefore, 8192 intensity patterns are recorded for each picture. None of these intensity patterns were used in training the neural network. Figures 8(a2) and (b2) show the decoded pictures after degradation by the smoke. Although a few pixels in the decoded pictures are slightly distorted, the quality of the transmitted pictures is visually good.


Fig. 8. (a1), (b1) The original landmark and portrait pictures to be transmitted through the smoke chamber. Both are 6 bit/pixel images with an image size of $64\times 64$ pixels. (a2), (b2) The received landmark and portrait pictures with $64\times 64$ pixels and 6 bit/pixel grayscale.


Compared with the ground truth, $97.88\%$ of the pixel values in the decoded landmark picture are the same as in the original picture. Defining the recognition accuracy $\eta _p$ as the ratio of correctly detected modes to all detected modes, the recognition accuracy of the landmark picture is $\eta _{p1}=97.88\%$, and that of the portrait is $\eta _{p2}=98.99\%$. Since the gray values of a picture are encoded by pairwise combinations of OAM modes, the recognition accuracy of a picture should approach $\eta _p=\eta _s^2=98.43\%$. The recognition accuracies $\eta _{p1}$ and $\eta _{p2}$ of the two pictures differ slightly from $\eta _p$ because the training dataset and the datasets of the two pictures have different distributions over a wide range of visibility. Figure 9 shows the distribution proportions of the training dataset and the picture datasets in 5 visibility sub-ranges: $V_1\in [5.0, 5.2)$ m, $V_2\in [5.2, 5.4)$ m, $V_3\in [5.4, 5.6)$ m, $V_4\in [5.6, 5.8)$ m, $V_5\in [5.8, 6.0]$ m. It is clear that the distribution proportions of the portrait and landmark datasets differ from that of the training dataset. This dataset difference is a result of the varying smoke concentration during the data collection process.


Fig. 9. The distribution proportion for the training, portrait and landmark datasets in 5 visibility sub-ranges.


4.2 Further analysis

To further analyze the performance of the proposed method, we discuss the recognition accuracy when data from only one sub-range are used as the training dataset. In each of the visibility sub-ranges in Fig. 9, 400 samples are randomly selected; of these, 300 samples are used for training and 100 for testing. Different from the training steps in the previous sections, the 300 samples from only one of the 5 sub-ranges are used as the training dataset, while the 100 samples in each sub-range are used as testing datasets. Figure 10(a) shows one such case, in which 300 samples from visibility sub-range $V_4\in [5.6, 5.8)$ m are chosen to train the neural network, and 100 samples from each of the 5 visibility sub-ranges are used to evaluate the recognition accuracy. Since the training dataset is chosen from sub-range $V_4$, the testing data from sub-range $V_4$ yield the highest accuracy, which reaches 1. However, the farther a testing sub-range is from $V_4$, the lower its accuracy; in Fig. 10(a), the testing dataset from visibility sub-range $V_1$ shows the lowest accuracy. In Fig. 10(b), 300 samples from visibility sub-range $V_5$ are chosen to train the neural network, and the accuracy again decreases as the testing sub-range moves farther from $V_5$.
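A sketch of this sub-range protocol, reusing the classifier from the earlier sketch; the placeholder arrays stand in for the 400 recorded samples per sub-range, and the training hyper-parameters are assumed:

```python
import numpy as np

bins = ["V1", "V2", "V3", "V4", "V5"]              # the five visibility sub-ranges of Fig. 9
# Placeholders for the 400 randomly selected samples and labels per sub-range
samples = {v: np.random.rand(400, 224, 224, 1).astype("float32") for v in bins}
labels = {v: np.random.randint(0, 8, size=400) for v in bins}

train_bin = "V4"                                   # the case shown in Fig. 10(a)
model.fit(samples[train_bin][:300], labels[train_bin][:300],
          epochs=50, batch_size=32)                # 300 training samples, assumed hyper-parameters

accuracy = {}
for v in bins:                                     # test 100 held-out samples from every sub-range
    y_pred = np.argmax(model.predict(samples[v][300:]), axis=1)
    accuracy[v] = np.mean(y_pred == labels[v][300:])
```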


Fig. 10. Recognition accuracy for different visibility sub-ranges. (a) The visibility sub-range of the training dataset is $V_4\in [5.6, 5.8)$ m, and the testing dataset is the data of all the 5 visibility sub-ranges. (b) The visibility sub-range of the training set is $V_5\in [5.8, 6.0]$ m, and the testing dataset is the data of all the 5 visibility sub-ranges.


The analysis above shows that the highest recognition accuracy is obtained when the training dataset and testing dataset are in the same visibility sub-range. When they do not come from the same visibility sub-range, the accuracy declines. When the visibility gap between the training and testing datasets is narrow, the testing data can still be recognized owing to the robustness of the neural network. This analysis also explains why the recognition accuracies $\eta _{p1}$ and $\eta _{p2}$ of the two pictures differ slightly from $\eta _p$: the number of samples is not uniformly distributed across the visibility sub-ranges, and the distribution proportions of each OAM multiplexing mission are different.

5. Conclusion

In conclusion, we experimentally implemented the recognition of OAM modes in a thick smoke environment. A deep learning method was shown to be robust and generalized for FSO-OAM missions when the transmission channel is filled with smoke of visibility $V\in [5, 6]$ m. The experimental recognition accuracy is 99.73% for OAM eigenstates and 99.21% for OAM superposition states whose Bures distance is 0.05. We also successfully transmitted two different pictures in the thick smoke environment and discussed the OAM multiplexing performance by dividing the visibility range into multiple sub-ranges. Our results confirm that, although extreme smoke affects the performance of FSO-OAM, OAM mode recognition in a very low visibility smoke environment can still be realized. The proposed scheme may be applied to realize information transmission under extremely low visibility smoke conditions, such as temporary communication at a fire scene.

Funding

National Natural Science Foundation of China (12074307); Ministry of Science and Technology of the People's Republic of China (2016YFA0301404); Fundamental Research Funds for the Central Universities.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. H. A. Willebrand and B. S. Ghuman, “Fiber optics without fiber,” IEEE Spectrum 38(8), 40–45 (2001). [CrossRef]  

2. A. Trichili, M. A. Cox, B. S. Ooi, and M.-S. Alouini, “Roadmap to free space optics,” J. Opt. Soc. Am. B 37(11), A184–A201 (2020). [CrossRef]  

3. H. Rubinsztein-Dunlop, A. Forbes, M. V. Berry, M. R. Dennis, D. L. Andrews, M. Mansuripur, C. Denz, C. Alpmann, P. Banzer, T. Bauer, E. Karimi, L. Marrucci, M. Padgett, M. Ritsch-Marte, N. M. Litchinitser, N. P. Bigelow, C. Rosales-Guzmán, A. Belmonte, J. P. Torres, T. W. Neely, M. Baker, R. Gordon, A. B. Stilgoe, J. Romero, A. G. White, R. Fickler, A. E. Willner, G. Xie, B. McMorran, and A. M. Weiner, “Roadmap on structured light,” J. Opt. 19(1), 013001 (2017). [CrossRef]  

4. A. Forbes, M. de Oliveira, and M. R. Dennis, “Structured light,” Nat. Photonics 15(4), 253–262 (2021). [CrossRef]  

5. L. Allen, M. W. Beijersbergen, R. Spreeuw, and J. Woerdman, “Orbital angular momentum of light and the transformation of laguerre-gaussian laser modes,” Phys. Rev. A 45(11), 8185–8189 (1992). [CrossRef]  

6. G. Gibson, J. Courtial, M. J. Padgett, M. Vasnetsov, V. Pas’ko, S. M. Barnett, and S. Franke-Arnold, “Free-space information transfer using light beams carrying orbital angular momentum,” Opt. Express 12(22), 5448–5456 (2004). [CrossRef]  

7. J. Lin, X.-C. Yuan, S. Tao, and R. Burge, “Multiplexing free-space optical signals using superimposed collinear orbital angular momentum states,” Appl. Opt. 46(21), 4680–4685 (2007). [CrossRef]  

8. Y. Awaji, N. Wada, and Y. Toda, “Demonstration of spatial mode division multiplexing using laguerre-gaussian mode beam in telecom-wavelength,” in 2010 23rd Annual Meeting of the IEEE Photonics Society, (IEEE, 2010), pp. 551–552.

9. J. Wang, J.-Y. Yang, I. M. Fazal, N. Ahmed, Y. Yan, H. Huang, Y. Ren, Y. Yue, S. Dolinar, M. Tur, and A. E. Willner, “Terabit free-space data transmission employing orbital angular momentum multiplexing,” Nat. Photonics 6(7), 488–496 (2012). [CrossRef]  

10. A. E. Willner, H. Huang, Y. Yan, Y. Ren, N. Ahmed, G. Xie, C. Bao, L. Li, Y. Cao, Z. Zhao, J. Wang, M. P. J. Lavery, M. Tur, S. Ramachandran, A. F. Molisch, N. Ashrafi, and S. Ashrafi, “Optical communications using orbital angular momentum beams,” Adv. Opt. Photonics 7(1), 66–106 (2015). [CrossRef]  

11. M. P. Lavery, C. Peuntinger, K. Günthner, P. Banzer, D. Elser, R. W. Boyd, M. J. Padgett, C. Marquardt, and G. Leuchs, “Free-space propagation of high-dimensional structured optical fields in an urban environment,” Sci. Adv. 3(10), e1700552 (2017). [CrossRef]  

12. B. Paroli, M. Siano, and M. Potenza, “Dense-code free space transmission by local demultiplexing optical states of a composed vortex,” Opt. Express 29(10), 14412–14424 (2021). [CrossRef]  

13. A. Forbes, A. Dudley, and M. McLaren, “Creation and detection of optical modes with spatial light modulators,” Adv. Opt. Photonics 8(2), 200–227 (2016). [CrossRef]  

14. Y. Bai, H. Lv, X. Fu, and Y. Yang, “Vortex beam: generation and detection of orbital angular momentum,” Chin. Opt. Lett. 20(1), 012601 (2022). [CrossRef]  

15. J. Leach, M. J. Padgett, S. M. Barnett, S. Franke-Arnold, and J. Courtial, “Measuring the orbital angular momentum of a single photon,” Phys. Rev. Lett. 88(25), 257901 (2002). [CrossRef]  

16. M. Harris, C. Hill, P. Tapster, and J. Vaughan, “Laser modes with helical wave fronts,” Phys. Rev. A 49(4), 3119–3122 (1994). [CrossRef]  

17. J. Hickmann, E. Fonseca, W. Soares, and S. Chávez-Cerda, “Unveiling a truncated optical lattice associated with a triangular aperture using light’s orbital angular momentum,” Phys. Rev. Lett. 105(5), 053904 (2010). [CrossRef]  

18. L. E. de Araujo and M. E. Anderson, “Measuring vortex charge with a triangular aperture,” Opt. Lett. 36(6), 787–789 (2011). [CrossRef]  

19. L. Yongxin, T. Hua, P. Jixiong, and L. Baida, “Detecting the topological charge of vortex beams using an annular triangle aperture,” Opt. Laser Technol. 43(7), 1233–1236 (2011). [CrossRef]  

20. R. Liu, J. Long, F. Wang, Y. Wang, P. Zhang, H. Gao, and F. Li, “Characterizing the phase profile of a vortex beam with angular-double-slit interference,” J. Opt. 15(12), 125712 (2013). [CrossRef]  

21. Y. Yang, Q. Zhao, L. Liu, Y. Liu, C. Rosales-Guzmán, and C.-w. Qiu, “Manipulation of orbital-angular-momentum spectrum using pinhole plates,” Phys. Rev. Appl. 12(6), 064007 (2019). [CrossRef]  

22. A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” Optica 4(9), 1117–1125 (2017). [CrossRef]  

23. X. Lin, Y. Rivenson, N. T. Yardimci, M. Veli, Y. Luo, M. Jarrahi, and A. Ozcan, “All-optical machine learning using diffractive deep neural networks,” Science 361(6406), 1004–1008 (2018). [CrossRef]  

24. M. Krenn, R. Fickler, M. Fink, J. Handsteiner, M. Malik, T. Scheidl, R. Ursin, and A. Zeilinger, “Communication with spatially modulated light through turbulent air across vienna,” New J. Phys. 16(11), 113028 (2014). [CrossRef]  

25. M. Krenn, J. Handsteiner, M. Fink, R. Fickler, R. Ursin, M. Malik, and A. Zeilinger, “Twisted light transmission over 143 km,” Proc. Natl. Acad. Sci. 113(48), 13648–13653 (2016). [CrossRef]  

26. Z. Liu, S. Yan, H. Liu, and X. Chen, “Superhigh-resolution recognition of optical vortex modes assisted by a deep-learning method,” Phys. Rev. Lett. 123(18), 183902 (2019). [CrossRef]  

27. T. Giordani, A. Suprano, E. Polino, F. Acanfora, L. Innocenti, A. Ferraro, M. Paternostro, N. Spagnolo, and F. Sciarrino, “Machine learning-based classification of vector vortex beams,” Phys. Rev. Lett. 124(16), 160401 (2020). [CrossRef]  

28. X. Wang, Y. Qian, J. Zhang, G. Ma, S. Zhao, R. Liu, H. Li, P. Zhang, H. Gao, F. Huang, and F. Li, “Learning to recognize misaligned hyperfine orbital angular momentum modes,” Photonics Res. 9(4), B81–B86 (2021). [CrossRef]  

29. A. Trichili, K.-H. Park, M. Zghal, B. S. Ooi, and M.-S. Alouini, “Communicating using spatial mode multiplexing: Potentials, challenges, and perspectives,” IEEE Commun. Surv. Tutorials 21(4), 3175–3203 (2019). [CrossRef]  

30. S. Li, S. Chen, C. Gao, A. E. Willner, and J. Wang, “Atmospheric turbulence compensation in orbital angular momentum communications: Advances and perspectives,” Opt. Commun. 408, 68–81 (2018). [CrossRef]  

31. P. Wang, J. Liu, L. Sheng, Y. He, W. Xiong, Z. Huang, X. Zhou, Y. Li, S. Chen, X. Zhang, and D. Fan, “Convolutional neural network-assisted optical orbital angular momentum recognition and communication,” IEEE Access 7, 162025–162035 (2019). [CrossRef]  

32. Z. Wang, M. I. Dedo, K. Guo, K. Zhou, F. Shen, Y. Sun, S. Liu, and Z. Guo, “Efficient recognition of the propagated orbital angular momentum modes in turbulences with the convolutional neural network,” IEEE Photonics J. 11(3), 1–14 (2019). [CrossRef]  

33. Y. Hao, L. Zhao, T. Huang, Y. Wu, T. Jiang, Z. Wei, D. Deng, A.-P. Luo, and H. Liu, “High-accuracy recognition of orbital angular momentum modes propagated in atmospheric turbulences based on deep learning,” IEEE Access 8, 159542–159551 (2020). [CrossRef]  

34. M. Dong, C. Zhao, Y. Cai, and Y. Yang, “Partially coherent vortex beams: Fundamentals and applications,” Sci. China Phys. Mech. Astron. 64(2), 224201 (2021). [CrossRef]  

35. M. A. Cox, T. Celik, Y. Genga, and A. V. Drozdov, “Interferometric orbital angular momentum mode detection in turbulence with deep learning,” Appl. Opt. 61(7), D1–D6 (2022). [CrossRef]  

36. B. Cochenour, K. Morgan, K. Miller, E. Johnson, K. Dunn, and L. Mullen, “Propagation of modulated optical beams carrying orbital angular momentum in turbid water,” Appl. Opt. 55(31), C34–C38 (2016). [CrossRef]  

37. A. E. Willner, Z. Zhao, Y. Ren, L. Li, G. Xie, H. Song, C. Liu, R. Zhang, C. Bao, and K. Pang, “Underwater optical communications using orbital angular momentum-based spatial division multiplexing,” Opt. Commun. 408, 21–25 (2018). [CrossRef]  

38. G. G. Soni, A. Tripathi, A. Mandloi, and S. Gupta, “Compensating rain induced impairments in terrestrial fso links using aperture averaging and receiver diversity,” Opt. Quantum Electron. 51(7), 244 (2019). [CrossRef]  

39. R. M. Pierce, J. Ramaprasad, and E. C. Eisenberg, “Optical attenuation in fog and clouds,” in Optical Wireless Communications IV, vol. 4530 (International Society for Optics and Photonics, 2001), pp. 58–71.

40. M. Ijaz, Z. Ghassemlooy, J. Pesek, O. Fiser, H. Le Minh, and E. Bentley, “Modeling of fog and smoke attenuation in free space optical communications link under controlled laboratory conditions,” J. Lightwave Technol. 31(11), 1720–1726 (2013). [CrossRef]  

41. D. Arora, S. Kaur, and A. Rajan, “Performance evaluation under different fog conditions for fso link,” in 2020 8th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO), (IEEE, 2020), pp. 412–416.

42. M. A. Esmail, H. Fathallah, and M.-S. Alouini, “An experimental study of fso link performance in desert environment,” IEEE Commun. Lett. 20(9), 1888–1891 (2016). [CrossRef]  

43. A. Ragheb, W. Saif, A. Trichili, I. Ashry, M. A. Esmail, M. Altamimi, A. Almaiman, E. Altubaishi, B. S. Ooi, M.-S. Alouini, and S. Alshebeili, “Identifying structured light modes in a desert environment using machine learning algorithms,” Opt. Express 28(7), 9753–9763 (2020). [CrossRef]  

44. A. Malik and P. Singh, “Free space optics: current applications and future challenges,” Int. J. Opt. 2015, 1–7 (2015). [CrossRef]  

45. A. Maranghides, W. Mell, W. D. Walton, E. L. Johnsson, and N. P. Bryner, “Free space optics communication system testing in smoke and fire environments,” National Communication System, NISTIR7317 (2006).

46. M. Ijaz, Z. Ghassemlooy, A. Gholami, and X. Tang, “Smoke attenuation in free space optical communication under laboratory controlled conditions,” in 7’th International Symposium on Telecommunications (IST’2014), (IEEE, 2014), pp. 758–762.

47. P. Lin, T. Wang, W. Ma, Q. Yang, and Z. Liu, “Transmission characteristics of 1.55 and 2.04 μm laser carriers in a simulated smoke channel based on an actively mode-locked fiber laser,” Opt. Express 28(26), 39216–39226 (2020). [CrossRef]  

48. M. Mirhosseini, O. S. Magana-Loaiza, C. Chen, B. Rodenburg, M. Malik, and R. W. Boyd, “Rapid generation of light beams carrying orbital angular momentum,” Opt. Express 21(25), 30196–30203 (2013). [CrossRef]  

49. J. Dajka, J. Łuczka, and P. Hänggi, “Distance between quantum states in the presence of initial qubit-environment correlations: A comparative study,” Phys. Rev. A 84(3), 032120 (2011). [CrossRef]  
