
LED-based visible light communication for color image and audio transmission utilizing orbital angular momentum superposition modes

Open Access

Abstract

Twisted light has recently gained enormous interest in communication systems ranging from fiber-optic to radio-frequency regimes. Thus far, however, the light-emitting diode (LED) has not been exploited for orbital angular momentum (OAM) encoding to transmit data, which could open up an opportunity towards a new model of secure indoor communication. Here, by multiplexing and demultiplexing red, green and blue (RGB) twisted beams derived from a white light-emitting diode, we build a new visible light communication system with the RGB colors serving as independent channels and with OAM superposition modes encoding the information. At the sender, by means of theta-modulation, we use a computer-controlled spatial light modulator to generate two-dimensional holographic gratings that encode a large alphabet of 16 different OAM superposition modes in each RGB channel. At the receiver, based on supervised machine learning, we develop a pattern recognition method to identify the characteristic mode patterns recorded by CCD cameras, thereby decoding the information. We succeed in demonstrating the transmission of color images and a piece of audio over a 6-meter indoor link with a fidelity over 96%.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

It was Allen and coworkers who first recognized that a light beam with a helical phase front of exp(iℓϕ) carries an orbital angular momentum (OAM) of ℓħ per photon, where ℓ is an integer, ϕ is the azimuthal angle, and ħ is the reduced Planck constant [1]. Recent years have witnessed a growing interest in applications of OAM ranging from optical micromanipulation and optical imaging to quantum information science [2–6]. Another particularly promising application is optical communication that employs OAM as the information carrier [7, 8]. This technique was proposed and developed under the circumstances that the bandwidth of laser communication proves insufficient to satisfy the exponentially growing demands of the foreseeable future. Light possesses several different degrees of freedom, e.g., time, wavelength, and polarization state, all of which can be exploited to increase the channel capacity. Another potential solution to eventually cope with bandwidth issues is space-division multiplexing (SDM) [9], and in particular, mode-division multiplexing (MDM) [10]. In 2004, Gibson et al. first demonstrated the transfer of data encoded by OAM modes to enhance the security of data transmission and its resistance to eavesdropping [11]. In the radio-frequency (RF) regime, the generation and detection of RF OAM beams were proposed so that they could be chosen as data carriers in communication [12–15], as the longer wavelength of a radio carrier wave is less sensitive to various channel conditions but suffers more divergence compared with an optical beam [16]. Tamburini et al. reported a Gaussian beam and an OAM beam that were placed in parallel, transmitted by a Yagi-Uda antenna and a spiral parabolic antenna at 2.4 GHz, and then received by a pair of antennas [17]. Also, Yan et al. demonstrated a 2.5 m free-space link with a total capacity of 32 Gbit/s at a carrier frequency of 28 GHz with eight coaxially propagating RF OAM beams [18, 19]. The practical implementation of OAM multiplexing systems requires new mode multiplexing and demultiplexing technologies in order to achieve high performance. In 2014, Wang et al. achieved free-space optical communication with a bit rate of 1.036 Pbit/s and a spectral efficiency of 112.6 bit/s/Hz based on the multiplexing of 26 OAM modes [19]. Also, a number of implementations of OAM-multiplexed transmission over relevant distances were demonstrated both in optical fibers [20, 21] and in free space [22]. Particularly important progress was made in 2014 by Krenn and coworkers, who successfully implemented a 3 km free-space communication with spatially modulated light through turbulence [23]. They further extended their scheme to work over a 143 km free-space link established between two islands [24]. Very recently, Ren et al. also performed high-speed classical communication at a 400 Gbit/s data rate by multiplexing OAM modes over 120 meters of free space in Los Angeles [25].

The above schemes have typically focused on using commercial laser systems to implement the optical communication; such lasers possess good coherence and low divergence and concentrate their power over a greater distance. However, a more easily accessible and economical light source should be exploited if we aim to generalize and commercialize the visible light communication technique. Visible-light wireless communication can be traced back to the work in 1880 by Bell, who transmitted voice data over 200 m using sunlight [26]. Besides, several other demonstrations featuring fluorescent lights for communication at low data rates were investigated [27]. Nowadays, as an emerging technology for the optical wireless link, Light Fidelity (Li-Fi) represents a new paradigm for visible light communications [28]. In a Li-Fi system, the logical states 1 and 0 are encoded by rapidly switching an LED on and off, at a rate much faster than the response time of the human eye [29, 30]. This novel technology is supported by ubiquitous and inexpensive LEDs. As mentioned before, OAM has been extensively employed in laser communication systems in deep space, near Earth, and even under water [23, 24, 31, 32]. However, thus far no experimental demonstration that exploits OAM in visible light communication with a white LED source has been conducted. Here, by multiplexing and demultiplexing red-green-blue (RGB) twisted light beams derived from an LED, we demonstrate an effective visible light communication scheme with the RGB colors serving as independent data channels and with OAM superpositions encoding the information. We accomplish the transmission of a color image of Albert Einstein, with the RGB channels naturally transmitting the RGB components of the image, respectively. We also transmit a piece of 8-bit audio, Pachelbel’s Canon in D, encoded as a hexadecimal number sequence. After a 6-meter indoor free-space link, the color image and audio are recognized and decoded with fairly good fidelity based on supervised machine learning. The security of our white light communication can be additionally ensured by the inherent uncertainty of OAM states when possible eavesdropping occurs, so that it can serve as a point-to-point secure communication supplementing traditional broadcast Li-Fi.

2. Framework

2.1. RGB OAM encoding based on multiplexing masks with theta-modulation

Instead of a laser source, we use an inexpensive white LED to produce the RGB twisted light beams based on theta-modulation [33, 34]. In the case of mode-division multiplexing (MDM), the orthogonal OAM eigenmodes have been directly employed in optical communications. However, the characteristic petal patterns of two opposite OAM superposition modes can be more advantageous than single OAM modes, as they can be more efficiently detected and recognized after free-space propagation. We transmit color images and audio by encoding 16 different OAM superposition modes, ℓ = 0, ±1, ±2, …, ±15, where we choose Laguerre-Gaussian (LG) modes with zero radial index p = 0 to represent these OAM states. The superposition of ±ℓ OAM modes gives rise to a 2ℓ-petal flower pattern of rotational symmetry, and the radius of the intensity pattern scales with ℓ [35]. By adding both the intensity and phase distributions of the ±ℓ OAM superposition modes to a linear grating, we obtain a 1st-order diffraction carrying the desired petal-like pattern. Because we aim to exploit the RGB model as three independent channels, our first key step is to generate two-dimensional holographic gratings in a single spatial light modulator (SLM) that serves as the multiplexing phase mask to generate the RGB twisted-light superpositions. This can be realized with the technique of theta-modulation. The specially designed hologram (see the illustrated sketch of the experimental setup in the Result section) is generated by integrating three one-dimensional gratings with 0°, 60° and 120° orientations, each programmed to generate the desired OAM superposition mode individually. With such composite masks, we obtain three angularly separated diffraction patterns, situated on axes oriented at 0°, 60° and 120°, respectively. Then, with broadband LED illumination, the theta-modulated diffraction is dispersed spectrally in the Fourier plane. A tailor-made opaque aperture is inserted at the Fourier plane, where three pinholes are carefully adjusted to filter out the primary RGB components, one pinhole in each 1st-order diffraction. After a telescope, the RGB light beams encoded with various OAM superposition modes are produced and transmitted coaxially through free space towards the receiver.
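As a rough illustration of how such a theta-modulated multiplexing mask can be constructed, the following numpy sketch combines three fork gratings oriented at 0°, 60° and 120°, one per RGB channel, each carrying the binary phase of a cos(ℓϕ) petal superposition. The grating period, the example mask size, and the purely phase-only way the three gratings are merged are illustrative assumptions; the actual hologram used in the experiment also encodes the amplitude distribution of the superposition modes [43, 44].

```python
import numpy as np

def petal_phase(X, Y, ell):
    """Binary phase of the +/- ell OAM superposition (a 2*ell-petal pattern, p = 0):
    0 where cos(ell*phi) >= 0, pi where it is negative."""
    phi = np.arctan2(Y, X)
    return np.where(np.cos(ell * phi) >= 0, 0.0, np.pi)

def fork_grating(X, Y, ell, theta_deg, period_px):
    """One 1D carrier grating oriented at theta_deg, combined with the
    +/- ell superposition phase (simplified phase-only encoding)."""
    theta = np.deg2rad(theta_deg)
    carrier = 2 * np.pi * (X * np.cos(theta) + Y * np.sin(theta)) / period_px
    return np.mod(carrier + petal_phase(X, Y, ell), 2 * np.pi)

def composite_mask(shape, ells, thetas=(0, 60, 120), period_px=12):
    """Theta-modulated multiplexing mask: three gratings at 0, 60 and 120 degrees,
    one per RGB channel, merged into a single phase-only hologram."""
    ny, nx = shape
    Y, X = np.mgrid[-ny // 2:ny // 2, -nx // 2:nx // 2]
    field = sum(np.exp(1j * fork_grating(X, Y, ell, th, period_px))
                for ell, th in zip(ells, thetas))
    return np.mod(np.angle(field), 2 * np.pi)      # phase map to load on the SLM

# Example: modes ell = 3, 7 and 12 encoded on the red, green and blue channels.
mask = composite_mask((1024, 1024), ells=(3, 7, 12))
```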

In our demonstration below, we aim to transmit color images and a piece of audio. For a color image, we decompose the image based on the RGB color model, and naturally, the primary red, green and blue components are transmitted through the red, green and blue channels offered by the white LED, respectively. In each channel, the monochromatic image is sent pixel by pixel, with the gray scale of each pixel encoded by the 16-level OAM superposition modes ℓ = 0, ±1, ±2, …, ±15. For audio, the waveform depicts the pattern of sound-pressure variation, or amplitude, in the time domain. We extract the discrete amplitude values between 0 and 255 and convert each value into two hexadecimal digits (0, 1, 2, 3, …, e, f). In a similar way, we encode these 16 different hexadecimal digits with the OAM superposition modes ℓ = 0, ±1, ±2, …, ±15, respectively. In order to increase the data rate, we divide the stream of hexadecimal digits into three sub-streams, which are transmitted through the RGB channels, respectively.
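The sender-side bookkeeping for the audio channel can be sketched as follows. The high-nibble-first digit order and the partition of the digit stream into three contiguous blocks are assumptions chosen for illustration; the text above does not fix these details.

```python
import numpy as np

def audio_to_hex_digits(samples_8bit):
    """Split each 8-bit amplitude value (0..255) into two hexadecimal digits
    (0..15): high nibble first, then low nibble."""
    s = np.asarray(samples_8bit, dtype=np.uint8)
    return np.column_stack([s >> 4, s & 0x0F]).ravel()

def split_rgb_streams(digits):
    """Divide the digit stream into three equal sub-streams for the R, G, B channels."""
    n = len(digits) // 3
    return digits[:n], digits[n:2 * n], digits[2 * n:3 * n]

# Each digit 0..15 is then mapped to one OAM superposition mode
# (ell = 0, +/-1, ..., +/-15) and displayed on the SLM as one symbol.
samples = np.random.randint(0, 256, size=166_896)   # ~20.862 s at 8 kHz, for illustration
r_stream, g_stream, b_stream = split_rgb_streams(audio_to_hex_digits(samples))
```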

2.2. RGB OAM decoding based on pattern recognition with machine learning

At the receiver, two lenses of different focal lengths (f3 = 150 mm and f4 = 50 mm), forming a demagnifying 4f system, collect the incoming RGB twisted light beams and direct them to a cross-dichroic X-cube prism. This X-cube prism is an RGB combiner/splitter, which serves as a demultiplexer for the RGB channel signals. After exiting the three ports of the X-cube prism, the red, green and blue signals are separated and then recorded by color CCD cameras (Thorlabs, DCU224C), respectively.

The next key step is to distinguish the sequential OAM superposition modes encoded in each color channel. We accomplish this by using pattern recognition based on bagged classification trees with the bootstrap-aggregating (bagging) method [36, 37], which is a kind of supervised machine learning. The task is to infer a predictor from labeled training data, and the predictor can then be used to classify new examples. In our work, the successive OAM superposition modes serve as the original data. The input patterns of the OAM superposition modes from ℓ = 0 to ℓ = ±15 are designated by the numbers from 0 to 15 accordingly, as illustrated in Fig. 1(a). Firstly, we randomly partition the data into two parts: 90% of the image data as a training set for generating the predictor, and the rest as a validation set for evaluating the quality of the predictor. We categorize the 16 OAM superposition modes with the bagging classification method, which generates multiple versions of a predictor and uses them to obtain an aggregated predictor. The aggregated predictor averages when predicting a numerical outcome and takes a plurality vote when predicting a class. Secondly, to test the quality of the generated predictor, we use it to recognize the modes in the validation set. Figure 1(b) shows the evaluation outcome: the predictor model autonomously categorizes the 16 kinds of input mode structures and outputs a series of mode numbers. The crosstalk matrix, which is dominantly diagonal, represents the distinguishability of the input and measured OAM superpositions. The results show that the predictor performs well and can reliably distinguish the 16 different OAM superposition modes. After this initialization, the predictor model can be applied to analyze the real image data.


Fig. 1 Pattern recognition with supervised machine learning. (a) Learning rules: the OAM superposition modes ℓ = 0, ±1, ±2, …, ±15 are assigned the numbers 0, 1, 2, …, 15, respectively. (b) The validation crosstalk matrix of the predictor. The number 5 indicates that each OAM superposition mode ±ℓ has been sent five times.


2.3. The bagged classification trees in machine learning

The core of the pattern recognition is the use of bagged classification trees to obtain a predictor that can recognize different OAM superposition modes. In order to obtain the predictor, we input a set of learning materials for training. A learning set L consists of data (yn, xn), n = 1, …, N, where the yn are either class labels or numerical responses. Assuming a predictor can be defined by ϕ(x, L), then for an input x we predict y by ϕ(x, L). In the learning process, multiple samples are formed by making bootstrap [38] replicates of the learning set and using these as new learning sets Lk, each consisting of N independent observations. In our work, the training process starts from acquiring 50 sets of the 16 OAM superposition modes. Here the OAM superposition patterns and the ℓ-values correspond to the input x and the numerical response y, respectively. Specifically, the input OAM superposition modes from ℓ = 0 to ℓ = ±15 are designated by the numbers from 0 to 15, respectively, as shown in Fig. 1(a).

The predictor that identifies the OAM superposition pattern is derived by utilizing an ensemble of bagged classification trees, and the strategy is schematically illustrated in Fig. 2. We randomly partition the data into two parts: 90% of the image data serve as the training set for generating the predictor, and the rest serve as the validation set for evaluating the performance of the model. In order to obtain a good predictor, the machine learning process carries out three main steps [39–41]: bootstrap sampling, training, and ensembling, see Fig. 2. Firstly, we input the training set L, which consists of N images, here N = 720. Secondly, by the method of bootstrap sampling, we randomly sample with replacement and generate a series of samples L1, L2, …, LK from the training set. Here we aim to use this sample set to obtain a better predictor than a single-learning-set predictor. For every sample we use a weak learner, i.e., a decision tree, to classify independently, generating a predictor set ϕ(x, Lk) consisting of K predictors. Finally, according to the ensemble-learning strategy, a strong learner ϕ(x, L) is obtained by averaging and voting over all K predictors ϕ(x, Lk). Thus, with the predictor ϕ(x, L), an arbitrary input OAM mode-intensity image yields a predicted ℓ-value accordingly. We also apply the predictor model to classify the test data; Fig. 1(b) shows the testing result on the validation set, from which the good performance can be seen clearly.
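A compact illustration of this bootstrap-sample/train/ensemble strategy, written with scikit-learn rather than the Matlab code actually used in this work, could look as follows. The feature representation (raw flattened pixels), the number of trees, and the 90/10 split parameters are assumptions chosen only to mirror the steps of Fig. 2.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

def train_mode_predictor(images, labels, n_trees=50, seed=0):
    """Bagged classification trees for recognizing the 16 OAM superposition modes.

    images: (N, H, W) recorded intensity frames of the petal patterns
    labels: (N,) integers 0..15, the assigned mode numbers of Fig. 1(a)
    """
    X = images.reshape(len(images), -1)                  # flatten each frame to a feature vector
    X_train, X_val, y_train, y_val = train_test_split(   # 90% training / 10% validation
        X, labels, test_size=0.1, stratify=labels, random_state=seed)

    predictor = BaggingClassifier(                       # bootstrap sampling + ensemble vote
        DecisionTreeClassifier(),                        # weak learner: one decision tree
        n_estimators=n_trees, bootstrap=True, random_state=seed)
    predictor.fit(X_train, y_train)

    # Validation crosstalk matrix (rows: sent mode, columns: decoded mode), cf. Fig. 1(b).
    crosstalk = confusion_matrix(y_val, predictor.predict(X_val), labels=list(range(16)))
    return predictor, crosstalk
```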


Fig. 2 Strategy of pattern recognition using bagged classification trees.


3. Result

Our optical system for OAM-based visible light communication is shown in Fig. 3. The white light is derived from a 125 mW LED (Daheng, GCI-060411) with a bandwidth ranging from 440 nm to 670 nm. A telescope is used to collimate the white-light beam, which then illuminates the SLM uniformly, as shown in Fig. 3(a). It is noted that the use of telescoping and beam expansion alleviates errors in waist size and strong curvature [42]. The sender is basically a 4f system, consisting of two lenses with focal lengths f1 = 300 mm and f2 = 1000 mm, realizing the theta-modulation to generate the colorful OAM superposition states. The SLM (Hamamatsu, X13138-01) is a phase-only modulator capable of operating over a wide spectral range and with broadband sources, allowing us to generate customized digital holograms to encode the information [43, 44]. After transmission in free space, the red, green and blue channels are demultiplexed by an X-cube prism at the receiver and then recorded by color CCD cameras, respectively, see Fig. 3(b). In our communication system, the 4f system at the sender is utilized to filter out the RGB channels and transmit the patterns of the superposition modes through free space, serving as a telescope. This is crucial, as the characteristic petals would become fuzzy and unrecognizable after free-space propagation due to the partial coherence of the LED source. Then, at the receiver, the 4f system is applied to de-magnify and image the superposition modes onto the effective area of the CCD camera. The 16 different OAM superposition modes are identified with pattern recognition, based on the aforementioned machine learning method, thereby decoding the information.


Fig. 3 Sketch of the experimental setup for the visible light communication link based on RGB twisted-light encoding/decoding. (a) The sender. The upper-right inset shows a typical specially designed hologram. (b) The receiver; see the text for details.


3.1. Transmission of the primary RGB colors

To verify the validity of our communication system, we first demonstrate the transmission of a simple tri-circle image of the three RGB primary colors. The RGB color model is an additive color model in which red, green and blue light are added together in various ways to reproduce a broad array of colors. As shown in Fig. 4(a), a secondary color is formed by the sum of two primary colors of equal intensity, e.g., green and blue make cyan, red and blue make magenta, while red and green make yellow. Adding all three primary colors together yields white. Generally, a color is expressed as an RGB triplet (r, g, b), which indicates how much of each of the red, green and blue components is included. The tri-circle image of Fig. 4(a) has a resolution of 200 × 184 = 36,800 pixels, with the RGB triplet (r, g, b) of each pixel given as 4-bit grayscale values. To transmit the full color image, a natural and effective way is to use the red, green and blue colors of the white LED as three independent channels to send the red, green and blue components of the image, respectively. For each channel, we further use the 16 different OAM superposition modes to encode the 4-bit pixel grayscale values, thereby creating a basis set of 48 modes encoded in both the color and OAM degrees of freedom. This is the key to our encoding technique and to achieving a higher channel capacity in the implementation of the optical link. Hence, the 200 × 184 = 36,800-pixel tri-circle image can be mathematically converted to three data streams, each with 36,800 sequence numbers.
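The channel decomposition and 4-bit quantization can be sketched as below. The use of PIL for loading, the hypothetical file name, and the simple ×17 rescaling back to 8 bits on reconstruction are illustrative assumptions rather than the exact routines used in the experiment.

```python
import numpy as np
from PIL import Image

def image_to_mode_streams(path):
    """Decompose a color image into R, G, B channels and quantize each pixel
    to a 4-bit level (0..15), i.e. one OAM superposition mode per pixel and channel."""
    rgb = np.asarray(Image.open(path).convert("RGB"))        # (H, W, 3), values 0..255
    modes = rgb >> 4                                          # 8-bit -> 4-bit grayscale
    return tuple(modes[..., k].ravel() for k in range(3)), rgb.shape[:2]

def mode_streams_to_image(r, g, b, shape):
    """Rebuild the full-color image by stacking the three decoded components (cf. Fig. 4(e))."""
    planes = [np.asarray(c, dtype=np.uint8).reshape(shape) * 17   # 0..15 -> 0..255
              for c in (r, g, b)]
    return np.stack(planes, axis=-1)

# Example (hypothetical file name):
# (r, g, b), shape = image_to_mode_streams("tri_circle.png")
# recovered = mode_streams_to_image(r, g, b, shape)
```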


Fig. 4 Experimental results for transmission of a tri-circle image of RGB primary colors. (a) The 4-bit original image to be sent. (b–d) The received and reconstructed red, green and blue components of the image, respectively. (e) The full-color image reconstructed by adding three primary components (b–d) together. (f–h) The corresponding crosstalk matrices of OAM superposition modes for red, green and blue channels, respectively, where the numbers denote the events and zero OAM modes are used to transmit the dark background trivially.


Based on the above encoding strategy, we rapidly load the multiplexing masks onto the SLM, and the grayscale values of the red, green and blue components are sent pixel by pixel. At the receiver, the red, green and blue colors are separated and demultiplexed by the X-cube prism. Three color CCD cameras placed at the three output ports of the X-cube prism record the successive patterns simultaneously in video form. Based on the decoding strategy, we extract each frame of the videos and decode the OAM superposition modes by means of the aforementioned pattern recognition with machine learning. Thus we can reconstruct the red, green and blue components of the received image, as shown in Figs. 4(b), 4(c) and 4(d), respectively. As the image is composed of three circles, each with a primary color, the received image after reconstruction in each channel is merely a circle of the corresponding primary color. By adding these red, green and blue components together, we finally reconstruct the received image of Fig. 4(e). For a quantitative estimation, we also plot the crosstalk matrices of the OAM superposition modes for the RGB channels in Figs. 4(f), 4(g) and 4(h), respectively. The crosstalk matrix exhibits the distinguishability of the superposition modes and is obtained by comparing the sent and received mode numbers. In the matrices, the sent modes represent the encoding label numbers, and the received modes represent the decoding label numbers. If a bar lies on the diagonal, the received mode is the same as the sent mode, indicating that the predictor decoded the sent mode correctly. In our scheme, the sent mode numbers are obtained by encoding the image, and the received mode numbers are obtained after decoding. Defining the error rate as the ratio of wrongly decoded events to all detection events, we estimate that the pattern recognition predictor distinguishes the mode patterns with a relatively low error rate, e.g., 0.3% for the red, 0.11% for the green, and 0.08% for the blue channel, respectively. For the whole image of Fig. 4(e), the average error rate is calculated to be around 0.17%, showing the favorable performance of our visible light communication system.

The crosstalk matrix characterizes the distinguishability of the OAM superposition modes. Each element with subscript (ℓ, ℓ′) gives the number of events in which the sender sent the encoded OAM superposition ±ℓ while the receiver decoded the information as the ±ℓ′ OAM superposition. Thus the diagonal elements, ℓ = ℓ′, indicate that the predictor decoded the sent modes correctly; the off-diagonal elements lead to transmission errors.
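In code, building such a crosstalk matrix and the associated error rate from the sent and decoded mode labels of one channel amounts to a simple tally; this numpy sketch assumes the labels are already mapped to the integers 0–15 of Fig. 1(a).

```python
import numpy as np

def crosstalk_and_error_rate(sent, decoded, n_modes=16):
    """Crosstalk matrix and error rate for one color channel.

    sent, decoded: sequences of mode labels 0..15 (the numbers assigned in Fig. 1(a)).
    Element (l, l') counts events where +/-l was sent and +/-l' was decoded."""
    sent, decoded = np.asarray(sent), np.asarray(decoded)
    matrix = np.zeros((n_modes, n_modes), dtype=int)
    np.add.at(matrix, (sent, decoded), 1)        # tally each (sent, decoded) event
    error_rate = np.mean(sent != decoded)        # wrongly decoded events / all detections
    return matrix, error_rate
```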

3.2. Transmission of a color Albert Einstein image

More generally, we performed another experiment to transmit a color Albert Einstein image, Fig. 5(a), which has 200 × 164 = 32,800 pixels and exhibits a more complicated color mixture. Again, the RGB components of the Albert Einstein image are sent, naturally, through the RGB channels of the LED light, respectively. At the sender, we encode and transmit each RGB pixel according to the same strategy as above. At the receiver, we record and analyze the videos of each RGB channel and decode the grayscale information pixel by pixel. The recovered RGB components are shown in Figs. 5(b), 5(c) and 5(d), respectively. Based on the measured crosstalk matrices in Figs. 5(f), 5(g) and 5(h), we calculate the error rates to be 0.14%, 4.2%, and 3.51%, respectively. Besides, we can see that the most frequently encoded OAM superposition modes are ℓ = ±14, ±6 and ±3, which are used 3271, 3211 and 5181 times in the red, green and blue channels, respectively. Finally, we recover the original image with high fidelity, see Fig. 5(e); its average error rate is only around 2.62%.


Fig. 5 Experimental results for transmission of the color Albert Einstein image. (a) The 4-bit original image to be sent. (b–d) The received and reconstructed red, green and blue components of the image, respectively. (e) The full-color image reconstructed by adding three primary components (b–d) together. (f–h) The corresponding crosstalk matrices for red, green and blue channels, respectively.


In addition, the error rates of each superposition mode for the primary RGB colors and the Albert Einstein image are presented in Fig. 6. For the primary RGB colors, only two different superposition modes are utilized to encode the pixel information in each RGB channel. Consequently, the machine learning predictor can recognize these two modes with high accuracy, see also Figs. 4(f), 4(g) and 4(h). The Albert Einstein image, by contrast, requires 16 different superposition modes, ℓ = 0, ±1, …, ±15, to encode the pixels, see also Figs. 5(f), 5(g) and 5(h). Therefore, it is somewhat more difficult for the predictor to distinguish two neighboring modes for the Albert Einstein image.


Fig. 6 The error rates for transmitting the primary RGB color and the Albert Einstein image. (a) The measured error rates for the transmission of a tri-circle image of RGB primary colors. (b) The measured error rates for the transmission of the Albert Einstein image.


3.3. Transmission of a piece of Pachelbel’s Canon in D

To demonstrate the versatility of our scheme, we further transmit the Canon in D composed by Johann Pachelbel, with a sampling frequency of 8000 Hz and 8-bit depth. The audio waveform depicts the pattern of sound-pressure variation, or amplitude, in the time domain, see Fig. 7(a). As computers store numbers rather than sound itself, the sound wave (a continuous signal) must be converted to a sequence of samples (a discrete-time signal), i.e., a series of 1s and 0s [45]. The sample rate of 8000 Hz describes how fast the computer takes those “snapshots” of sound, namely, 8000 samples are obtained per second on average. Each sample of an audio signal must be ascribed a numerical value to be stored and processed by the computer. For example, the 8-bit depth describes the number of bits of information in each sample, namely, the audio amplitude values range from 0 to 255. Here we aim to transmit a small piece of 20.862 s of audio through our OAM visible light communication system. For this, we extract the 8-bit amplitude values and convert each value into two hexadecimal digits (0, 1, 2, 3, …, e, f). Hence, the audio can be mathematically represented by a sequence of 333,792 hexadecimal digits. In our scheme, we simply use the OAM superposition modes to encode the hexadecimal digits. Thus the audio information can be transmitted through the free-space link after the spatial mode encoding with the SLM. To further increase the transmission rate by taking full advantage of the three independent RGB channels, we divide the whole stream of hexadecimal digits into three sub-streams and send them through the red, green and blue channels, respectively, each with 111,264 hexadecimal digits. At the receiver, in a similar way, we first separate the RGB channels using the X-cube prism and decode the OAM superposition modes with the same pattern recognition method. The data sequences in the RGB channels are all recovered, and thus we finally reconstruct the piece of music, as shown in Fig. 7(b). Defining the fidelity as the ratio between the number of correct amplitude values in the reconstructed audio and the total number of amplitude values in the original, we reach a relatively high fidelity of over 96%.
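The receiver-side reassembly and fidelity calculation can be sketched as follows, assuming the same contiguous-block split and high-nibble-first convention as in the sender sketch above; these conventions are illustrative assumptions rather than details specified in the text.

```python
import numpy as np

def hex_streams_to_samples(r_stream, g_stream, b_stream):
    """Reassemble the three decoded hexadecimal sub-streams into 8-bit samples
    (inverse of the sender-side split: contiguous blocks, high nibble first)."""
    digits = np.concatenate([r_stream, g_stream, b_stream]).astype(np.uint8)
    return (digits[0::2] << 4) | digits[1::2]

def audio_fidelity(original, recovered):
    """Fidelity = number of correct amplitude values / total number of amplitude values."""
    original, recovered = np.asarray(original), np.asarray(recovered)
    return float(np.mean(original == recovered))
```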


Fig. 7 The waveform graphs of the 20.862 s Canon in D composed by Johann Pachelbel. (a) The original audio waveform. (b) The received and recovered waveform from our OAM-based visible light communication system; see Visualization 1 for (a) and Visualization 2 for (b) in the supplementary materials.


4. Discussion and conclusion

In the above proof-of-principle experimental demonstration, we have shown the potential of twisted light in visible light communication. We are now facing an imminent shortage of radio-frequency (RF) spectrum, i.e., the spectral efficiency (the number of bits successfully transmitted per hertz of bandwidth) of wireless networks has become saturated. Li-Fi, the high-speed communication and networking variant of visible light communication, aims to unlock a vast amount of unused electromagnetic spectrum in the visible light region [46]. Although only the red, green and blue components of the white LED are utilized here, we anticipate that our scheme could soon become a viable technique to help mitigate the RF communications spectrum bottleneck once the visible spectrum is more fully explored.

The data transmission rate of our present system is mainly restricted by the response time of the SLM used in the optical system, which has a frame rate of only 60 Hz. A black screen was inserted between two successive modes to distinguish them. To ensure enough response time for loading the entire grating onto the SLM, we set the refresh time to 50 ms per mode, and the total transmission rate over the RGB channels is estimated to be 30 pixels per second. Besides, in a real-world environment, the transmission distance is also influenced by the parameters of the 4f systems, air turbulence, scattering and the optical incoherence of the LED source. Further technology for speeding up the modulation of light's spatial structure will improve the transmission rate dramatically. For instance, a high-performance digital micromirror device (DMD) can rapidly switch between different modes at frame rates as high as 20 kHz. Also, further improvements such as high-speed integrated OAM transmitters might lead to the use of OAM superposition modes as an effective and fast way to encode information [47]. In addition, the transmission capacity may be further increased if the radial degree of freedom of OAM beams [48, 49] as well as vector beams [50] are considered. Based on our proof-of-principle experiments, further fidelity improvement can be achieved by replacing the device with an SLM or DMD of higher spatial resolution and by improving the image recognition accuracy via deep learning. With these techniques incorporated, our scheme will become more promising for future practical communications.

On the other hand, security is another important factor for communication systems. Unlike the broadcasting style of traditional Li-Fi, our transmission link is an indoor point-to-point model, therefore offering additional security ensured by the OAM measurement. It was shown that information encoded in the OAM basis is resistant to possible eavesdropping, as any attempt to sample the beam away from its axis will be subject to an angular restriction and a lateral offset, both of which result in inherent uncertainty in the measurement [11].

In summary, we have proposed and demonstrated an OAM-based indoor visible light communication system. The sender exploits theta-modulation to encode and transmit the OAM-encoded information through the free-space red, green and blue channels sustained by a white LED. The receiver demultiplexes and decodes the red, green and blue OAM superposition modes with a pattern recognition method based on supervised machine learning. We have succeeded in transmitting both color images and a piece of audio with a relatively high accuracy of over 96%. In comparison with traditional Li-Fi, we have also discussed the security of our OAM-based visible light communication. Finally, we would like to point out that our work raises many interesting questions, some of which we have discussed here, that need to be fully addressed before our ideas can be realized in a commercial communication system.

Funding

National Natural Science Foundation of China (NSFC) (11474238, 91636109); Fundamental Research Funds for the Central Universities at Xiamen University (20720160040); Natural Science Foundation of Fujian Province of China for Distinguished Young Scientists (2015J06002); New Century Excellent Talents in University of China (NCET-13-0495).

Acknowledgments

We would like to thank Mr. Xiaochuan Jiang for the Matlab codes; we also thank Mario Krenn for his helpful suggestions.

References and links

1. L. Allen, M. W. Beijersbergen, R. J. C. Spreeuw, and J. P. Woerdman, “Orbital angular momentum of light and the transformation of Laguerre-Gaussian laser modes,” Phys. Rev. A 45(11), 8185–8189 (1992). [CrossRef]   [PubMed]  

2. S. Franke-Arnold, L. Allen, and M. Padgett, “Advances in optical angular momentum,” Laser Photon. Rev. 2(4), 299–313 (2008). [CrossRef]  

3. G. Molina-Terriza, J. P. Torres, and L. Torner, “Twisted photons,” Nat. Phys. 3(5), 305–310 (2007). [CrossRef]  

4. M. Padgett, J. Courtial, and L. Allen, “Light’s orbital angular momentum,” Phys. Today 57(5), 35–40 (2004). [CrossRef]  

5. A. M. Yao and M. J. Padgett, “Orbital angular momentum: origins, behavior and applications,” Adv. Opt. Photon. 3(2), 161–204 (2011). [CrossRef]  

6. S. M. Barnett, M. Babiker, and M. J. Padgett, “Optical orbital angular momentum,” Phil. Trans. R. Soc. A 375, 20150444 (2017). [CrossRef]   [PubMed]  

7. A. E. Willner, Y. Ren, G. Xie, Y. Yan, L. Li, Z. Zhao, J. Wang, M. Tur, A. F. Molisch, and S. Ashrafi, “Recent advances in high-capacity free-space optical and radio-frequency communications using orbital angular momentum multiplexing,” Phil. Trans. R. Soc. A 375(2087), 20150439 (2017). [CrossRef]   [PubMed]  

8. A. E. Willner, H. Huang, Y. Yan, Y. Ren, N. Ahmed, G. Xie, C. Bao, L. Li, Y. Cao, Z. Zhao, J. Wang, M. P. J. Lavery, M. Tur, S. Ramachandran, A. F. Molisch, N. Ashrafi, and S. Ashrafi, “Optical communications using orbital angular momentum beams,” Adv. Opt. Photon. 7(1), 66–106 (2015). [CrossRef]  

9. D. J. Richardson, J. M. Fini, and L. E. Nelson, “Space-division multiplexing in optical fibres,” Nat. Photonics 7(5), 354–362 (2013). [CrossRef]  

10. S. Berdagué and P. Facq, “Mode division multiplexing in optical fibers,” Appl. Opt. 21(11), 1950–1955 (1982). [CrossRef]   [PubMed]  

11. G. Gibson, J. Courtial, M. J. Padgett, M. Vasnetsov, V. Pas’ko, S. M. Barnett, and S. Franke-Arnold, “Free-space information transfer using light beams carrying orbital angular momentum,” Opt. Express 12(22), 5448–5456 (2004). [CrossRef]   [PubMed]  

12. F. E. Mahmouli and S. D. Walker, “4-Gbps Uncompressed Video Transmission over a 60-GHz Orbital Angular Momentum Wireless Channel,” IEEE Wireless Commun. Lett. 2(2), 223–226 (2013). [CrossRef]  

13. B. Thide, H. Then, J. Sjoholm, K. Palmer, J. Bergman, T. D. Carozzi, Y. N. Istomin, N. H. Ibragimov, and R. Khamitova, “Utilization of photon orbital angular momentum in the low-frequency radio domain,” Phys. Rev. Lett. 99(8), 087701 (2007). [CrossRef]   [PubMed]  

14. F. Tamburini, E. Mari, B. Thide, C. Barbieri, and F. Romanato, “Experimental verification of photon angular momentum and vorticity with radio techniques,” Appl. Phys. Lett. 99(20), 204102 (2011). [CrossRef]  

15. F. Tamburini, E. Mari, A. Sponselli, B. Thide, A. Bianchini, and F. Romanato, “Encoding many channels on the same frequency through radio vorticity: first experimental test,” New J. Phys. 14(3), 033003 (2012).

16. R. L. Phillips and L. C. Andrews, “Spot size and divergence for Laguerre Gaussian beams of any order,” Appl. Opt. 22(5), 643–644 (1983). [CrossRef]   [PubMed]  

17. F. Tamburini, B. Thide, E. Mari, G. Parisi, F. Spinello, M. Oldoni, R. A. Ravanelli, P. Coassini, C. G. Someda, and F. Romanato, “N-tupling the capacity of each polarization state in radio links by using electromagnetic vorticity,” https://arxiv.org/pdf/1307.5569.

18. Y. Yan, G. Xie, M. P. J. Lavery, H. Huang, N. Ahmed, C. Bao, Y. Ren, Y. Cao, L. Li, and Z. Zhao, “High-capacity millimetre-wave communications with orbital angular momentum multiplexing,” Nat. Commun. 5, 4876 (2014). [CrossRef]   [PubMed]  

19. J. Wang, S. Li, M. Luo, J. Liu, L. Zhu, C. Li, D. Xie, Q. Yang, S. Yu, and J. Sun, “N-dimensional multiplexing link with 1.036-Pbit/s transmission capacity and 112.6-bit/s/Hz spectral efficiency using OFDM-8QAM signals over 368 WDM pol-muxed 26 OAM modes,” in European Conference on Optical Communication (Cannes, France, 2014), paper Mo.4.5.1.

20. L. Zhu, J. Liu, Q. Mo, C. Du, and J. Wang, “Encoding/decoding using superpositions of spatial modes for image transfer in km-scale few-mode fiber,” Opt. Express 24(15), 16934–16944 (2016). [CrossRef]   [PubMed]  

21. N. Bozinovic, Y. Yue, Y. Ren, M. Tur, P. Kristensen, H. Huang, A. E. Willner, and S. Ramachandran, “Terabit-Scale Orbital Angular Momentum Mode Division Multiplexing in Fibers,” Science 340(6140), 1545 (2013). [CrossRef]   [PubMed]  

22. J. Wang, J.Y. Yang, I. M. Fazal, N. Ahmed, Y. Yan, H. Huang, Y. Ren, Y. Yue, S. Dolinar, M. Tur, and A. E. Willner, “Terabit free-space data transmission employing orbital angular momentum multiplexing,” Nat. Photonics 6(7), 488–496 (2012). [CrossRef]  

23. M. Krenn, R. Fickler, M. Fink, J. Handsteiner, M. Malik, T. Scheidl, R. Ursin, and A. Zeilinger, “Communication with spatially modulated light through turbulent air across Vienna,” New J. Phys. 16(11), 113028 (2014). [CrossRef]  

24. M. Krenn, J. Handsteiner, M. Fink, R. Fickler, R. Ursin, M. Malik, and A. Zeilinger, “Twisted light transmission over 143 km,” Proc. Natl. Acad. Sci. U.S.A. 113(48), 13648–13653 (2016). [CrossRef]   [PubMed]  

25. Y. Ren, Z. Wang, P. Liao, L. Li, G. Xie, H. Huang, Z. Zhao, Y. Yan, N. Ahmed, and A. Willner, “Experimental characterization of a 400 Gbit/s orbital angular momentum multiplexed free-space optical link over 120 m,” Opt. Lett. 41(3), 622–625 (2016). [CrossRef]   [PubMed]  

26. A. G. Bell, W. Adams, and W. Preece, “Discussion on the photophone and the conversion of radiant energy into sound,” J. Soc. Telegraph Eng. 9(34), 375–383 (1880). [CrossRef]  

27. D. K. Jackson, T. K. Buffaloe, and S. B. Leeb, “Fiat lux: a fluorescent lamp digital transceiver,” IEEE Trans. Ind. Appl. 34(3), 625–630 (1998). [CrossRef]  

28. S. Dimitrov and H. Haas, Principles of LED Light Communications: Towards Networked Li-Fi (Cambridge University, 2015). [CrossRef]  

29. H. Haas, L. Yin, Y. Wang, and C. Chen, “What is LiFi?” J. Lightwave Technol. 34(6), 1533–1544 (2016). [CrossRef]  

30. H. Haas, “Wireless data from every light bulb,” (Ted, 2011). https://www.ted.com/talks/harald_haas_wireless_data_from_every_light_bulb

31. I. B. Djordjevic, “Deep-space and near-Earth optical communications by coded orbital angular momentum (OAM) modulation,” Opt. Express 19(15), 14277–14289(2011). [CrossRef]   [PubMed]  

32. F. Bouchard, A. Sit, F. Hufnagel, A. Abbas, Y. Zhang, K. Heshami, R. Fickler, C. Marquardt, G. Leuchs, R. W. Boyd, and E. Karimi, “Underwater Quantum Key Distribution in Outdoor Conditions with Twisted Photons,” https://arxiv.org/abs/1801.10299v1

33. J. D. Armitage and A. W. Lohmann, “Theta Modulation in Optics,” Appl. Opt. 4(4), 399–403 (1965). [CrossRef]  

34. J. Wang, Y. Zhang, W. Zhang, and L. Chen, “Theta-modulated generation of chromatic orbital angular momentum beams from a white-light source,” Opt. Express 24(21), 23911–23916 (2016). [CrossRef]   [PubMed]  

35. J. E. Curtis and D. G. Grier, “Structure of Optical Vortices,” Phys. Rev. Lett. 90(13), 133901 (2003). [CrossRef]   [PubMed]  

36. L. Breiman, “Bagging predictors,” Mach. Learn. 24(2), 123–140 (1996).

37. C. M. Bishop, Pattern recognition and machine learning (Springer, 2006).

38. Y. Freund and R. E. Schapire, “Experiments with a new boosting algorithm,” in Proc. 13th Int’l Conf. on Machine Learning (1996), pp. 148–156.

39. T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning (Springer, 2009).

40. L. I. Kuncheva, Combining pattern classifiers: methods and algorithms (John Wiley & Sons, 2004). [CrossRef]  

41. T. G. Dietterich, Ensemble methods in machine learning (Academic, 2000).

42. C. Schulze, A. Dudley, D. Flamm, M. Duparré, and A. Forbes, “Reconstruction of laser beam wavefronts based on mode analysis,” Appl. Opt. 52(21), 5312–5317 (2013). [CrossRef]   [PubMed]  

43. A. Forbes, A. Dudley, and M. McLaren, “Creation and detection of optical modes with spatial light modulators,” Adv. Opt. Photon. 8(2), 200–227 (2016). [CrossRef]  

44. L. Zhu and J. Wang, “Arbitrary manipulation of spatial amplitude and phase using phase-only spatial light modulators,” Sci. Rep. 4, 7441 (2014).

45. M. Bosi and R. E. Goldberg, Introduction to digital audio coding and standards (Springer, 2003). [CrossRef]  

46. H. Haas, “High-speed wireless networking using visible light,” SPIE Newsroom 10(2.1201304), 004773 (2013).

47. M. J. Strain, X. Cai, J. Wang, J. Zhu, D. B. Phillips, L. Chen, M. Lopez-Garcia, J. L. O’brien, M. G. Thompson, and M. Sorel, “Fast electrical switching of orbital angular momentum modes using ultra-compact integrated vortex emitters,” Nat. Commun. 5, 4856 (2014). [CrossRef]   [PubMed]  

48. G. Xie, Y. Ren, Y. Yan, H. Huang, N. Ahmed, L. Li, Z. Zhao, C. Bao, M. Tur, S. Ashrafi, and A. E. Willner, “Experimental demonstration of a 200-Gbit/s free-space optical link by multiplexing Laguerre-Gaussian beams with different radial indices,” Opt. Lett. 41(15), 3447–3450 (2016). [CrossRef]   [PubMed]  

49. A. Trichili, C. Rosales-Guzmán, A. Dudley, B. Ndagano, A. Ben Salem, M. Zghal, and A. Forbes, “Optical communication beyond orbital angular momentum,” Sci. Rep. 6, 27674 (2016). [CrossRef]   [PubMed]  

50. Y. Zhao and J. Wang, “High-base vector beam encoding/decoding for visible-light communications,” Opt. Lett. 40(21), 4843–4846 (2015). [CrossRef]   [PubMed]  

Supplementary Material (2)

Visualization 1: The original audio
Visualization 2: The received and recovered audio


