Optica Publishing Group

Experimental demonstration of flexible information rate PON beyond 100 Gb/s with rate-compatible LDPC codes

Open Access

Abstract

The applications of rate-compatible low-density parity-check (RC-LDPC) codes are investigated for a 16 quadrature amplitude modulation (16QAM) coherent detection system. With rate-compatible signals, we can provide a flexible net data rate between 135.5 Gb/s and 169.7 Gb/s over a passive optical network (PON) link. Based on the LDPC codes defined in the IEEE 802.3ca standard, we construct two sets of RC-LDPC codes with fixed and variable information bit lengths. Since the puncturing operation may degrade the performance of LDPC codes, we apply the protograph-based extrinsic information transfer (PEXIT) technique to optimize the puncturing positions and mitigate the degradation. Additionally, we explore four low-complexity LDPC decoding algorithms (min sum, offset min sum, variable weight min sum, and relaxed min sum with 2nd min emulation) to investigate the trade-off between computational complexity and decoding performance. Simulation results indicate that the constructed codewords exhibit good performance in the waterfall region across a range of code rates. Finally, we build an experimental dual-polarization 25 GBaud 16QAM coherent PON to verify the effectiveness of the constructed LDPC codes with the four decoding algorithms. The experimental results show a maximal receiver sensitivity difference of 4.8 dB, which demonstrates the feasibility of the proposed method for constructing RC-LDPC codes in future high-speed flexible coherent PONs.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Optical access refers to a technology that enables high-speed and reliable broadband connectivity to end-users using optical fibers. It uses fiber cables to transmit data signals over long distances, providing faster and more efficient communication than copper-based access schemes [1]. In optical access networks, the passive optical network (PON) architecture has been widely used due to its low cost and simple implementation. In order to satisfy the growing traffic demand, the single-wavelength peak line rate has reached 50 Gb/s according to the latest standard developed by ITU-T [2]. Considering the development route of PON standards, the single-wavelength peak line rate may reach 200 Gb/s in future PONs based on coherent detection technology [3,4]. Several coherent PON schemes at peak line rates of 200 Gb/s or beyond have been experimentally demonstrated with power budgets of more than 30 dB [5–9]. Additionally, simplified coherent optical network units (ONUs) have been demonstrated in 200 Gb/s PONs [10,11].

Another challenge for the future 200 Gb/s PON is the realization of flexibility, which is driven by the increasing demand for high-speed broadband services [12]. Traditionally, PONs have been designed with fixed bandwidth allocation, where each subscriber is allocated a fixed amount of bandwidth. However, this approach may not be efficient in meeting the dynamic and diverse demands of modern networks. Therefore, flexible PON is considered a solution to optimize the overall capacity based on the available channel conditions, i.e., optical path loss (OPL), signal-to-noise ratio, and other impairments. A number of methods already exist to provide flexible data rates to different groups of ONUs. For example, both classical probabilistic shaping and reversed probabilistic shaping have been proposed to adjust the information rate [13–15]. A flexible-rate PON based on discrete multi-tone (DMT) has also been proposed, achieving a wide-range data-rate adjustment [16,17]. However, the large computational complexity and long processing delay of distribution matching operations may restrict their further application in PON. Another rate adjustment method is based on rate-compatible low-density parity-check (RC-LDPC) codes, which make it possible to construct codewords with different code rates and performance under various channel environments [12,18,19]. However, rate-compatible codes designed via puncturing usually suffer from severe performance degradation compared with stand-alone codes of similar rate [20]. In addition, previous LDPC decoding in flexible PON has been based on the belief-propagation (BP) [21] decoding algorithm without considering the computational complexity. Since LDPC decoding performance is related to the computational complexity of the decoding algorithm, a new degree of freedom for optimizing the throughput of PON can be introduced by considering decoding algorithms with different computational complexities.

In this paper, we provide two RC-LDPC encoding schemes with fixed and adjustable information bit lengths. The puncturing locations are optimized to improve the encoding performance across 6 code rates. We then compare four decoding algorithms, Min Sum (MS) [22], Offset MS (OMS) [23], variable weight MS (vwMS) [24], and Relaxed Min Sum with $2^{nd}$ min emulation (RMS-2ME) [25], to investigate the relationship between performance and computational complexity in simulation. Finally, we conduct an experiment to evaluate the LDPC encoding and decoding schemes with a 200 Gb/s dual-polarization 16 quadrature amplitude modulation (DP-16QAM) format. The post-forward error correction (FEC) performance is obtained and compared when the information rate is adapted between 135.5 Gb/s and 169.7 Gb/s.

The paper is organized as follows. In Section 2, we describe a construction method with optimized puncturing operation for RC-LDPC codes and investigate four low-complexity decoding algorithms. Simulation results of the RC-LDPC codes based on the selected decoding algorithms, as well as the complexity analysis of these decoding algorithms, are also presented in this section. In Section 3, we describe the experimental setup. Next, we present the experimental results and discussions in Section 4. Finally, we draw the conclusions in Section 5.

2. Principle

2.1 Methods for constructing rate-compatible LDPC codes

In order to flexibly adjust the rate of LDPC codes, puncturing and shortening are introduced jointly in the encoding and decoding process. The employed LDPC mother code has a length of 17664 with 14592 information bits and 3072 parity bits, as defined by the IEEE 802.3ca standard [26]. Two flexible rate schemes are demonstrated in this paper; their essential difference is whether the length of the information bits is fixed, depending on the requirements of the services. The first flexible rate scheme, with an unfixed length of information bits, is illustrated in Fig. 1. Shortening lowers the code rate by reducing the number of information bits of the mother code. The shortened bits are padded with known values (zeros), combined with the valid information bits to form the encoder input sequence, and are not transmitted over the channel. The encoder receives the sequence and outputs a codeword with parity bits, followed by a puncturing operation. Puncturing raises the code rate at the cost of reduced code strength; the punctured bits are likewise not transmitted over the channel. The final step on the transmitter side is to map the bits to the 16QAM format. At the receiver side, since the shortened bits have known values, they can be assigned a large log-likelihood ratio (LLR). In contrast, the punctured bits are treated as completely unknown, so their LLRs are set to 0. Finally, the decoder receives the LLRs and outputs the recovered information bits.
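The framing logic above can be sketched in a few lines of Python. This is a minimal illustration, not the real implementation: `encode_fn` is a hypothetical stand-in for the 802.3ca LDPC encoder, the large-LLR constant is an illustrative choice, and the punctured bits are taken from the end of the parity section as in [12] (the optimized positions from the PEXIT search would be substituted in practice).

```python
import numpy as np

# Parameters of the IEEE 802.3ca mother code (n = 17664, k = 14592).
K_MOTHER, N_MOTHER = 14592, 17664
N_PARITY = N_MOTHER - K_MOTHER  # 3072

def tx_frame(info_bits, n_shorten, n_puncture, encode_fn):
    """Shorten, encode with the mother code, then puncture parity bits.

    `encode_fn` is a hypothetical stand-in for the real systematic LDPC
    encoder: it maps K_MOTHER input bits to an N_MOTHER-bit codeword.
    """
    assert len(info_bits) == K_MOTHER - n_shorten
    # pad with known zeros (shortening), then encode
    padded = np.concatenate([info_bits, np.zeros(n_shorten, dtype=int)])
    codeword = encode_fn(padded)
    # drop shortened info bits and punctured parity bits before transmission
    return np.concatenate([codeword[:K_MOTHER - n_shorten],
                           codeword[K_MOTHER:N_MOTHER - n_puncture]])

def rx_llr(channel_llr, n_shorten, n_puncture, big=100.0):
    """Rebuild the full-length LLR vector for the mother-code decoder:
    shortened bits get a large LLR (known zeros), punctured bits get 0."""
    k_used = K_MOTHER - n_shorten
    return np.concatenate([channel_llr[:k_used],
                           np.full(n_shorten, big),
                           channel_llr[k_used:],
                           np.zeros(n_puncture)])
```

With 7424 shortened and 1792 punctured bits (one Scheme 1 operating point), the transmitted frame is 8448 bits long, while the decoder always sees the full 17664-position LLR vector of the single mother code.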

Fig. 1. Method of the flexible LDPC encoder under IEEE 802.3ca standard.

It is noted that the selection of puncturing locations may have an impact on the bit error rate (BER) performance. In [12], puncturing is applied to parity bits at the end of the codeword. In our proposed schemes, we optimize the selection of puncturing locations. The core idea is to use the protograph-based extrinsic information transfer (PEXIT) [27] technique to calculate the decoding threshold of the current matrix for each candidate puncturing column, and to select the column yielding the smallest threshold as the optimal puncturing column. Table 1 shows the threshold of the matrix when different columns are selected as puncturing columns, together with a performance comparison against the method in [12].

Table 1. The comparative analysis of the decoding thresholds with different puncturing positions

The PEXIT technique can be used to design mask matrices with improved performance. The LDPC codes defined in IEEE 802.3ca are extended from a base matrix of size 69$\times$12 (the expansion factor is 256). We extract the mask matrix of the base matrix and use it to select the optimal puncturing columns. When 1 column of the base matrix is punctured, the calculated threshold is minimized when the $11^{th}$ column is selected. When 2 columns are punctured, the threshold is minimized when the $4^{th}$ column is selected as the additional puncturing column. Similarly, the optimal locations for other numbers of puncturing columns can be derived, and a threshold gain of up to 0.6568 dB is obtained when 7 columns of the base matrix are punctured. After matrix dispersion, each position of the base matrix is filled with either a 256$\times$256 all-zero matrix or a 256$\times$256 circulant matrix. Therefore, the 256 bits corresponding to each punctured base-matrix column are punctured. We also briefly discuss the complexity of the PEXIT technique. Assuming $d_{v}$ is the average degree of the variable nodes and N is the number of variable nodes (N is 69 and $d_{v}$ is about 4 in this paper), $(\textit {J}^{-1}(\cdot ))^2$ and $\textit {J}(\cdot )$ must be evaluated $3\textit {N}d_{v}$ and $\textit {N}(2d_{v}+1)$ times per iteration, respectively. The functions $\textit {J}(\cdot )$ and $\textit {J}^{-1}(\cdot )$ have simple approximations, shown in [28], that mainly involve multiplication, exponentiation, and logarithmic operations. Therefore, the PEXIT technique has a high complexity. However, it should be noted that we only need to record the optimal puncturing locations obtained through the PEXIT technique in advance; no PEXIT operations are required during the encoding and decoding process.
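The mutual-information functions $J(\cdot)$ and $J^{-1}(\cdot)$ at the heart of the PEXIT evaluation admit closed-form approximations. The sketch below uses one widely used fit from the EXIT-chart literature; the constants are illustrative and the exact approximation in [28] may differ slightly.

```python
import math

# One widely used closed-form fit of J(sigma), the mutual information of a
# consistent Gaussian LLR with standard deviation sigma (constants from the
# EXIT-chart literature; the exact approximation in [28] may differ).
H1, H2, H3 = 0.3073, 0.8935, 1.1064

def J(sigma):
    """Approximate mutual information J(sigma), 0 <= J < 1."""
    if sigma <= 0:
        return 0.0
    return (1.0 - 2.0 ** (-H1 * sigma ** (2.0 * H2))) ** H3

def J_inv(I):
    """Algebraic inverse of the J approximation above."""
    assert 0.0 < I < 1.0
    return (-(1.0 / H1) * math.log2(1.0 - I ** (1.0 / H3))) ** (1.0 / (2.0 * H2))
```

Each PEXIT iteration evaluates these functions $3Nd_{v}$ and $N(2d_{v}+1)$ times, which is why the threshold search is performed once offline and only the resulting puncturing positions are stored.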

Based on Scheme 1, we vary the number of punctured parity bits from 512 to 1792 in steps of 256, while keeping the total number of shortened bits and punctured bits at 9216. This ensures that the length of the codeword in the channel is always fixed (8448 bits) and allows the code rate to vary between 0.6970 and 0.8485. The code variants representing different code rates are detailed in Table 2. This construction strategy with fixed code length provides more options for systems that require certain fixed frame lengths. At the receiver side, only a single check matrix is needed to decode all of the code rates.
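The Scheme 1 code rates in Table 2 follow directly from the frame arithmetic and can be reproduced with a few lines (variable names are ours):

```python
# Reproduce the Scheme 1 code rates: the transmitted codeword length stays
# at 17664 - 9216 = 8448 bits while the shortening/puncturing split moves.
N_MOTHER, K_MOTHER = 17664, 14592
TOTAL_REMOVED = 9216        # shortened + punctured bits, fixed in Scheme 1

def scheme1_rate(punct):
    shorten = TOTAL_REMOVED - punct
    info = K_MOTHER - shorten             # information bits actually carried
    n_tx = N_MOTHER - TOTAL_REMOVED       # channel codeword length, always 8448
    return info / n_tx

for p in range(512, 1793, 256):
    print(p, round(scheme1_rate(p), 4))   # rates from 0.697 up to 0.8485
```

As the puncturing count rises from 512 to 1792, the carried information grows from 5888 to 7168 bits per 8448-bit frame, giving exactly the 0.6970 to 0.8485 rate span quoted above.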

Table 2. List of LDPC code variants from Scheme 1

The second flexible rate scheme, with a fixed length of information bits, also combines the shortening and puncturing operations. The difference is that the number of shortened bits remains constant at 9216 while the number of punctured bits changes; the number and position selection of the punctured bits are consistent with Scheme 1. The code variants with different code rates based on Scheme 2 are listed in Table 3. It is worth noting that it is possible to increase the codeword length by reducing the number of shortened bits. However, a lower minimum code rate and a wider code rate range can be obtained when more information bits are shortened.
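The Scheme 2 rates in Table 3 can be reproduced the same way; here the information load is constant and the transmitted codeword shrinks with puncturing (variable names are again ours):

```python
# Reproduce the Scheme 2 code rates: shortening is fixed at 9216 bits, so
# the information length is constant and puncturing shortens the codeword.
N_MOTHER, K_MOTHER = 17664, 14592
SHORTEN = 9216                    # fixed in Scheme 2
INFO = K_MOTHER - SHORTEN         # 5376 information bits, constant

def scheme2_rate(punct):
    n_tx = N_MOTHER - SHORTEN - punct   # codeword shrinks as puncturing grows
    return INFO / n_tx

for p in range(512, 1793, 256):
    print(p, round(scheme2_rate(p), 4))  # 0.6774, 0.7, 0.7241, 0.75, 0.7778, 0.8077
```

These are exactly the six rates evaluated for Scheme 2 in the simulations and experiments below.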

Table 3. List of LDPC code variants from Scheme 2

2.2 Low-complexity LDPC decoding algorithms

When using the BP [21] decoding algorithm, LDPC codes can exhibit performance very close to the Shannon limit. Although the BP decoder has exceptional performance in the waterfall region, it suffers from high computational complexity. Therefore, we investigate several simplified decoding algorithms and analyze their computational complexity. The MS [22] algorithm is a simplified version of the BP decoding algorithm, optimized based on the properties of the tanh function: the logarithmic and tanh operations in the check node update are replaced by a minimum value search. The computational complexity is thereby reduced significantly, but the MS algorithm suffers from an obvious performance loss. The OMS algorithm [23] enhances the check node update step by introducing a bias factor that offsets some of the performance loss resulting from the simplification of the MS algorithm. Both of the above decoding algorithms require the computation of the minimum magnitude and the second minimum magnitude during each check node update. The hardware consumption caused by this computation and storage still cannot be neglected when decoding long LDPC codes. To further reduce the complexity, a new approximation called the single-minimum scheme [29] is applied to the check node update function. This method only searches for the minimum magnitude, while the second minimum magnitude is estimated from the minimum magnitude. The vwMS [24] algorithm follows this concept and uses varying weight factors to estimate the second minimum magnitude. Relaxed MS (RMS) [30] is an alternative approach to improve the decoding performance that utilizes the successive under-relaxation technique in the variable node update. The RMS-2ME algorithm [25], whose core idea is to combine RMS with the single-minimum scheme, also presents good performance improvements over the MS algorithm.
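The contrast between the two-minimum and single-minimum check node rules can be sketched as follows. This is only an illustration: the `"single-min"` branch stands in for the vwMS/RMS-2ME family, whose exact second-minimum estimation formulas are those of [24] and [25] (the weighted form used here is ours); `c_offset = 0.5` matches the OMS setting used in this paper.

```python
import numpy as np

def check_update(q, variant="MS", c_offset=0.5, weight=0.75):
    """Check-to-variable messages of one check node (magnitude rules only).

    MS and OMS perform a true two-minimum search; "single-min" is an
    illustrative stand-in for the vwMS / RMS-2ME family, which estimates
    the second minimum from the first instead of searching for it.
    """
    q = np.asarray(q, dtype=float)
    sgn, mag = np.sign(q), np.abs(q)
    i1 = int(np.argmin(mag))                       # position of min1
    min1 = mag[i1]
    pos = np.arange(len(q))
    if variant in ("MS", "OMS"):
        min2 = np.min(np.delete(mag, i1))          # true second minimum
        out = np.where(pos == i1, min2, min1)
        if variant == "OMS":
            out = np.maximum(out - c_offset, 0.0)  # offset correction
    else:  # single-minimum scheme: no second search
        out = np.where(pos == i1, min1 / weight, min1)  # illustrative estimate
    return np.prod(sgn) * sgn * out  # each message's sign = product of the others'
```

For incoming LLRs `[1.5, -2.0, 0.5]`, MS returns `[-0.5, 0.5, -1.5]`, and OMS shrinks each magnitude by the offset, which is where its performance gain over MS originates.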

For the scheduling strategy, we choose layered decoding, a partially parallel execution method combining serial execution between layers with parallel execution within layers. While a fully parallel decoding strategy is fast, it also consumes more hardware resources. Moreover, in layered decoding, the updated information from each layer can be directly used by the next layer, which also accelerates the convergence of decoding. Algorithm 1 demonstrates the specific implementation of the MS, OMS, vwMS and RMS-2ME decoding algorithms based on the layered strategy. $L(q^{(iter)}_{ij,l})$ represents the LLR message passed from variable node $v_i$ to check node $c_j$ after the decoding operation of layer $\textit {l}$ is completed in iteration $\textit {iter}$. ${L(\sigma ^{(iter)}_{ji})}$ represents the LLR message passed from check node $c_j$ to variable node $v_i$ after the decoding operation of layer $\textit {l}$ is completed in iteration $\textit {iter}$. $L(Q^{(iter)}_{i,l})$ represents the posterior LLR of variable node $\textit {i}$ in iteration $\textit {iter}$ of layer $\textit {l}$. $N_v(i)\backslash \{j\}$ represents the set of check nodes associated with variable node $v_i$, excluding the $j^{th}$ check node. $N_c(j)\backslash \{i\}$ represents the set of variable nodes associated with check node $c_j$, excluding the $i^{th}$ variable node. $\tilde {N}$ represents the number of occurrences of $\beta _{min}$. The key parameters are set as follows: $c_{offset}$ is 0.5 in OMS, $\lambda$ is 0.75 in vwMS, and $\omega$ and $\rho$ are set to 1 and 0.5 in RMS-2ME, respectively.
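The layered schedule itself can be captured in a short, runnable sketch (MS variant only, on a toy Hamming(7,4) parity-check matrix of our choosing; the real decoder operates on the quasi-cyclic 802.3ca matrix with 12 layers and parallelism of Z = 256 within each layer):

```python
import numpy as np

def layered_ms_decode(H, llr, n_iter=10):
    """Layered min-sum decoding: each row of H is one layer, and the
    posterior LLRs updated by a layer are immediately reused by the next.
    A minimal sketch of the schedule in Algorithm 1 (MS variant only)."""
    m, n = H.shape
    Q = np.array(llr, dtype=float)      # posterior LLRs L(Q_i)
    sigma = np.zeros((m, n))            # stored check-to-variable messages
    for _ in range(n_iter):
        for j in range(m):              # one layer per check row
            idx = np.flatnonzero(H[j])
            q = Q[idx] - sigma[j, idx]  # variable-to-check messages
            sgn, mag = np.sign(q), np.abs(q)
            order = np.argsort(mag)
            min1, min2 = mag[order[0]], mag[order[1]]
            out = np.where(np.arange(len(idx)) == order[0], min2, min1)
            sigma[j, idx] = np.prod(sgn) * sgn * out
            Q[idx] = q + sigma[j, idx]  # immediate posterior update
    return (Q < 0).astype(int)          # hard decision

# Toy example: Hamming(7,4) check matrix, all-zero codeword with bit 0 flipped.
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])
print(layered_ms_decode(H, [-1.0, 4, 4, 4, 4, 4, 4]))  # -> all zeros
```

Because each layer reuses the posteriors just updated by the previous layer, the single flipped bit is already corrected within the first iteration here, illustrating the faster convergence claimed above.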

Algorithm 1. MS-based layered decoding process for LDPC

Table 4 analyzes the per-layer, per-iteration complexity of each algorithm, including the number of additions, comparisons, and multiplications. Here, $d_{c}$ represents the degree of the check nodes. There are 12 layers in the decoding process; $d_{c}$ is 22 for the second layer and 23 for the other layers, as determined by the degree distribution of the check nodes. Z represents the expansion factor, whose value is 256. For the MS and OMS decoding algorithms, both the minimum value and the second minimum value need to be searched. Assuming there are $d_{c}$ inputs, $d_{c}$-1 comparisons are needed to determine the minimum value and $d_{c}$-2 comparisons are required to find the second minimum value from the remaining inputs. Finally, $d_{c}$ outputs are generated by comparing the $d_{c}$ inputs with the minimum value. Therefore, the total number of comparisons is 3$d_{c}$-3. For vwMS and RMS-2ME, only the minimum value needs to be searched. 2$d_{c}$-1 comparisons are sufficient to complete the check node update of the RMS-2ME algorithm (assuming there is only one minimum value). However, vwMS must additionally compare the minimum value with the $d_{c}$ inputs to determine whether exactly one value equals the minimum. Therefore, 3$d_{c}$-1 comparisons are needed for vwMS.
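The comparison counts derived above, evaluated for the two row degrees of the 802.3ca code, work out as follows:

```python
# Comparison counts per check node update, as derived above, for the two
# row degrees of the 802.3ca code (d_c = 22 for the second layer, 23 otherwise).
def comparisons(d_c):
    return {"MS/OMS": 3 * d_c - 3,   # min1 search + min2 search + output selection
            "RMS-2ME": 2 * d_c - 1,  # single-minimum search (one minimum assumed)
            "vwMS": 3 * d_c - 1}     # extra pass to detect repeated minima

for d_c in (22, 23):
    print(d_c, comparisons(d_c))
```

For $d_{c}$ = 23, RMS-2ME needs 45 comparisons against 66 for MS/OMS, which quantifies the hardware saving of the single-minimum scheme per check node.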

Table 4. Complexity analysis of each layer in each iteration

2.3 Simulation performance

To investigate the performance of the constructed RC-LDPC code family, we carry out the following simulations. First, we construct a set of RC-LDPC codes based on the proposed shortening and puncturing strategy, and collect at least 100 error blocks or transmit 1 million code blocks for each performance comparison. Then, we simulate and analyze the reliability of the different decoding algorithms. The maximum number of iterations for each decoding algorithm is set to 30. Figure 2 shows the simulation comparison of the two puncturing strategies. At the BER threshold of $10^{-5}$, our proposed puncturing strategy shows performance improvements of 0.51 dB, 0.46 dB, 0.52 dB, 0.58 dB, 0.86 dB, and 1.09 dB over the method in [12] at the 6 code rates, which is consistent with the results shown in Table 1. Figures 3 and 4 display the BER versus Eb/N0 curves of the RC-LDPC codes constructed by Scheme 1 under different decoding algorithms and code rates, respectively.

Fig. 2. Simulation comparison of two puncturing strategies with 6 different LDPC codes constructed by Scheme 1 and OMS decoding algorithm. The dashed line represents results of method in [12], and the solid line represents results of this method.

Fig. 3. Comparison of simulation results for LDPC codes with different code rates constructed according to Scheme 1 under decoding algorithms of (a) MS, (b) OMS, (c) vwMS and (d) RMS-2ME.

Fig. 4. Comparison of simulation results in Scheme 1 for LDPC decoding algorithms at code rate of (a) 0.6970, (b) 0.7273, (c) 0.7576, (d) 0.7879, (e) 0.8182 and (f) 0.8485.

It can be seen from Fig. 3 that the constructed RC-LDPC codes exhibit good waterfall region performance within the designed code rate range (0.6970$\sim$0.8485) and do not show any apparent error floor down to BER = $10^{-5}$. For the base matrix, the performance of the RC-LDPC codes suffers a certain loss with each additional puncturing column, which leads to a higher code rate; the presented simulation results are consistent with this prediction. Furthermore, there is a more pronounced performance loss at the code rate of R = 0.8485, which implies that the number of puncturing columns needs to be kept within an appropriate range to prevent excessive performance loss.

Figure 4 demonstrates the performance of the different decoding algorithms. The OMS algorithm has superior BER performance across the 6 code rates. At the BER threshold of $10^{-5}$ in Fig. 4(a), the OMS, RMS-2ME, and vwMS algorithms offer enhancements of 1 dB, 0.88 dB, and 0.48 dB over the MS algorithm, respectively. Similarly, the simulation results of the RC-LDPC codes constructed from Scheme 2 are shown in Fig. 5 and Fig. 6, where the OMS and MS algorithms also present a $\sim$1 dB performance difference. This verifies that different decoding algorithms can provide another degree of freedom in flexible PON through their different computational complexities. It is noted that both the MS and OMS algorithms require no multiplication operations, as shown in Table 4. Therefore, the MS algorithm is preferred for ONUs with a large power margin, while the OMS algorithm can be used for ONUs with a small power margin at the cost of only slightly increased computational complexity.

Fig. 5. Comparison of simulation results for LDPC codes with different code rates constructed according to Scheme 2 under decoding algorithms of (a) MS, (b) OMS, (c) vwMS and (d) RMS-2ME.

Fig. 6. Comparison of simulation results in Scheme 2 for LDPC decoding algorithms at different code rate of (a) 0.6774, (b) 0.7, (c) 0.7241, (d) 0.75, (e) 0.7778, and (f) 0.8077.

3. Experimental setup

The experimental setup of the coherent optical communication system is described in Fig. 7. First, the output of a 1550 nm external cavity laser (ECL) with a linewidth of approximately 100 kHz is modulated by the waveform generated by a four-channel arbitrary waveform generator (AWG, Keysight M8194A). According to the length of the LDPC codes constructed from Scheme 1, a frame of 16QAM symbols with a length of 2112 is generated; 12 frames are then combined into a fixed-length sequence of 25344 symbols for each polarization. The 25 GBaud 16QAM signal is then pre-equalized and resampled to 120 GSa/s to match the output sampling rate of the AWG. Considering the code rates of the RC-LDPC codes in Scheme 1 and Scheme 2, the net data rate is varied between 135.5 Gb/s and 169.7 Gb/s; a wider net data rate range could also be achieved by changing the modulation format at different data rates. The electrical signal is then converted to an optical signal via a dual-polarization IQ modulator. The optical signal is amplified by an erbium-doped fiber amplifier (EDFA) and then propagated through a 48-km standard single-mode fiber (SSMF). The optical signal subsequently passes through a variable optical attenuator (VOA) to adjust the received optical power level. At the receiver side, the optical signal is received by an integrated coherent receiver, where an ECL is employed as the local oscillator (LO), and the four outputs of the coherent receiver are sampled by a real-time digital storage oscilloscope (DSO, Teledyne LeCroy LabMaster 10 Zi-A) with a sampling rate of 80 GSa/s. Finally, offline digital signal processing (DSP) is applied to the received signals. As described in Fig. 7, the DSP consists of chromatic dispersion (CD) compensation, down-conversion, time synchronization, training-sequence-aided frequency offset compensation, decision-directed least mean square (DD-LMS) equalization, QAM demapping, and FEC decoding [31].
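The quoted net data rate range follows from the line rate and the extreme code rates of the two schemes; the arithmetic (naming is ours, and any framing overhead beyond the code rate is ignored) is:

```python
# Net data rate sanity check: a 25 GBaud DP-16QAM signal carries
# 25e9 baud x 4 bits/symbol x 2 polarizations = 200 Gb/s line rate,
# and the net rate scales with the code rate.
line_rate = 25e9 * 4 * 2                      # 200 Gb/s

r_min = 5376 / 7936   # lowest Scheme 2 code rate (0.6774)
r_max = 7168 / 8448   # highest Scheme 1 code rate (0.8485)
net_min = line_rate * r_min / 1e9
net_max = line_rate * r_max / 1e9
print(round(net_min, 1), round(net_max, 1))   # -> 135.5 169.7
```

The lowest net rate is thus set by Scheme 2 (which reaches a lower minimum code rate) and the highest by Scheme 1.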

Fig. 7. Experimental setup for the coherent PON based on dual-polarization 25 GBaud 16QAM formats and RC-LDPC.

4. Result and discussion

The data collected from the experiment are all generated at a launch optical power of 12 dBm. We calculate the BER for the data collected without channel encoding, and the result is shown in Fig. 8. It can be seen from Fig. 8 that as the BER changes from 0.0036 to 0.0205, the received optical power varies from −22.1 dBm to −25.7 dBm, corresponding to an optical power budget from 34.1 dB to 37.7 dB. The constellation diagrams for received optical powers of −28.4 dBm and −23 dBm are also shown in Fig. 8.

Fig. 8. BER versus received optical power for 200 Gb/s coherent PON.

A set of LDPC codes derived from the same encoding matrix exhibits different performance when the same noise is added. This characteristic provides the opportunity to reuse experimental data for LDPC decoding [32–34]. The received optical power is increased in steps of 0.3 dB. Based on the designed LDPC code length, 160 frames with noise and 16QAM signal are collected at each value of received optical power. First, we extract the noise from these frames and number them from 1 to 160. We then generate a random number following a uniform distribution (1 to 160) and add the corresponding noise to the constructed RC-LDPC code in each frame for the subsequent demodulation and decoding process. Similar to the simulation settings, at least 100 error blocks are collected or up to 1 million blocks are calculated at each value of received optical power. Figures 9(a) to (d) show the BER performance of the RC-LDPC codes constructed by Scheme 1 under the four decoding algorithms, respectively. Referring to Fig. 9(a), at BER = $10^{-5}$ the lowest-rate LDPC codes require a received optical power 3.32 dB lower than the highest-rate codes for the MS decoding algorithm. As the rate increases, the minimum and maximum differences between two adjacent code rates are 0.39 dB and 1.21 dB, respectively. Similar comparative results are noted in Fig. 9(b) to (d) as well, which are consistent with the analysis from our previous simulations.

Fig. 9. Experimental results of Scheme 1: BER performance of the constructed LDPC codes under different decoding algorithms of (a) MS, (b) OMS, (c) vwMS, (d) RMS-2ME.

Figure 10 exhibits the comparisons among the different decoding algorithms at code rates of 0.6970, 0.7273, 0.7576, 0.7879, 0.8182 and 0.8485, respectively. As shown in Fig. 10(a), the OMS decoding algorithm shows 0.17 dB, 0.68 dB and 1.46 dB improvements over the RMS-2ME, vwMS and MS decoding algorithms at the BER threshold of $10^{-5}$, respectively. Throughout the results at the six code rates, the OMS and RMS-2ME decoding algorithms consistently maintain stable and excellent performance. For the vwMS and MS decoding algorithms, as the code rate increases, the performance of the vwMS algorithm becomes increasingly close to that of the MS algorithm. This indirectly reflects that the vwMS algorithm cannot properly handle the performance loss caused by the puncturing operation. The variable weight factor changes with the number of iterations, which means that 30 weight coefficients need to be set. For simplicity, we do not consider the impact of code rate changes, which may cause the serious performance degradation of the vwMS algorithm. From the results in Figs. 9 and 10, the combination of different code rates and decoding algorithms can lead to receiver sensitivity differences of up to 4.8 dB in Scheme 1, which provides a certain degree of freedom to achieve flexible operation in PON.

Fig. 10. Experimental results of Scheme 1: comparison of BER performance of LDPC decoding algorithms at different code rates of (a) 0.6970, (b) 0.7273, (c) 0.7576, (d) 0.7879, (e) 0.8182, and (f) 0.8485.

In the same way, we conduct experimental verification of the RC-LDPC codes constructed according to Scheme 2. Figures 11(a) to (d) exhibit the BER performance under the four decoding algorithms as the code rate varies. As shown in Fig. 11(a), based on the MS decoding algorithm, the lowest-rate LDPC codes require a received optical power 3.19 dB lower than the highest-rate codes at the BER threshold of $10^{-5}$. As the rate increases, the minimum and maximum differences between two adjacent code rates are 0.35 dB and 1.19 dB, respectively. The decoding results based on OMS, vwMS and RMS-2ME are shown in Fig. 11(b) to (d) as well, which also confirm our previous simulations.

Fig. 11. Experimental results of Scheme 2: BER performance of the constructed LDPC codes under different decoding algorithms of (a) MS, (b) OMS, (c) vwMS, and (d) RMS-2ME.

Figures 12(a) to (f) show the performance of the different algorithms at code rates of 0.6774, 0.7, 0.7241, 0.75, 0.7778 and 0.8077, respectively. The OMS algorithm still maintains superior and stable performance. At BER = $10^{-5}$, compared to the MS decoding algorithm, the OMS decoding algorithm achieves performance improvements of 1.42 dB, 1.37 dB, 1.42 dB, 1.56 dB, 1.45 dB, and 1.55 dB at the 6 code rates, respectively. The differences for the other algorithms are also marked in the figure. Considering both the code rates and decoding algorithms, the performance differences can be as large as 4.6 dB in Scheme 2. The experimental results are basically consistent with our previous simulation analysis, and the rate-compatible codewords constructed by the two schemes both demonstrate excellent waterfall region performance.

Fig. 12. Experimental results of Scheme 2: comparison of BER performance of LDPC decoding algorithms at different code rate of (a) 0.6774, (b) 0.7, (c) 0.7241, (d) 0.75, (e) 0.7778, and (f) 0.8077.

5. Conclusion

In this paper, two schemes of RC-LDPC codes with fixed and adjustable information bit lengths are constructed using puncturing and shortening. The PEXIT technique is applied to optimize the selection of the puncturing positions, and a threshold gain of 0.6568 dB is obtained when the number of punctured bits is 1792 (7$\times$256). We investigate four low-complexity LDPC decoding algorithms and perform simulation analysis on the constructed codes with the layered decoding strategy. The simulation results show a maximal 1.37 dB performance improvement after optimization of the puncturing position. The multiplier-free OMS decoding algorithm also shows a $\sim$1 dB improvement over the MS algorithm with only slightly increased computational complexity. Finally, a coherent optical communication system with a line rate of 200 Gb/s is designed for experimental verification. Our experimental results also prove that the two schemes of RC-LDPC codes can generate net data rates from 135.5 to 169.7 Gb/s with $\sim$3 dB performance differences. At the BER threshold of $10^{-5}$, the OMS decoding algorithm has the best performance, providing a $\sim$1.5 dB improvement over the MS decoding algorithm. The combination of LDPC codes at different code rates and different decoding algorithms can achieve a maximal receiver sensitivity difference of 4.8 dB among ONUs, which shows potential for future flexible 200G coherent PON systems.

Funding

National Key Research and Development Program of China (2022YFB2903201).

Acknowledgments

The work is supported by the National Key R&D Program of China under Grant 2022YFB2903201.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. L. A. Campos, Z. Jia, M. Xu, et al., “Coherent optics for access from P2P to P2MP,” in Optical Fiber Communications Conference and Exhibition (2022), pp. 1–3.

2. ITU-T G.9804.3 (2021) Amd.1, “50-Gigabit-capable passive optical networks (50G-PON): physical media dependent (PMD) layer specification amendment 1,” Tech. rep. (2023).

3. M. S. Faruk, X. Li, and D. Nesset, “Coherent passive optical networks: why, when, and how,” IEEE Commun. Mag. 59(12), 112–117 (2021).

4. F. Li, M. Yin, and Z. Luo, “Architecture and key digital signal processing techniques of a next-generation passive optical network,” J. Opt. Commun. Netw. 15(3), A82–A91 (2023).

5. M. S. Faruk, X. Li, and S. J. Savory, “Experimental demonstration of 100/200-Gb/s/λ PON downstream transmission using simplified coherent receivers,” in Optical Fiber Communications Conference and Exhibition (2022), pp. 1–3.

6. X. Li, M. S. Faruk, and S. J. Savory, “Bidirectional symmetrical 100 Gb/s/λ coherent PON using a simplified ONU transceiver,” IEEE Photonics Technol. Lett. 34(16), 838–841 (2022).

7. A. Hraghi, G. Rizzelli, and A. Pagano, “Analysis and experiments on C band 200G coherent PON based on alamouti polarization-insensitive receivers,” Opt. Express 30(26), 46782–46797 (2022).

8. M. X. Li, J. Li, and M. Luo, “400Gb/s real-time coherent PON based on a silicon photonic integrated transceiver,” Opt. Express 30(26), 47847–47855 (2022).

9. G. Li, S. Xing, and J. Jia, “Local oscillator power adjustment-based adaptive amplification for coherent TDM-PON with wide dynamic range,” J. Lightwave Technol. 41(4), 1240–1249 (2023).

10. I. B. Kovacs, M. S. Faruk, P. Torres-Ferrera, et al., “Simplified coherent optical network units for very-high-speed passive optical networks,” J. Opt. Commun. Netw. 16(7), C1–C10 (2024).

11. I. B. Kovacs, M. S. Faruk, A. Wonfor, et al., “Symmetric bidirectional 200 Gb/s/λ PON solution demonstrated over field installed fiber,” in Optical Fiber Communications Conference and Exhibition (2024), pp. 1–3, M1l.3.

12. R. Borkowski, M. Straub, and Y. Ou, “FLCS-PON – a 100 Gbit/s flexible passive optical network: concepts and field trial,” J. Lightwave Technol. 39(16), 5314–5324 (2021).

13. S. Xing, G. Li, and A. Yan, “Principle and strategy of using probabilistic shaping in a flexible coherent passive optical network without optical amplifiers,” J. Opt. Commun. Netw. 15(8), 507–517 (2023).

14. N. Kaneda, R. Zhang, and Y. Lefevre, “Experimental demonstration of flexible information rate PON beyond 100 Gb/s with probabilistic and geometric shaping,” J. Opt. Commun. Netw. 14(1), A23–A30 (2022).

15. S. Xing, G. Li, and A. Sun, “Demonstration of PS-QAM based flexible coherent PON in burst-mode with 300G peak rate and ultra-wide dynamic range,” J. Lightwave Technol. 41(4), 1230–1239 (2023).

16. J. Zhou, J. He, and X. Lu, “100G fine-granularity flexible-rate passive optical networks based on discrete multi-tone with PAPR optimization,” J. Opt. Commun. Netw. 14(11), 944–950 (2022). [CrossRef]  

17. W. Mo, J. Zhou, and G. Liu, “Simplified LDPC-assisted CNC algorithm for entropy-loaded discrete multi-tone in a 100G flexible-rate PON,” Opt. Express 31(4), 6956–6964 (2023). [CrossRef]  

18. M. Xu, H. Zhang, Z. Jia, et al., “Adaptive modulation and coding scheme in coherent PON for enhanced capacity and rural coverage,” in Optical Fiber Communications Conference and Exhibition (2021), pp. 1–3.

19. R. Borkowski, M. Straub, Y. Ou, et al., “World’s first field trial of 100 Gbit/s flexible PON (FLCS-PON),” in European Conference on Optical Communications (2020), pp. 1–4.

20. T. V. Nguyen and A. Nosratinia, “Rate-compatible short-length protograph LDPC codes,” IEEE Commun. Lett. 17(5), 948–951 (2013). [CrossRef]  

21. D. J. MacKay, “Good error-correcting codes based on very sparse matrices,” IEEE Trans. Inf. Theory 45(2), 399–431 (1999). [CrossRef]  

22. M. P. Fossorier, M. Mihaljevic, and H. Imai, “Reduced complexity iterative decoding of low-density parity check codes based on belief propagation,” IEEE Trans. Commun. 47(5), 673–680 (1999). [CrossRef]  

23. J. Chen, A. Dholakia, and E. Eleftheriou, “Reduced-complexity decoding of LDPC codes,” IEEE Trans. Commun. 53(8), 1288–1299 (2005). [CrossRef]  

24. F. Angarita, J. Valls-Coquillat, V. Almenar, et al., “Reduced-complexity min-sum algorithm for decoding LDPC codes with low error-floor,” IEEE Trans. Circuits Syst. I 61(7), 2150–2158 (2014). [CrossRef]  

25. S. Hemati, F. Leduc-Primeau, and W. J. Gross, “A relaxed min-sum LDPC decoder with simplified check nodes,” IEEE Commun. Lett. 20(3), 422–425 (2016). [CrossRef]  

26. “IEEE standard for ethernet amendment 9: Physical layer specifications and management parameters for 25 Gb/s and 50 Gb/s passive optical networks,” IEEE Std 802.3ca-2020 (Amendment to IEEE Std 802.3-2018 as amended by IEEE 802.3cb-2018, IEEE 802.3bt-2018, IEEE 802.3cd-2018, IEEE 802.3cn-2019, IEEE 802.3cg-2019, IEEE 802.3cq-2020, IEEE 802.3cm-2020, and IEEE 802.3ch-2020) pp. 1–267 (2020).

27. G. Liva and M. Chiani, “Protograph LDPC codes design based on EXIT analysis,” in Global Telecommunications Conference (IEEE, 2007), pp. 3250–3254.

28. S. Ten Brink, G. Kramer, and A. Ashikhmin, “Design of low-density parity-check codes for modulation and detection,” IEEE Trans. Commun. 52(4), 670–678 (2004). [CrossRef]  

29. A. Darabiha, A. Carusone, and F. Kschischang, “A bit-serial approximate min-sum LDPC decoder and FPGA implementation,” in International Symposium on Circuits and Systems (IEEE, 2006), 4 pp.

30. S. Hemati and A. H. Banihashemi, “Dynamics and performance analysis of analog iterative decoding for low-density parity-check (LDPC) codes,” IEEE Trans. Commun. 54(1), 61–70 (2006). [CrossRef]  

31. M. S. Faruk and S. J. Savory, “Digital signal processing for coherent transceivers employing multilevel formats,” J. Lightwave Technol. 35(5), 1125–1141 (2017). [CrossRef]  

32. N. Stojanović, Y. Zhao, and D. Chang, “Reusing common uncoded experimental data in performance estimation of different FEC codes,” IEEE Photonics Technol. Lett. 25(24), 2494–2497 (2013). [CrossRef]  

33. B. Chen, Y. Lei, S. van der Heide, et al., “First experimental verification of improved decoding of staircase codes using marked bits,” in Optical Fiber Communications Conference and Exhibition (2019), pp. 1–3.

34. W. Fang, B. Chen, Y. Lei, et al., “Experimental demonstration of neural network-based soft demapper for long-haul optical transmission,” in Opto-Electronics and Communications Conference (2023), pp. 1–4.

Figures (12)

Fig. 1. Method of the flexible LDPC encoder under the IEEE 802.3ca standard.
Fig. 2. Simulation comparison of two puncturing strategies with 6 different LDPC codes constructed by Scheme 1 and the OMS decoding algorithm. The dashed line represents results of the method in [12], and the solid line represents results of this method.
Fig. 3. Comparison of simulation results for LDPC codes with different code rates constructed according to Scheme 1 under decoding algorithms of (a) MS, (b) OMS, (c) vwMS, and (d) RMS-2ME.
Fig. 4. Comparison of simulation results in Scheme 1 for LDPC decoding algorithms at code rates of (a) 0.6970, (b) 0.7273, (c) 0.7576, (d) 0.7879, (e) 0.8182, and (f) 0.8485.
Fig. 5. Comparison of simulation results for LDPC codes with different code rates constructed according to Scheme 2 under decoding algorithms of (a) MS, (b) OMS, (c) vwMS, and (d) RMS-2ME.
Fig. 6. Comparison of simulation results in Scheme 2 for LDPC decoding algorithms at code rates of (a) 0.6774, (b) 0.7, (c) 0.7241, (d) 0.75, (e) 0.7778, and (f) 0.8077.
Fig. 7. Experimental setup for the coherent PON based on dual-polarization 25 GBaud 16QAM formats and RC-LDPC.
Fig. 8. BER versus received optical power for the 200 Gb/s coherent PON.
Fig. 9. Experimental results of Scheme 1: BER performance of the constructed LDPC codes under decoding algorithms of (a) MS, (b) OMS, (c) vwMS, and (d) RMS-2ME.
Fig. 10. Experimental results of Scheme 1: comparison of BER performance of LDPC decoding algorithms at code rates of (a) 0.6970, (b) 0.7273, (c) 0.7576, (d) 0.7879, (e) 0.8182, and (f) 0.8485.
Fig. 11. Experimental results of Scheme 2: BER performance of the constructed LDPC codes under decoding algorithms of (a) MS, (b) OMS, (c) vwMS, and (d) RMS-2ME.
Fig. 12. Experimental results of Scheme 2: comparison of BER performance of LDPC decoding algorithms at code rates of (a) 0.6774, (b) 0.7, (c) 0.7241, (d) 0.75, (e) 0.7778, and (f) 0.8077.

Tables (5)

Table 1. Comparative analysis of the decoding thresholds with different puncturing positions
Table 2. List of LDPC code variants from Scheme 1
Table 3. List of LDPC code variants from Scheme 2
Algorithm 1. MS-based layered decoding process for LDPC
Table 4. Complexity analysis of each layer in each iteration
