
Experimental demonstration of fronthaul flexibility for enhanced CoMP service in 5G radio and optical access networks

Open Access

Abstract

The RAN architecture for mobile 5G and beyond is undergoing a fundamental evolution that brings optics into the radio world. Fronthaul is a new segment that leverages the advantages of optical communication for RAN transport. However, the current fronthaul architecture provides only a fixed connection between an RRH and a BBU, which leads to inefficient resource utilization. In this paper, we focus on fronthaul flexibility, which allows “any-RRH to any-BBU” connections. In particular, we consider a CoMP service and discuss how a flexible optical fronthaul helps to improve its performance. To achieve this goal, we propose an SDN-enabled orchestration for coordinating radio and optical access networks. Under this unified control, agile RRH-BBU mapping can be achieved through lightpath reconfiguration. To further verify the benefits of flexibility, we experimentally demonstrate the CoMP service on the cloud radio over flexible optical fronthaul (CRoFlex) testbed. Experimental results demonstrate that the proposed SDN-enabled flexible optical fronthaul can improve CoMP performance by optimizing the RRH-BBU mapping.

© 2017 Optical Society of America

1. Introduction

With the spread of virtual reality, mobile online gaming, and the Internet of Things (IoT), fifth-generation (5G) mobile communication is heading towards higher data rates, lower end-to-end latency, and lower cost and power consumption [1, 2]. The cloud radio access network (CRAN) has been proposed to achieve these goals [3]. A major motivation of CRAN is to decouple the baseband processing functions, called the baseband unit (BBU), from the conventional base station (BS). The BS is then reduced to radio transmitting and receiving functions only, called the remote radio head (RRH). The BBUs are centralized in a BBU pool and implemented on a general-purpose platform that can be flexibly configured for virtualization. The RRHs are simple, compact, and easy to operate, and can be placed close to mobile users for ubiquitous access.

Fronthaul is a new segment of the CRAN that connects the BBU and the RRH for data transmission [4]. Typically, a BBU communicates with an RRH via a common public radio interface (CPRI) to exchange digitized radio signals. CPRI transmission requires the fronthaul to support high bandwidth and low latency, and advanced optical access technologies are used to satisfy these requirements. One promising solution is the time and wavelength division multiplexed passive optical network (TWDM-PON) [5–9], which combines TDM and WDM to provide cost-effective broadband access. To use the fronthaul bandwidth efficiently, the authors of [10, 11] proposed a data compression scheme for the CPRI signal and evaluated it on a TDM-PON based testbed. Another method to increase the fronthaul capacity is optical spatial division multiplexing (SDM), which utilizes multicore fiber to provide a multiple-input multiple-output (MIMO) optical fronthaul [12]. The optical MIMO processing for multicore fiber can be jointly optimized with the radio MIMO processing [13]. Besides bandwidth, fronthaul latency is also an important factor. The low-latency requirement stems mainly from the wireless signaling exchanged between the RRH and BBU over the fronthaul, such as hybrid automatic repeat request (HARQ) [14]. The round-trip time budget left for the fronthaul is under 500 µs, which consists of light propagation delay and transport processing delay [15]. The transport processing delay can be optimized by designing the optical fronthaul system for the low-latency requirement [15–19, 22]. Some works focused on the encapsulation and queuing delays for the CPRI signal [15], and efficient packetization methods were proposed for an Ethernet-based optical fronthaul [16, 17]. On the other hand, to reduce the grant time delay for upstream transmission in the TDM-PON, the authors of [18, 19] proposed a mobile dynamic bandwidth allocation (M-DBA) scheme that cooperates with mobile scheduling to achieve low latency. The effectiveness of the above solutions in reducing the bandwidth demand is limited by an intrinsic shortcoming of CPRI: the CPRI data rate depends on the installation of an RRH (e.g., the number of antennas), not on the actual mobile traffic load. Therefore, the bandwidth demand will grow dramatically as antenna arrays become larger and larger. To deal with this issue, the next generation fronthaul interface (NGFI) was proposed by redefining the functional split between the BBU and the RRH [20, 21]. The NGFI supports a user-dependent data rate, where the fronthaul bandwidth varies with the mobile traffic load. Also, by moving some baseband processing functions (e.g., functions below the low MAC layer) back to the RRH, the HARQ can be processed locally in the RRH, which reduces the signaling transmission delay. With this functional split, a mobile-PON architecture was proposed to achieve a high-efficiency, low-latency mobile fronthaul [22].
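To make the CPRI scaling problem concrete, the following back-of-the-envelope sketch (our illustration, using commonly cited 20-MHz LTE CPRI parameters rather than figures from this paper) shows how the line rate grows with the antenna count while remaining independent of the actual user load:

```python
def cpri_rate_bps(n_antennas: int,
                  sample_rate_hz: float = 30.72e6,    # 20-MHz LTE I/Q sample rate
                  bits_per_sample: int = 15,          # per I and per Q component
                  control_overhead: float = 16 / 15,  # CPRI control words
                  line_coding: float = 10 / 8):       # 8b/10b line encoding
    """CPRI line rate scales with antenna count, not with traffic load."""
    payload = sample_rate_hz * 2 * bits_per_sample * n_antennas  # I + Q
    return payload * control_overhead * line_coding

for n in (1, 2, 8, 64):
    print(f"{n:3d} antennas -> {cpri_rate_bps(n) / 1e9:7.2f} Gbps")
# 1 antenna ~ 1.23 Gbps; 64 antennas ~ 78.6 Gbps, regardless of user load.
```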

Most previous studies have focused on the bandwidth and latency problems of fronthaul. However, very few studies have considered fronthaul flexibility, which is also an important feature. The current fronthaul architecture presents a fixed connection between the BBU and the RRH, which leads to inefficient resource utilization. A flexible fronthaul allows “any-RRH to any-BBU” connections through reconfigurable lightpaths [23, 24]. In this paper, we focus on fronthaul flexibility and discuss how a flexible optical fronthaul helps to improve wireless performance. In particular, we consider a coordinated multipoint (CoMP) service as an enhanced wireless service enabled by flexible optical fronthaul. One of the problems of the CoMP service is data exchange, where massive signaling and raw radio samples are shared between BBUs for the purpose of coordination. In our previous work [25], we proposed a lightpath reconfiguration algorithm that reassociates coordinated RRHs with a single BBU by using a minimum-cut graph. While the earlier work focused on the problem statement and algorithm design, this paper is a follow-up work in which the proposed algorithm is demonstrated on a software-defined networking (SDN) enabled radio and optical access network testbed. The experimental results verify the feasibility of the proposed architecture.

The rest of this paper is organized as follows. Section 2 introduces the need for flexibility in 5G optical fronthaul. Section 3 describes how the CoMP service can be improved through a reconfigurable fronthaul. Section 4 proposes enabling technologies for radio and optical orchestration. Section 5 presents the experimental setup and results for the CoMP service over flexible optical fronthaul. Section 6 concludes the paper.

2. Flexibility for 5G optical fronthaul

The ever-growing number of antennas requires the fronthaul to support massive connections [26], where each connection can be dynamically set up and released on demand. Additionally, the NGFI data rate is user-dependent, which requires the fronthaul to support flexible bandwidth allocation. However, the current fronthaul architecture only supports point-to-point connections with a constant data rate (e.g., CPRI), which is not suitable for future demands. In this section, we discuss the need for fronthaul flexibility from the following two aspects.

2.1. Agile RRH-BBU mapping

Currently, the RRH-BBU mapping in the CRAN is usually configured offline and cannot be changed dynamically according to network conditions. Figure 1(a) shows a fixed-connection scenario. This rigid connection leads to inefficient resource utilization and service degradation. First, mobile traffic shows a tidal-effect characteristic, in which the RRH load fluctuates over time and area. When the RRH loads are low during some periods, the BBU utilizations drop, e.g., to 30% and 45% as shown in Fig. 1(a). Second, reliability is becoming an important issue in the fronthaul. Since an RRH-BBU pair has a one-to-one connection, dynamic link recovery cannot be guaranteed in case a BBU fails. Third, some wireless services (e.g., handover and CoMP) need a group of RRHs’ data to be jointly processed for better mobility performance. Unfortunately, the rigid RRH-BBU mapping limits co-processing among the RRHs, which indirectly lowers the wireless service performance.

Fig. 1 Comparison of fixed and flexible optical fronthaul. (a) Fixed fronthaul: one-to-one connection with constant data rate; (b) flexible fronthaul: any-to-any connection with variable data rate.

A flexible fronthaul should support any-to-any connections between the BBUs and the RRHs. These connections are not static but can be changed dynamically according to network conditions. With the benefits of flexibility, an RRH can choose a proper BBU for a specific purpose. The enabling technologies for fronthaul flexibility are reconfigurable optical devices. Hybrid electrical and optical cross-connects can provide a high degree of flexibility at each fronthaul node, as used in metro optical networks (e.g., Ethernet over WDM ring or mesh networks). Besides, tunable lasers and filters can also improve fronthaul flexibility, as used at the endpoints of optical access networks (e.g., wavelength-tunable PON). Figure 1(b) shows a flexible connection scenario, where Alice’s and Bob’s RRHs share the same BBU, which improves its utilization to 75%.
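As a toy illustration of how agile mapping raises utilization (this is our own sketch, not the remapping algorithm used later in the paper), a simple first-fit consolidation reproduces the Fig. 1 numbers, packing the 30% and 45% loads onto one BBU at 75%:

```python
def consolidate(rrh_loads, bbu_capacity=1.0):
    """First-fit-decreasing packing of RRH loads onto as few BBUs as possible."""
    bbus = []  # each entry is the summed load on one BBU
    for load in sorted(rrh_loads, reverse=True):
        for i, used in enumerate(bbus):
            if used + load <= bbu_capacity:
                bbus[i] += load
                break
        else:
            bbus.append(load)  # no BBU has room; open a new one
    return bbus

# Fig. 1-style example: two lightly loaded BBUs at 30% and 45%
print(consolidate([0.30, 0.45]))  # -> [0.75]: one BBU at 75%, one freed
```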

2.2. Elastic fronthaul bandwidth allocation

Compared with the CPRI-based fronthaul, the NGFI decouples the interface traffic from the number of antennas, so the data rate varies with the mobile load. Since the NGFI has diverse bandwidth demands, network operators expect to assign a “just-right-size” lightpath to each connection to improve fronthaul transmission efficiency. Figure 1(b) shows how lightpaths of different sizes can accommodate variable NGFI data rates. The NGFI can significantly improve bandwidth utilization, but the fronthaul needs to support flexible bandwidth switching capability. Elastic bandwidth (wavelength) allocation algorithms should be studied, and these algorithms should jointly consider the radio and optical perspectives.

Generally, there are two ways to accommodate variable data rates. One solution is to exploit the packet-based characteristic to multiplex multi-rate NGFI signals; this requires the fronthaul node to support electrical processing. The other is to use flexible grid technology, which evolves the traditional International Telecommunication Union (ITU) grid toward high flexibility with fine-grained spectrum slots (e.g., 12.5 GHz vs. 50 GHz or 100 GHz) [27]. The term flexibility here refers to the ability of a network to dynamically adjust optical resources, such as the lightpath bandwidth, transponder, and modulation format, according to the requirement of each connection. The flexible grid is an all-optical solution enabled by a bandwidth-variable wavelength selective switch (BV-WSS) that filters variable spectral regions [28].
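The flexible-grid arithmetic can be sketched with the ITU-T G.694.1 granularities (12.5-GHz slot-width unit, 6.25-GHz central-frequency step); the helper below is our illustration of assigning a “just-right-size” slot, not code from the testbed:

```python
import math

SLOT_GHZ = 12.5          # flexible-grid slot-width granularity (ITU-T G.694.1)
CENTER_STEP_GHZ = 6.25   # central-frequency granularity

def flexgrid_slot(required_ghz: float, n: int = 0):
    """Smallest flexible-grid slot fitting a demand; n is the grid index."""
    m = math.ceil(required_ghz / SLOT_GHZ)            # number of 12.5-GHz units
    center_thz = 193.1 + n * CENTER_STEP_GHZ / 1000   # nominal central frequency
    return center_thz, m * SLOT_GHZ

# A "just-right-size" lightpath: a 30-GHz demand gets a 37.5-GHz slot,
# versus a full 50-GHz fixed-grid channel.
print(flexgrid_slot(30.0))   # -> (193.1, 37.5)
```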

3. Enhanced CoMP service over reconfigurable optical fronthaul

In this section, we propose the flexible optical fronthaul as a service for enhanced CoMP (eCoMP). First, we describe the concepts of intra-BBU CoMP and inter-BBU CoMP, classified by their RRH-BBU mappings. Then we propose to change the mappings by reconfiguring the lightpaths between the RRHs and BBUs, which improves the CoMP performance (i.e., maximizes the intra-BBU CoMP ratio).

3.1. Concepts of intra-BBU CoMP and inter-BBU CoMP

5G mobile communication is deployed with ultra-dense cells, but cell-edge users may suffer severe inter-cell interference (ICI). CoMP is an efficient method to mitigate ICI and improve cell throughput. With the help of CoMP, several geographically adjacent RRHs jointly process/transmit as a single antenna system that serves the cell-edge users [29]. According to the BBU that performs the coordination, we classify the CoMP service into intra-BBU CoMP and inter-BBU CoMP. Figure 2(a) shows coordinated RRHs (RRH3 and RRH4) connected to the same BBU (BBU3); we call this kind of CoMP intra-BBU CoMP because co-processing is done in a single BBU only. On the other hand, when coordinated RRHs (RRH1 and RRH2) connect to multiple BBUs (BBU1 and BBU2), we call it inter-BBU CoMP. Inter-BBU CoMP needs to exchange shared baseband samples and channel state information (CSI) over an X2 interface, which requires large backhaul bandwidth [30]. Additionally, the CSI is time-sensitive information that becomes outdated in a short time (milliseconds), which places stringent latency requirements on backhaul transmission [31].
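The classification can be stated compactly: a coordinated RRH set is intra-BBU CoMP exactly when all of its members map to one BBU. A minimal sketch (our illustration, mirroring the Fig. 2 example):

```python
def classify_comp(comp_sets, rrh_to_bbu):
    """Label each coordinated RRH set as intra-BBU or inter-BBU CoMP."""
    labels = {}
    for comp_set in comp_sets:
        bbus = {rrh_to_bbu[rrh] for rrh in comp_set}  # distinct serving BBUs
        labels[comp_set] = "intra-BBU" if len(bbus) == 1 else "inter-BBU"
    return labels

mapping = {"RRH1": "BBU1", "RRH2": "BBU2", "RRH3": "BBU3", "RRH4": "BBU3"}
print(classify_comp([("RRH1", "RRH2"), ("RRH3", "RRH4")], mapping))
# {('RRH1', 'RRH2'): 'inter-BBU', ('RRH3', 'RRH4'): 'intra-BBU'}
```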

Fig. 2 CoMP in the flexible optical fronthaul networks. (a) CoMP before lightpath reconfiguration; (b) CoMP after lightpath reconfiguration.

Even though the CRAN shortens the physical distance between baseband processing entities (e.g., BBU cards), information still needs to be exchanged through a backplane Ethernet switch in the BBU pool [25, 32]. As the RAN develops toward dense cells, the complexity of the hardware and control software in the switch will grow significantly. On the one hand, the increasing bandwidth will consume the switch port resources inside the BBU pool, which can lead to highly complex BBU pool designs. On the other hand, the switching latency incurred at the switch ports will negatively impact the performance of the CoMP service. Therefore, any latency or bandwidth issue on this backhaul exchange will degrade the wireless service performance.

3.2. Lightpath reconfiguration for eCoMP

Inter-BBU CoMP causes extra low-latency data exchanges between the BBUs. Therefore, mobile operators would like to lower the inter-BBU CoMP ratio (or increase the intra-BBU CoMP ratio) as much as possible to improve the CoMP performance. The eCoMP exploits the fronthaul flexibility by dynamically reconfiguring lightpaths to reassociate coordinated RRHs (connected to different BBUs) with a single BBU. We tear down the lightpath between RRH1 and BBU1 and reassociate RRH1 with BBU2, as shown in Fig. 2(b). After that, the coordinated RRHs (RRH1 and RRH2) connect to a single BBU. The bandwidth and latency for transmitting the shared CoMP data between BBU1 and BBU2 can be reduced because the coordination is processed within one BBU only.

However, lightpath reconfiguration causes BBU load migration. For example, since RRH1 is reassociated with BBU2 in Fig. 2(b), the baseband processing resources for the cell-inner user have to be migrated from BBU1 to BBU2. BBU load migration may incur service interruption, which introduces delays for users and thus degrades the quality of service. How to balance the tradeoff between lightpath reconfiguration and BBU load migration is therefore important. Our previous work proposed an auxiliary-graph-based lightpath reconfiguration algorithm, which exploits a minimum-cut graph to solve the RRH-BBU mapping problem [25]. The proposed algorithm achieves a high intra-BBU CoMP ratio and low BBU load migration, which is also verified on our testbed in Section 5.
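To convey the minimum-cut intuition behind the algorithm of [25] (the actual algorithm builds a richer auxiliary graph and respects capacity constraints), the following toy sketch, using the networkx library, weighs migration cost against coordination traffic with an s-t minimum cut; all edge weights are invented for illustration:

```python
import networkx as nx  # pip install networkx

def add_link(g, u, v, cap):
    """Undirected capacity, modeled as a pair of directed edges."""
    g.add_edge(u, v, capacity=cap)
    g.add_edge(v, u, capacity=cap)

G = nx.DiGraph()
add_link(G, "BBU1", "RRH1", 1.0)   # cost of migrating RRH1's load off BBU1
add_link(G, "BBU2", "RRH2", 1.0)   # cost of migrating RRH2's load off BBU2
add_link(G, "RRH1", "RRH2", 5.0)   # coordination traffic if the pair is split

cut_value, (s_side, t_side) = nx.minimum_cut(G, "BBU1", "BBU2")
print(cut_value)        # 1.0 -> cheaper to migrate RRH1 than to split the pair
print(sorted(t_side))   # ['BBU2', 'RRH1', 'RRH2']: RRH1 reassociates with BBU2
```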

4. Enabling technologies for radio and optical orchestration

The 5G RAN is a diverse network environment that integrates multiple resources, including radio, optical, and data processing resources. In this section, we propose an SDN-enabled orchestration for joint scheduling of these heterogeneous resources. Based on this orchestration, we design the control plane of our testbed, on which the eCoMP service is demonstrated.

4.1. Framework of SDN-enabled radio and optical orchestration

The framework of the SDN-enabled orchestration is shown in Fig. 3(a). Three planes, the application plane, control plane, and data plane, are interconnected via a northbound interface (NBI) and a southbound interface (SBI). In the data plane, each physical node (i.e., BBU, RRH, and optical transport node) is attached to an OpenFlow agent (OF-agent) that communicates with the controller through extended OpenFlow protocols (OFP). Radio and optical devices are controlled by a radio controller and an optical controller, respectively. In each controller, the protocol control (PC) module encodes/decodes extended OF messages, and the resource maintenance (RM) module collects and stores physical resource information. The controllers interact with the orchestrator plugin to share information between them. The orchestrator plugin is the key module of the control plane and consists of three subparts. The policy injection module executes algorithms for different applications (e.g., the eCoMP algorithm). The integrated traffic engineering database (TED) stores virtual radio and optical resources abstracted from raw physical data. For example, the RRH-BBU relationships and the lightpaths between them are stored in a virtual radio resource (VRR) module and a virtual optical resource (VOR) module, respectively. The orchestrator engine is an execution module with two functions: radio resource mapping (RRM) is responsible for mapping RRH-BBU pairs, and lightpath calculation (LPC) is for lightpath provisioning. They interact with the optical and radio controllers to program the physical networks.

Figure 3(b) depicts the functional modules and their intercommunications in the orchestrator plugin. The eCoMP algorithm runs in the policy injection module. It is the key module for jointly processing heterogeneous resources: it calls RRM and LPC to fetch radio information (e.g., CoMP sets) and transport information (e.g., virtual topology) from VRR and VOR, respectively. Besides, the RRM and LPC are also responsible for communicating with the radio and optical controllers to change the network state (e.g., the RRH-BBU mappings and the lightpaths between them). The VRR and VOR abstract the raw physical network data reported by the RM modules. A key feature of the proposed orchestration framework is that it decouples the optimization policy from the control and management of network resources, which lets the network operator control each sub-network independently. In Fig. 3(b), the joint resource scheduling is performed only in the policy injection module, and the algorithm outputs are executed separately in the radio and optical sub-networks.
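A skeletal sketch of the orchestrator plugin’s data flow may help fix the ideas; the class and message names below echo Fig. 3(b), but the structure, stub controllers, and lightpath labels are our simplification, not the testbed code:

```python
from dataclasses import dataclass, field

class Controller:
    """Stand-in for a radio/optical SDN controller (hypothetical stub)."""
    def __init__(self, name):
        self.name = name
    def send(self, msg, **fields):
        print(f"{self.name} <- {msg} {fields}")

@dataclass
class IntegratedTED:
    """Integrated TED holding the abstracted (virtual) resource views."""
    vrr: dict = field(default_factory=dict)  # virtual radio resource: RRH -> BBU
    vor: dict = field(default_factory=dict)  # virtual optical resource: RRH -> lightpath

class OrchestratorEngine:
    """Execution module: RRM remaps RRH-BBU pairs, LPC provisions lightpaths."""
    def __init__(self, ted, radio_ctrl, optical_ctrl):
        self.ted, self.radio, self.optical = ted, radio_ctrl, optical_ctrl
    def rrm(self, rrh, bbu):                  # radio resource mapping
        self.ted.vrr[rrh] = bbu
        self.radio.send("OFP_BBU_Mod", rrh=rrh, bbu=bbu)
    def lpc(self, rrh, lightpath):            # lightpath calculation/provision
        self.ted.vor[rrh] = lightpath
        self.optical.send("OFP_TN_Mod", rrh=rrh, lightpath=lightpath)

def ecomp_policy(engine, comp_set):
    """Policy injection: pull a split CoMP set onto one BBU."""
    target = engine.ted.vrr[comp_set[0]]      # keep the first RRH's BBU (illustrative)
    for rrh in comp_set[1:]:
        if engine.ted.vrr[rrh] != target:
            engine.lpc(rrh, lightpath=f"lambda->{target}")  # placeholder label
            engine.rrm(rrh, target)

ted = IntegratedTED(vrr={"RRH1": "BBU1", "RRH2": "BBU2"})
engine = OrchestratorEngine(ted, Controller("radio-ctrl"), Controller("optical-ctrl"))
ecomp_policy(engine, ["RRH2", "RRH1"])        # reassociates RRH1 with BBU2
```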

Fig. 3 (a) The framework of SDN-enabled radio and optical orchestration; (b) the functional modules and their interactions in the orchestrator plugin.

4.2. Procedure of the orchestrator for eCoMP service

The interaction procedure of the orchestrator for eCoMP is shown in Fig. 4. When an eCoMP request arrives at the policy injection module, the eCoMP algorithm is executed to calculate proper RRH-BBU mappings and their corresponding lightpaths. The orchestrator then instructs the radio and transport controllers to reconfigure the network state according to the outputs of the eCoMP algorithm. The signaling procedure is detailed as follows, with a schematic sketch after the steps:

Fig. 4 Cooperation procedure of the orchestrator for the eCoMP service.

Step 1 (red): The orchestrator obtains the current radio resource information via an extended OFP message (OFP_RRH_Feature_Req/Rep) and calculates new RRH-BBU mappings that maximize the intra-BBU CoMP ratio.

Step 2 (yellow): The orchestrator obtains the current transport resource information via an extended OFP message (OFP_TN_Feature_Req/Rep) and calculates the corresponding lightpaths for the new RRH-BBU mappings.

Steps 3 & 4 (green & blue): The orchestrator instructs the radio and transport controllers to program the physical networks via extended OFP messages (OFP_BBU_Mod/Rep & OFP_TN_Mod/Rep) according to the result of the eCoMP algorithm.
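A schematic replay of the four steps follows; the message names match the paper’s extended OFP, while the controllers and payloads are invented stand-ins:

```python
class Stub:
    """Minimal stand-in controller; names echo Fig. 4, payloads are invented."""
    def __init__(self, name):
        self.name = name
    def request(self, msg):
        print(f"{msg} -> {self.name}")
        return {}
    def command(self, msg, arg):
        print(f"{msg} -> {self.name}: {arg}")

def ecomp_procedure(radio, transport):
    radio.request("OFP_RRH_Feature_Req")        # Step 1: fetch radio state
    mappings = {"RRH7": "BBU2"}                 #   (new mapping from eCoMP algorithm)
    transport.request("OFP_TN_Feature_Req")     # Step 2: fetch transport state
    lightpaths = {"RRH7": "lambda->BBU2"}       #   (lightpath for the new mapping)
    radio.command("OFP_BBU_Mod", mappings)      # Step 3: program radio network
    transport.command("OFP_TN_Mod", lightpaths) # Step 4: program optical network

ecomp_procedure(Stub("radio controller"), Stub("transport controller"))
```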

5. Experimental setup, results, and discussion

To validate how a flexible optical fronthaul can help improve wireless performance, we experimentally demonstrate the eCoMP service on the CRoFlex testbed. CRoFlex is an SDN-enabled CRAN testbed equipped with programmable optical transport devices, which aims to validate new architectures, innovative applications, and advanced algorithms for 5G radio and optical access networks [33, 34].

The experimental topology is shown in Fig. 5(a). There are twenty RRHs served by two BBU pools, where each BBU pool contains four BBU cards. The nodes surrounded by red lines are commercial devices or prototypes, and the other nodes are virtual radio and optical devices enabled by OF-agents. The experimental environment of CRoFlex is shown in Fig. 5(b). The fronthaul is based on a dense wavelength division multiplexing (DWDM) network, which supports eight wavelengths with 50 GHz spectrum spacing. For the fronthaul nodes, we have two kinds of SDN-controlled optical switches: four Finisar 1x9 BV-WSSs are deployed on the core side of the network to support flexible grid technology, and four self-developed 6x6 fast optical switches are deployed at the access side. The measured switching delay ranges from a few to dozens of milliseconds. Figure 5(c) shows the structure of a 4-port flexible grid enabled optical switch.

The fronthaul connects a commercial long term evolution (LTE) platform that consists of two RRHs (RRH7 and RRH8, emulating two coordinated RRHs), two BBU processing cards (BBU1 and BBU2), and one mobile evolved packet core (EPC) system. The fronthaul interface is based on CPRI at 6 Gbps, and each BBU card can accommodate three RRHs at the same time. Figure 5(d) illustrates the switching structure through which a BBU pool connects to the transport network. The backhaul shares the same underlying infrastructure with the fronthaul, but the backhaul traffic (the shared CoMP data) travels on different wavelengths. In Fig. 5(d), the backhaul link is connected to an Ethernet switch and aggregated to a 6x6 optical switch through electrical/optical conversion, while the fronthaul link is connected to the optical switch directly via fibers. The LTE platform supports a 20-MHz frequency band and the CoMP service. A network monitor can observe all the radio outputs, and a file transfer protocol (FTP) server is set up to emulate file downloads for a cell-edge user (the CoMP user).

The controller design is based on OpenDaylight, an open-source SDN platform, on which we can easily create an orchestrator plugin using the YANG model. The controller communicates with the programmable physical devices through extended OFP. An OF-agent is an OFP translator implemented with Open vSwitch; each OF-agent is associated with one network device. The controller and OF-agents run separately in different virtual machines on high-performance servers.

Fig. 5 (a) Experimental topology; (b) experimental environment of CRoFlex testbed; (c) structure of a 4-port flexible grid enabled optical switching node; (d) switching structure of a BBU pool.

We set RRH7 and RRH8 as a CoMP pair to serve a cell-edge user who downloads files from the FTP server. We assume that the CoMP service for cell-edge users is provided by two adjacent cells, and there is no cell-inner user in the experiment. The difference in reference signal received power (RSRP) between the coordinated RRHs is used for CoMP pair selection. In this experiment, we set the RSRP threshold to 6 dB according to Ref. [32]. Figure 6(a) shows the measured RSRP of the coordinated RRH7 and RRH8. At the initial stage, RRH7 and RRH8 are connected to BBU1 and BBU2, respectively, via different wavelengths, which constitutes an inter-BBU CoMP. Figure 6(b) shows the optical filter output, in which the wavelengths carry the radio signals of the coordinated RRHs.
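Pair selection can be sketched as a simple threshold test on RSRP differences; the 6-dB threshold follows the experiment, while the function and measurement values below are our hypothetical illustration:

```python
def comp_pairs(rsrp_dbm: dict, threshold_db: float = 6.0):
    """Pair the serving RRH with every RRH whose RSRP at the user lies
    within threshold_db of it (6 dB here, per the Ref. [32] setting)."""
    rrhs = sorted(rsrp_dbm, key=rsrp_dbm.get, reverse=True)
    serving = rrhs[0]  # strongest RRH serves the user
    return [(serving, r) for r in rrhs[1:]
            if rsrp_dbm[serving] - rsrp_dbm[r] <= threshold_db]

# Hypothetical measurements for a cell-edge user:
print(comp_pairs({"RRH7": -95.0, "RRH8": -98.5}))  # -> [('RRH7', 'RRH8')]
```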

Fig. 6 (a) Measured RSRP of coordinated RRHs; (b) optical filter output; (c) measured download rate of the cell-edge user during the experimental time; (d) BBU resource usage before and after lightpath reconfiguration (LR).

When an eCoMP request arrives at the orchestrator, the eCoMP algorithm is executed, and RRH7 is reassociated with BBU2 through lightpath reconfiguration. Figure 6(c) shows the measured download rate of the cell-edge user before and after lightpath reconfiguration. The experiment starts at T0 (20:43:27). A 22% average download-rate gain is achieved when a regular CoMP service is provided during the period from T1 (20:48:01) to T2 (20:51:21). According to the outputs of the eCoMP algorithm, the orchestrator reconfigures the lightpath at T2. The cell-edge user experiences a transition period from T2 to T3, in which the download rate first drops to the rate without CoMP; this is because the CoMP service is interrupted during the lightpath reconfiguration, so the cell-edge user is served by RRH8 only. The download rate then climbs up and stabilizes with intra-BBU CoMP after T3. The transition period is about a few minutes, depending on the RRH registration with the new BBU and the lightpath reconfiguration.

Figure 6(d) shows the BBU resource usage measured before and after lightpath reconfiguration. Before the reconfiguration, the cell-edge user is served by inter-BBU CoMP, where RRH7 and RRH8 are connected to BBU1 and BBU2, respectively. Both BBU1 and BBU2 process CoMP data, which takes about 22% of a BBU card’s resources on each of them. After the reconfiguration, RRH7 is reassociated with BBU2, so the BBU2 usage increases to 29.25%, while the BBU1 usage decreases to 14.25%. Here, 14.25% of a BBU card’s resources is the minimum processing resource needed to keep a BBU alive without carrying any traffic load, so BBU1 could be turned off for energy saving.

The Wireshark captures of the extended OFP messages for the eCoMP service are shown in Figs. 7(a)-7(b). Figure 7(a) details the OFP_RRH_Feature_Req/Rep messages that report the CoMP information (step 1 in Fig. 4). From the extended field, we can see that RRH8 is connected to BBU2 via a wavelength of 1550.12 nm, and the reported CoMP traffic load is 100 Mbps. Figure 7(b) shows the OFP_TN_Mod/Rep message that reconfigures the lightpath. We also show the overall latency of the lightpath reconfiguration. In Fig. 7(c), the overall latency consists of three parts: the orchestrator plugin latency includes the software running time and algorithm processing time; the control plane latency is the OFP processing and propagation delay between the controller and OF-agents; the data plane latency is the hardware response delay of the fronthaul devices. We observe that the application/algorithm execution time is the major contributor to the overall latency because the TCP stack delay and software delay are large relative to the other latencies. The algorithm execution time under different network scales was presented in [25]. Possible ways to improve this latency are to design low-complexity schemes at the respective protocol layers; in addition, powerful computing capability and in-memory caching are effective ways to accelerate the processing. Note that this overall latency is not the end-to-end delay from a mobile user to an application server; rather, it reflects how quickly a network operator can change the state of the radio and optical access networks by using SDN.

Fig. 7 (a)-(b) Wireshark captures of extended OFP; (c) overall latency for lightpath reconfiguration.

The proposed eCoMP algorithm is further verified in the control plane of CRoFlex. The network topology is the same as in Fig. 5(a), but the commercial devices and prototypes are not attached this time. The fiber lengths between the nodes are randomly selected between 2.5 km and 3.5 km. We assume a fronthaul link can support eight wavelengths, and the capacity of a wavelength is 10 Gbps. The CPRI data rate for an RRH is 6 Gbps. There are eighty mobile users in total, comprising cell-edge users and cell-inner users that generate the same traffic load. The cell-edge users are randomly distributed in the topology, and each of them is served by two adjacent RRHs. As an input to the algorithm, about 60% of the cell-edge users are served by inter-BBU CoMP. We assume five backhaul routes with different distances are established between BBU1 and BBU2, and the backhaul traffic prefers the shorter paths among all available paths. We consider several factors that contribute to the inter-BBU communication latency: the light propagation delay is assumed to be 5 µs/km, and a 10-Gbps Ethernet switch placed on top of the BBU rack consumes about 5.2 µs [35]. Since the backhaul signals are carried directly over the WDM channels, there is an optical/electrical/optical (O/E/O) conversion delay (~15 µs) at each termination of the lightpath; a worked latency example is sketched after this paragraph. Figure 8 shows the simulation results before and after lightpath reconfiguration (LR) versus the number of cell-edge users. In Fig. 8(a), the eCoMP algorithm substantially decreases the inter-BBU CoMP ratio because the coordinated RRHs are reassociated with single BBUs. We also observe that the inter-BBU CoMP ratio grows slightly as the number of cell-edge users increases; this is because, with a large number of RRH-BBU pairs, the capacity of the fronthaul links becomes a barrier to lightpath reconfiguration. Figure 8(a) also shows the cell-inner traffic load migration ratio, defined as the migrated cell-inner traffic load divided by the total traffic load; LR causes cell-inner traffic load migration between the BBUs, and the migration ratio decreases as the number of cell-edge users increases. Figure 8(b) shows the latency improvement after LR. Since the coordinated RRHs are reassociated with single BBUs, the transmission delay for the shared CoMP data is reduced. As the number of cell-edge users grows, the latency reduction increases; this is because inter-BBU communication requires longer paths when more backhaul traffic exists in the network, which leads to a larger propagation delay. More details and analysis of the proposed algorithm are presented in our previous work [25].
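Putting these assumptions together, a one-way latency budget for the shared CoMP data can be computed as below (our sketch; whether the rack switch is traversed once or twice per route is our simplification):

```python
def inter_bbu_latency_us(route_km: float,
                         prop_us_per_km: float = 5.0,  # light propagation delay
                         switch_us: float = 5.2,       # 10-Gbps rack Ethernet switch [35]
                         oeo_us: float = 15.0):        # O/E/O delay per lightpath termination
    """One-way latency for shared CoMP data between two BBU pools,
    following the simulation assumptions stated in the text."""
    return route_km * prop_us_per_km + switch_us + 2 * oeo_us

# A 3-km backhaul route: 3*5 + 5.2 + 2*15 = 50.2 us; a longer 9-km detour
# (taken when shorter routes are occupied) costs 80.2 us.
print(inter_bbu_latency_us(3.0), inter_bbu_latency_us(9.0))
```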

Fig. 8 Simulation results. (a) Inter-BBU CoMP ratio and cell-inner traffic load migration ratio versus the number of cell-edge users; (b) latency reduction for eCoMP versus the number of cell-edge users.

6. Conclusions

High capacity and low latency are the main benefits of optical fronthaul, stemming from the nature of optics. However, the current fronthaul architecture presents a rigid connection between an RRH and a BBU, which leads to inefficient resource utilization for the massive connections of the upcoming 5G and beyond. Flexibility, a potential benefit enabled by programmable optical networking devices, is gradually influencing the development of the RAN transport network. Users are mobile, but the network is not mobile; it is flexible. A flexible optical fronthaul can help improve wireless service performance. This paper described the need for flexibility from the perspectives of agile RRH-BBU mapping and elastic fronthaul bandwidth allocation. We introduced a mobile use case (eCoMP) in which the flexible optical fronthaul performs as a service. The eCoMP was experimentally demonstrated on the SDN-enabled CRoFlex testbed. We verified the enhancement of CoMP performance through lightpath reconfiguration that minimizes inter-BBU CoMP, and the integrated radio and optical orchestrator was demonstrated in the experiment. With the benefits of flexibility, the future RAN could become truly open and virtualized while making resource utilization more efficient.

Funding

National Natural Science Foundation of China (No. 61501055); the Fundamental Research Funds for the Central Universities; the ZTE Industry-Academia-Research Cooperation Funds. A short summarized version of this work was presented at OFC 2017.

References and links

1. 5G PPP AWG, “View on 5G Architecture,” v. 1.0, Jul. 2016. https://5g-ppp.eu/.

2. N. Marchetti, “Towards 5th generation wireless communication systems,” ZTE Communications 13(1), 11–19 (2015).

3. A. Checko, H. L. Christiansen, Y. Yan, L. Scolari, G. Kardaras, M. S. Berger, and L. Dittmann, “Cloud RAN for mobile networks—a technology overview,” IEEE Commun. Surv. Tutor. 17(1), 405–426 (2015).

4. A. Pizzinat, P. Chanclou, F. Saliou, and T. Diallo, “Things you should know about fronthaul,” J. Lightwave Technol. 33(5), 1077–1083 (2015).

5. D. Iida, S. Kuwano, J. Kani, and J. Terada, “Dynamic TWDM-PON for mobile radio access networks,” Opt. Express 21(22), 26209–26218 (2013).

6. T. Pfeiffer, “Next generation mobile fronthaul and midhaul architecture [Invited],” J. Opt. Commun. Netw. 7(11), B38–B45 (2015).

7. B. Skubic, G. Bottari, A. Rostami, F. Cavaliere, and P. Öhlén, “Rethinking optical transport to pave the way for 5G and the networked society,” J. Lightwave Technol. 33(5), 1084–1091 (2015).

8. X. Liu and F. Effenberger, “Emerging optical access network technologies for 5G wireless,” J. Opt. Commun. Netw. 8(12), B70–B79 (2016).

9. J. I. Kani, S. Kuwano, and J. Terada, “Options for future mobile backhaul and fronthaul,” Opt. Fiber Technol. 26, 42–49 (2015).

10. N. Shibata, S. Kuwano, J. Terada, and H. Kimura, “Dynamic IQ data compression using wireless resource allocation for mobile front-haul with TDM-PON [Invited],” J. Opt. Commun. Netw. 7(3), A372–A378 (2015).

11. N. Shibata, T. Tashiro, S. Kuwano, N. Yuki, Y. Fukada, J. Terada, and A. Otaka, “Performance evaluation of mobile front-haul employing Ethernet-based TDM-PON with IQ data compression [Invited],” J. Opt. Commun. Netw. 7(11), B16–B22 (2015).

12. M. Morant and R. Llorente, “Experimental demonstration of LTE-A M×4×4 MIMO radio-over-multicore fiber fronthaul,” in Proceedings of Optical Fiber Communication Conference and Exhibition (OFC, 2017).

13. M. Morant, A. Macho, and R. Llorente, “On the suitability of multicore fiber for LTE-advanced MIMO optical fronthaul systems,” J. Lightwave Technol. 34(2), 676–682 (2016).

14. U. Dötsch, M. Doll, H. P. Mayer, F. Schaich, J. Segel, and P. Sehier, “Quantitative analysis of split base station processing and determination of advantageous architectures for LTE,” Bell Labs Tech. J. 18(1), 105–128 (2013).

15. N. P. Anthapadmanabhan, A. Walid, and T. Pfeiffer, “Mobile fronthaul over latency-optimized time division multiplexed passive optical networks,” in Proceedings of IEEE International Conference on Communications (ICC), Workshop on Backhaul Networks (2015).

16. D. Chitimalla, K. Kondepu, L. Valcarenghi, M. Tornatore, and B. Mukherjee, “5G fronthaul—latency and jitter studies of CPRI over Ethernet,” J. Opt. Commun. Netw. 9(2), 172–182 (2017).

17. M. K. Al-Hares, P. Assimakopoulos, S. Hill, and N. J. Gomes, “The effect of different queuing regimes on a switched Ethernet fronthaul,” in Proceedings of IEEE International Conference on Transparent Optical Networks (ICTON, 2016).

18. T. Tashiro, S. Kuwano, J. Terada, T. Kawamura, N. Tanaka, S. Shigematsu, and N. Yoshimoto, “A novel DBA scheme for TDM-PON based mobile fronthaul,” in Proceedings of Optical Fiber Communication Conference and Exhibition (OFC, 2014).

19. S. Hatta, N. Tanaka, and T. Sakamoto, “Feasibility demonstration of low latency DBA method with high bandwidth-efficiency for TDM-PON,” in Proceedings of Optical Fiber Communication Conference and Exhibition (OFC, 2017).

20. China Mobile Research Institute, “White Paper of Next Generation Fronthaul Interface,” v. 1.0, Oct. 2015. http://labs.chinamobile.com/cran/.

21. C.-L. I, Y. Yuan, J. Huang, S. Ma, C. Cui, and R. Duan, “Rethink fronthaul for soft RAN,” IEEE Commun. Mag. 53(9), 82–88 (2015).

22. S. Zhou, X. Liu, F. Effenberger, and J. Chao, “Mobile-PON: a high-efficiency low-latency mobile fronthaul based on functional split and TDM-PON with a unified scheduler,” in Proceedings of Optical Fiber Communication Conference and Exhibition (OFC, 2017).

23. N. Cvijetic, A. Tanaka, K. Kanonakis, and T. Wang, “SDN-controlled topology-reconfigurable optical mobile fronthaul architecture for bidirectional CoMP and low latency inter-cell D2D in the 5G mobile era,” Opt. Express 22(17), 20809–20815 (2014).

24. J. Zhang, Y. Ji, X. Xu, H. Li, Y. Zhao, and J. Zhang, “Energy efficient baseband unit aggregation in cloud radio and optical access networks,” J. Opt. Commun. Netw. 8(11), 893–901 (2016).

25. J. Zhang, Y. Ji, S. Jia, H. Li, X. Yu, and X. Wang, “Reconfigurable optical mobile fronthaul networks for coordinated multipoint transmission and reception in 5G,” J. Opt. Commun. Netw. 9(6), 489–497 (2017).

26. S. Han, C.-L. I, Z. Xu, Q. Sun, and H. Li, “Energy-efficient large-scale antenna systems with hybrid digital-analog beamforming structure,” ZTE Communications 13(1), 28–34 (2015).

27. M. Jinno, H. Takara, B. Kozicki, Y. Tsukishima, Y. Sone, and S. Matsuoka, “Spectrum-efficient and scalable elastic optical path network: architecture, benefits, and enabling technologies,” IEEE Commun. Mag. 47(11), 66–73 (2009).

28. Y. Ji, J. Zhang, Y. Zhao, X. Yu, J. Zhang, and X. Chen, “Prospects and research issues in multi-dimensional all optical networks,” Sci. China Inf. Sci. 59(10), 101301 (2016).

29. R. Irmer, H. Droste, P. Marsch, M. Grieger, G. Fettweis, S. Brueck, H. P. Mayer, L. Thiele, and V. Jungnickel, “Coordinated multipoint: concepts, performance, and field trial results,” IEEE Commun. Mag. 49(2), 102–111 (2011).

30. A. Müller and P. Frank, “Cooperative interference prediction for enhanced link adaptation in the 3GPP LTE uplink,” in Proceedings of IEEE Vehicular Technology Conference (VTC, 2010).

31. S. Brueck, L. Zhao, J. Giese, and M. A. Amin, “Centralized scheduling for joint transmission coordinated multi-point in LTE-advanced,” in Proceedings of ITG/IEEE Workshop on Smart Antennas (2010).

32. Next Generation Mobile Networks Alliance, “CoMP Evaluation and Enhancement,” https://www.ngmn.org/uploads/media/NGMN_RANEV_D3_CoMP_Evaluation_and_Enhancement_v2.0.pdf.

33. J. Zhang, Y. Ji, J. Zhang, R. Gu, Y. Zhao, S. Liu, K. Xu, M. Song, H. Li, and X. Wang, “Baseband unit cloud interconnection enabled by flexible grid optical networks with software defined elasticity,” IEEE Commun. Mag. 53(9), 90–98 (2015).

34. J. Zhang, H. Yu, Y. Ji, H. Li, X. Yu, Y. Zhao, and H. Li, “Demonstration of radio and optical orchestration for improved coordinated multi-point (CoMP) service over flexible optical fronthaul transport networks,” in Proceedings of Optical Fiber Communication Conference and Exhibition (OFC, 2017).

35. Siemens white paper, “Latency on a Switched Ethernet Network,” https://cache.industry.siemens.com/dl/files/587/94772587/att_113195/v1/94772587_ruggedcom_latency_switched_network_en.pdf.
