To mitigate the potential scalability issues of an OpenFlow-based control plane, a seamlessly integrated OpenFlow and Path Computation Element (PCE) control plane is proposed. In this architecture, the path computation function is formally decoupled from the controller, which can thus offload the task to one or more dedicated PCEs through an open, standard interface and protocol; the PCE, in turn, obtains its topology database from a dedicated dynamic topology server, which it accesses on a per-request basis. The overall feasibility and performance metrics of this integrated control plane are experimentally verified and quantitatively evaluated on a real IP over translucent Wavelength Switched Optical Network (WSON) testbed.
© 2013 OSA
OpenFlow , which allows operators to control the network using software running on a network operating system (e.g. NOX ) within an external controller, has recently been proposed and experimentally validated as a promising unified control plane (UCP) technique for future IP/Dense Wavelength Division Multiplexing (DWDM) multi-layer optical networks, since it provides the operator with satisfactory flexibility to control a network and is aligned with carriers' preferences given its simplicity and manageability [3–12].
However, scalability is a major issue for an OpenFlow-based UCP since, in a centralized architecture, the NOX has to perform both advanced path computations (e.g. impairment-aware routing and wavelength assignment, or IA-RWA) and OpenFlow-based signaling for path provisioning. Several previous studies have verified that IA-RWA computations in optical networks are CPU-intensive . Therefore, considering the potential deployment of an OpenFlow-based UCP, it is important to offload the NOX and thus enhance network scalability. Note that it is possible to add CPUs to the NOX to scale up its computation capability. However, this solution is not cost-efficient due to its low per-dollar performance: CPU prices rise quickly with the number of sockets, and hosting more CPUs in a single machine also requires a motherboard upgrade and additional memory, as investigated in .
Another straightforward solution to address this issue is to offload the IA-RWA computation to a dedicated Path Computation Element (PCE) . Besides the computation offload, this solution also brings advantages such as flexible path computation in multi-domain networks and ease of integration with Business and Operations Support Systems (BSS/OSS), thanks to the mature, well-defined and feature-complete PCE Communication Protocol (PCEP) . However, since the NOX/OpenFlow is responsible for lightpath setup/release, the PCE is not necessarily aware of the up-to-date network resource information (e.g. available wavelengths) and is therefore unable to compute an optimal path, especially when, given the optical technology constraints, it needs to take into account the wavelength continuity constraint (WCC). In previous work  the authors proposed a basic solution in which the PCE performed the routing function over a static topology, relying on the NOX to perform the wavelength assignment. In other words, the NOX is constrained to assigning a wavelength to the path provided by the PCE. An important limitation of that approach, since the PCE cannot take wavelength availability into account, is that there is no guarantee that an available wavelength can be allocated along the computed path, which may cause a very high blocking probability or a long processing delay (e.g. route recalculation).
In light of this, in this paper we propose a more efficient solution for the seamless and dynamic interworking between a NOX/OpenFlow controller and a PCE, by introducing a topology server. The PCE can then request the up-to-date topology from the server; in this work, the PCE obtains the topology at each PCEP request. The overall feasibility and efficiency of the proposed solution are experimentally verified on an IP/DWDM multi-layer network testbed with both control and data planes, while its performance metrics are quantitatively evaluated and compared with the previously reported alternative .
The rest of this paper is organized as follows. Section 2 proposes the solution for seamless OpenFlow and PCE interworking. Section 3 presents the experimental demonstration and performance evaluations of the proposed solution. Section 4 concludes this paper by summarizing our contributions.
2. OpenFlow/PCE integrated control plane
In this section, the network architecture is introduced first. Then, the proposed extensions to the OpenFlow protocol, the NOX and the PCE are presented. Finally, the procedure for dynamic end-to-end path provisioning using the proposed OpenFlow/PCE integrated control plane is described in detail.
2.1 Network architecture
Figure 1(a) shows the proposed network architecture, in which we consider an IP/DWDM multi-layer network. In the IP layer, the IP routers are enhanced to support the OpenFlow protocol, and are referred to as OpenFlow-enabled IP routers (OF-R) . A translucent wavelength switched optical network (WSON), with sparsely but strategically equipped 3R regenerators, is deployed in the optical layer. Such a translucent network is a promising paradigm for industrial deployment because extensive studies indicate it can provide an adequate trade-off between network cost and service provisioning performance . An OpenFlow/PCE integrated control plane, as detailed next, is deployed to control this multi-layer network through the OpenFlow protocol, with its functional modules shown in Fig. 1(b).
2.2 OpenFlow extensions
The OpenFlow protocol extensions are based on the OpenFlow circuit switch addendum v0.3 . To control a translucent WSON through the extended OpenFlow protocol, OpenFlow-enabled optical switching nodes (e.g. photonic cross-connects, or PXC), transponders, and 3R regenerators are required, referred to as OF-PXC, OF-TPND, and OF-REG respectively. Figures 2(a), 2(b) and 2(c) show the functional modules of the OF-PXC, OF-TPND, and OF-REG, respectively. For the OF-PXC, an OpenFlow agent is introduced, which communicates with the NOX through the extended OpenFlow protocol. In addition, based on the information in an extended OpenFlow Flow Mod message, which carries the cross-connection description such as input/output ports and wavelength, the OpenFlow agent can send a vendor-specific command (e.g. Transaction Language 1, or TL1) to control the PXC hardware in order to set up or release an optical cross-connection, as detailed in [8, 9, 18]. The OF-TPND/REG has been demonstrated in . The transponder or regenerator groups are connected to an extended OpenFlow agent, which receives the extended OpenFlow Flow Mod messages from the NOX carrying the TPND/REG control information (e.g. port ID, wavelength, etc.), and then converts this information into TL1 commands in order to control the transponders or regenerators.
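As a rough illustration, the agent's translation step can be sketched as follows. The Flow Mod field names and the TL1 verb are purely hypothetical placeholders, not taken from the OpenFlow addendum or any vendor's TL1 dialect.

```python
def flow_mod_to_tl1(flow_mod: dict) -> str:
    """Map an extended Flow Mod cross-connection description to a TL1 command.

    Hypothetical sketch: the "ENT-CRS-WL" verb and the field names
    ("in_port", "out_port", "wavelength") are illustrative only.
    """
    return "ENT-CRS-WL::IN-{inp}-{wl},OUT-{outp}-{wl}:CTAG;".format(
        inp=flow_mod["in_port"],
        outp=flow_mod["out_port"],
        wl=flow_mod["wavelength"],
    )

# Example: cross-connect input port 1 to output port 3 on one wavelength.
cmd = flow_mod_to_tl1({"in_port": 1, "out_port": 3, "wavelength": "1550.12"})
```

The same translation pattern applies to the OF-TPND/OF-REG agents, with port ID and wavelength fields mapped to the corresponding transponder or regenerator commands.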
2.3 NOX extensions
In the proposed control plane, a new entity named a topology server is introduced. It is responsible for managing (i.e., gathering, storing and serving) the traffic engineering (TE) information of the network (including topology, wavelength and 3R regenerator availability) as well as information related to physical impairments (e.g. optical signal-to-noise ratio, or OSNR), as shown in Fig. 1(b). Although the proposed architecture is very flexible, in this work we proceed as follows: upon a successful path setup or release operation, the NOX updates the TE information (e.g. available wavelengths for each link) and automatically generates an XML file encoding the TED, as detailed next. The NOX then automatically sends the generated XML file to the topology server in a TED Update message, updating the TE information of the whole network. In addition, the NOX is extended with path computation client (PCC)  capabilities for communication with the PCE by means of the PCEP protocol. After an initial session handshake, the PCC may send a path computation request (PCReq) message to the PCE requesting an IA-RWA computation. The PCE replies with a path computation reply (PCRep) message containing either a path, composed of an explicit route object (ERO) along with the path attributes in the case of a successful IA-RWA computation, or a NOPATH object if a satisfying path could not be found. If either the NOX (PCC) or the PCE does not wish to keep the connection open, it ends the PCEP session with a Close message. After the IA-RWA computation is completed, the results are passed to the NOX, which controls the corresponding OF-PXCs, OF-TPNDs and OF-REGs to set up the lightpath using the aforementioned OpenFlow extensions.
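The XML-based TED generation can be sketched as follows. The element and attribute names are illustrative guesses based on the fields discussed in this paper (node ID, node OSNR, 3R availability; link ID, link OSNR, wavelength availability); the actual schema used in our implementation may differ.

```python
import xml.etree.ElementTree as ET

def build_ted_xml(nodes, links):
    """Encode the TED as XML: per-node ID, OSNR and free 3R regenerators;
    per-link ID, OSNR and wavelength availability (illustrative schema)."""
    root = ET.Element("ted")
    for n in nodes:
        node = ET.SubElement(root, "node", id=str(n["id"]))
        ET.SubElement(node, "osnr").text = str(n["osnr"])
        ET.SubElement(node, "regenerators").text = str(n["free_3r"])
    for ln in links:
        link = ET.SubElement(root, "link", id=str(ln["id"]))
        ET.SubElement(link, "osnr").text = str(ln["osnr"])
        # One flag per wavelength: "1" = available, "0" = occupied.
        ET.SubElement(link, "wavelengths").text = " ".join(
            "1" if free else "0" for free in ln["lambda_free"])
    return ET.tostring(root, encoding="unicode")

# A one-node, one-link toy TED with 3 wavelengths on the link.
ted = build_ted_xml(
    [{"id": 1, "osnr": 30.0, "free_3r": 2}],
    [{"id": "3-4", "osnr": 25.0, "lambda_free": [True, False, True]}])
```

The resulting string is what a TED Update message would carry to the topology server after each path setup or release.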
2.4 PCE extensions
Upon receiving a PCReq message, the extended PCE first sends a traffic engineering database request (TED Req) message to the topology server. In turn, the topology server replies with a TED reply (TED Rep) message, encoded in the eXtensible Markup Language (XML) format, which describes the up-to-date network information. Figure 3 shows selected parts of an XML file for TED encoding. For each node, the TED encoding includes the node ID, node OSNR, and 3R regenerator information. For each link, it contains the link ID, link OSNR, and wavelength availability information. Such a TED encoding file can be automatically generated by the NOX, as mentioned above. Since the PCE requests the latest topological information on a per-request basis, the server is referred to as a per-request-based dynamic topology server. The PCE performs the IA-RWA computation using a translucent-oriented Dijkstra algorithm with objective function  code 32768. This algorithm, detailed in our previous work , computes the path, the regeneration points and the wavelengths of each transparent segment, minimizing the path cost while fulfilling both the WCC and the OSNR requirements.
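To make the two constraints concrete, the feasibility test applied to one transparent segment (between consecutive 3R regeneration points) might look as follows. This is a simplified sketch, not the cited algorithm; in particular, the worst-link OSNR rule is a placeholder for a proper impairment-accumulation model.

```python
def check_transparent_segment(links, osnr_min):
    """Check WCC and OSNR feasibility of a transparent segment (no 3R inside).

    `links` is a list of (free_wavelength_set, osnr_db) tuples. The segment
    is feasible if at least one wavelength is free on every link (WCC) and
    the segment OSNR stays above `osnr_min`. The worst-link OSNR used here
    is a simplified placeholder, not the model of the cited algorithm.
    """
    common = set.intersection(*(wl for wl, _ in links))  # WCC: shared lambdas
    osnr = min(osnr for _, osnr in links)                # placeholder model
    return bool(common) and osnr >= osnr_min, common
```

When a candidate segment fails either test, the algorithm would attempt to place a 3R regenerator at an equipped node, which resets both the wavelength continuity and the OSNR budget for the next segment.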
2.5 Procedure for end-to-end path provisioning
The procedure for end-to-end path provisioning is shown in Fig. 4. When a new flow arrives at the ingress OpenFlow-enabled IP router (OF-R1) and does not match any existing entry in the flow table of OF-R1, OF-R1 forwards the first packet of this flow to the NOX, which requests a path computation from the PCE. After the initial session handshake, the NOX sends a PCReq message to the PCE through the PCC. In turn, the PCE sends a TED Req message to the topology server and obtains the latest copy of the TED for the IA-RWA computation. The IA-RWA result (the network path, the associated wavelengths and, if needed, the regeneration points) is returned to the NOX, which subsequently sets up an end-to-end path by controlling all the OF-Rs, OF-TPNDs, OF-PXCs, and OF-REGs along the computed path using the OpenFlow protocol, in sequential order starting from the ingress OF-R1. After that, the NOX sends a TED Update message to the topology server to update the TED.
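The sequence above can be summarized as a short Python sketch. The callables stand in for the PCC, the OpenFlow channel and the topology-server interface; all names are hypothetical, not actual NOX APIs.

```python
def provision_flow(first_packet, pcreq, flow_mod, ted_update):
    """Hypothetical NOX-side sequence mirroring the provisioning procedure.

    `pcreq`, `flow_mod` and `ted_update` are injected stand-ins for the
    PCC, the OpenFlow channel and the topology-server interface.
    """
    # 1. Packet In from the ingress OF-R triggers a path computation request.
    src, dst = first_packet["src"], first_packet["dst"]
    # 2. PCReq to the PCE; the PCE pulls the TED from the topology server.
    path = pcreq(src, dst)
    if path is None:               # NOPATH object: the request is blocked
        return False
    # 3. Sequential Flow Mods to every element along the path, ingress first.
    for element in path:
        flow_mod(element)
    # 4. TED Update so the topology server reflects the allocated resources.
    ted_update(path)
    return True
```

Note that step 4 runs only after all Flow Mods succeed, so the topology server never advertises resources for a path that was not actually provisioned.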
The detailed information encapsulated in the PCReq and PCRep messages, as well as in the OpenFlow Packet In and Flow Mod messages, is depicted in Fig. 5. The Packet In message encapsulates the source and destination addresses of the incoming flow, and the Flow Mod messages contain the input/output ports and the wavelength information used to control the OF-PXCs/OF-TPNDs/OF-REGs for lightpath provisioning. The PCReq message encapsulates the source and destination addresses as well as the objective function code, so that the PCE can compute the route using the translucent-oriented Dijkstra algorithm . The path computation results, including the ERO and the path attributes, are included in the PCRep message.
3. Experimental setup, results and discussions
We set up an IP over translucent WSON multi-layer network testbed comprising both control and data planes; the data plane of the optical layer was equipped with real hardware, including two transponders and four PXCs, as shown in Fig. 6. In the data plane, four OF-PXC nodes integrated with DWDM MUX/DEMUX optical filters were utilized. Two OTU2 (10.7 Gbps) based OF-TPNDs were attached to OF-PXC1 and OF-PXC2 respectively, and a shared OF-REG was deployed at OF-PXC4. Two OF-Rs were connected to the two transponders respectively. Each link carried 10 wavelengths. The link/node OSNR values (as defined in ) were statically configured as shown in Fig. 6. Both the NOX and the PCE were deployed on PCs with a 3.2 GHz CPU and 1 GB of memory. In Fig. 6, the number next to each network element is its identifier from the viewpoint of the NOX (i.e. the data_path_id in OpenFlow terminology).
In general, the topology server can be located either inside or outside the NOX. Figure 7 shows a performance comparison of the two cases, in terms of the CPU utilization of the NOX, when the flow request intervals are 2 seconds and 5 seconds respectively. It can be seen that co-locating the topology server with the NOX has a very limited negative effect on the load of the NOX. Therefore, it is better to deploy the topology server inside the NOX. When the topology server is integrated into the NOX, the NOX can update the TED locally, which reduces the overall processing latency. Table 1 shows the reduction in TED update processing time when the topology server is integrated with the NOX, compared with the case where the topology server is separate. As expected, we verified that the more complex the network topology, the greater the benefit obtained by integrating the topology server with the NOX.
Therefore, a co-located topology server was the retained deployment model. To verify the feasibility of the proposed solution, we first sent a flow from the Client to the Server (as shown in Fig. 6). Then, according to the procedure depicted in Fig. 4, path (1) was calculated by the PCE and provisioned by the NOX via the OpenFlow protocol, following the route shown in Table 2. Figure 8 shows the Wireshark capture of the message sequence. It can be seen that, since the XML file describing the TED is large, it is divided into several Transmission Control Protocol (TCP) packets for transmission from the topology server to the PCE. The message latency between a TED Req and a TED Rep was around 11.7 ms, and the overall path computation latency was around 16.4 ms, as shown in Fig. 8. Although absolute values may vary depending on hardware and optimization settings, we sent 100 successive requests for path (1) setup to obtain the average latency, which is more meaningful than the result of a single request. Figure 9(a) depicts the distribution of the message latency between a TED Req and a TED Rep, and Fig. 9(b) shows the distribution of the overall path computation latency for these repeated path (1) setup requests. We measured an average TED Req/TED Rep message latency of 11.52 ms and an average path computation latency of 15.71 ms for path (1). Figures 10(a) and 10(b) show the Wireshark captures of the PCReq and PCRep messages. Consistent with Fig. 5, it can be seen that the source and destination addresses as well as the objective function code are encapsulated in the PCReq message, and that the path computation results, including the ERO and the path attributes, are included in the PCRep message.
After the provisioning of path (1), we manually set all the wavelengths on link 3-4 to the occupied state, and we started a new flow from the Client to the Server. Due to the WCC, path (2) was selected by the PCE for transmitting this flow, as shown in Table 2. Similarly, wavelengths on link 3-6 were set to the occupied state, and then a new path (3) was calculated and provisioned for the next incoming flow. Note that, due to the OSNR constraints, an OF-REG was allocated to compensate physical impairments, as shown in Table 2.
We repeated the setup/release of paths (1), (2) and (3) over 100 times in order to measure the path provisioning latency. Table 2 shows the average control plane latency (ACPL) and the overall path provisioning latency (OPPL) for creating these paths. The ACPL is the average latency of the control plane messages (i.e., OpenFlow, PCEP, TED Req and TED Rep) needed to complete the procedure in Fig. 4. The OPPL comprises both the ACPL and the configuration latency of the data plane hardware. From Table 2, it can be seen that, using the proposed OpenFlow/PCE integrated control plane, end-to-end path provisioning can be completed within 400 ms in our tested scenario.
Finally, we compared the performance of the proposed solution with the previously reported alternative , which uses the PCE for routing based on static topology information and relies on the NOX for wavelength assignment. Flow requests were generated following a Poisson process, with holding times following a negative exponential distribution. The results are shown in Fig. 11. It can be seen that, as expected, the solution proposed in this paper significantly outperforms the previously reported one in terms of blocking probability. This is because, thanks to the introduction of the dynamic topology server, the PCE can compute a path taking the WCC into account using up-to-date network resource information.
We successfully demonstrated an OpenFlow/PCE integrated control plane for IP over translucent WSON with the assistance of a per-request-based dynamic topology server. Our work refines an OpenFlow-based control architecture that relies on software defined networking principles by adding two entities, a PCE and a topology server, which communicate using mature, open and standard interfaces. This architecture allows more flexibility in deployment models, integrating the advances made in the IETF PCE working group and leveraging a mature protocol.
The overall feasibility and efficiency of the proposed solution were verified on a real network testbed with both control and data planes. The experimental results indicate that better performance can be obtained if the topology server is integrated within the NOX. In this case, the path provisioning latencies are around 400 ms, including both the processing latency in the control plane and the configuration latency of the network elements in the data plane. Moreover, the experimental results show that the proposed solution can greatly reduce the path blocking probability compared with the previously reported alternative, thanks to the newly introduced topology server. Finally, we believe a more efficient encoding of the TED could be implemented to further reduce the overall latency. In this line, the architecture would benefit from standard information and data models of the controlled network, as well as a standard protocol for topology management and exchange.
This work was partly supported by the Ministry of Internal Affairs and Communications (MIC), Japan. A part of CTTC’s work was funded by the MINECO (Spanish Ministry of Economy and Competitiveness) through the project DORADO (TEC2009-07995), and by the European Community’s Seventh Framework Programme FP7 / 2007-2013 through the project OFELIA under grant agreement no 258365.
References and links
1. “The OpenFlow switch consortium,” http://www.openflow.org/.
2. “NOX: an OpenFlow controller,” http://noxrepo.org/.
3. S. Das, G. Parulkar, N. McKeown, P. Singh, D. Getachew, and L. Ong, “Packet and circuit network convergence with OpenFlow,” in Optical Fiber Communication Conference and Exposition and National Fiber Optic Engineers Conference (OFC/NFOEC 2010), Technical Digest (CD) (Optical Society of America, 2010), paper OTuG1.
4. V. Gudla, S. Das, A. Shastri, G. Parulkar, N. McKeown, L. Kazovsky, and S. Yamashita, “Experimental demonstration of OpenFlow control of packet and circuit switches,” in Optical Fiber Communication Conference and Exposition and National Fiber Optic Engineers Conference (OFC/NFOEC 2010), Technical Digest (CD) (Optical Society of America, 2010), paper OTuG2.
5. L. Liu, R. Casellas, T. Tsuritani, I. Morita, R. Martínez, and R. Muñoz, “Interworking between OpenFlow and PCE for dynamic wavelength path control in multi-domain WSON,” in Optical Fiber Communication Conference and Exposition and National Fiber Optic Engineers Conference (OFC/NFOEC 2012), Technical Digest (CD) (Optical Society of America, 2012), paper OM3G.2.
6. S. Das, Y. Yiakoumis, G. Parulkar, N. McKeown, P. Singh, D. Getachew, and P. D. Desai, “Application-aware aggregation and traffic engineering in a converged packet-circuit network,” in Optical Fiber Communication Conference and Exposition and National Fiber Optic Engineers Conference (OFC/NFOEC 2011), Technical Digest (CD) (Optical Society of America, 2011), paper NThD3.
7. S. Das, A. R. Sharafat, G. Parulkar, and N. McKeown, “MPLS with a simple OPEN control plane,” in Optical Fiber Communication Conference and Exposition and National Fiber Optic Engineers Conference (OFC/NFOEC 2011), Technical Digest (CD) (Optical Society of America, 2011), paper OWP2.
8. L. Liu, T. Tsuritani, I. Morita, H. Guo, and J. Wu, “OpenFlow-based wavelength path control in transparent optical networks: a proof-of-concept demonstration,” in 37th European Conference and Exhibition on Optical Communications (ECOC 2011), Technical Digest (CD) (Optical Society of America, 2011), paper Tu.5.K.2.
9. L. Liu, T. Tsuritani, I. Morita, H. Guo, and J. Wu, “Experimental validation and performance evaluation of OpenFlow-based wavelength path control in transparent optical networks,” Opt. Express 19(27), 26578–26593 (2011). [CrossRef] [PubMed]
10. L. Liu, D. Zhang, T. Tsuritani, R. Vilalta, R. Casellas, L. Hong, I. Morita, H. Guo, J. Wu, R. Martínez, and R. Muñoz, “First field trial of an OpenFlow-based unified control plane for multi-layer multi-granularity optical networks,” in Optical Fiber Communication Conference and Exposition and National Fiber Optic Engineers Conference (OFC/NFOEC 2012), Technical Digest (CD) (Optical Society of America, 2012), paper PDP5D.2.
11. S. Azodolmolky, R. Nejabati, E. Escalona, R. Jayakumar, N. Efstathiou, and D. Simeonidou, “Integrated OpenFlow-GMPLS control plane: an overlay model for software defined packet over optical networks,” in 37th European Conference and Exhibition on Optical Communications (ECOC 2011), Technical Digest (CD) (Optical Society of America, 2011), paper Tu.5.K.5.
12. M. Channegowda, P. Kostecki, N. Efstathiou, S. Azodolmolky, R. Nejabati, P. Kaczmarek, A. Autenrieth, J. Elbers, and D. Simeonidou, “Experimental evaluation of extended OpenFlow deployment for high-performance optical networks,” in 38th European Conference and Exhibition on Optical Communications (ECOC 2012), Technical Digest (CD) (Optical Society of America, 2012), paper Tu.1.D.2.
13. L. Liu, T. Tsuritani, R. Casellas, R. Martínez, R. Muñoz, and M. Tsurusawa, “Experimental demonstration and comparison of distributed and centralized multi-domain resilient translucent WSON,” in Proceedings of 36th European Conference and Exhibition on Optical Communication (ECOC 2010), paper We.7.D.3 (Institute of Electrical and Electronics Engineers, Torino, Italy, 2010), pp.1–3.
14. S. Han, K. Jang, K. Park, and S. Moon, “PacketShader: a GPU-accelerated software router,” in Proceedings of SIGCOMM 2010, (Association for Computing Machinery, Delhi, India, 2010), pp. 1–12.
15. A. Farrel, J.-P. Vasseur, and J. Ash, “A path computation element (PCE)-based architecture,” IETF RFC 4655 (2006), http://tools.ietf.org/html/rfc4655.
16. J. P. Vasseur and J. L. Le Roux, eds., “Path computation element (PCE) communication protocol (PCEP),” IETF RFC 5440 (2009), http://tools.ietf.org/html/rfc5440.
17. G. Shen and R. S. Tucker, “Translucent optical networks: the way forward,” IEEE Commun. Mag. 45(2), 48–54 (2007). [CrossRef]
18. S. Das, “Extensions to the OpenFlow protocol in support of circuit switching,” (2010). http://www.openflow.org/wk/images/8/81/OpenFlow_Circuit_Switch_Specification_v0.3.pdf.
19. J. L. Le Roux, J. P. Vasseur, and Y. Lee, eds., “Encoding of objective functions in the path computation element communication protocol,” IETF RFC 5541 (2009), http://tools.ietf.org/html/rfc5541.
20. R. Martínez, R. Casellas, R. Muñoz, and T. Tsuritani, “Experimental translucent-oriented routing for dynamic lightpath provisioning in GMPLS-enabled wavelength switched optical networks,” J. Lightwave Technol. 28(8), 1241–1255 (2010). [CrossRef]