Abstract

Bandwidth demands resulting from traffic generated by emerging applications, when aggregated at the edge of the network for transport over core networks, can only be met with high-capacity WDM circuit switched optical networks. Efficient end-to-end delivery of packet streams, in terms of network control, operation and management, is one of the key challenges for network operators. A unified control and management plane for both the packet and the optical circuit switched domains is therefore a good candidate to address this need. In this paper a novel software-defined packet over optical networks solution based on the integration of the OpenFlow and GMPLS control planes is experimentally demonstrated. The proposed architecture, the experimental setup, and the average flow setup time for different optical flows are reported. The performance of the extended OpenFlow controller is also presented and discussed.

© 2011 OSA

1. Introduction

The future Internet is characterized by global delivery of packet switched traffic driven by high-performance network-based applications such as ultra high definition (UHD) video on demand streaming and cloud computing. Bandwidth demands resulting from the traffic generated by these applications, when aggregated at the edge of the network for transport over core networks, can be met only with high-capacity WDM circuit switched optical networks. A key challenge for network operators supporting future Internet applications is the efficient end-to-end delivery of packet switched traffic, in terms of network operation, control and management, when it leaves a packet switched domain (e.g., campus and metro) and traverses a circuit switched domain (e.g., a WDM optical core network).

The Internet Engineering Task Force (IETF) has developed a set of protocols defined as the Generalized Multi Protocol Label Switching (GMPLS [1]) paradigm. GMPLS extends MPLS (Multi Protocol Label Switching) to encompass control and management of time-division (e.g., SONET/SDH, PDH, G.709), wavelength, and spatial switching (e.g., incoming port or fiber to outgoing port or fiber). It comprises a set of routing and signaling protocols that allow dynamic and on demand establishment of optical channels (also known as lightpaths), acting as the control plane of a dynamic optical network. Currently, the GMPLS suite for optical networks includes the OSPF-TE (Open Shortest Path First with Traffic Engineering) protocol for routing and the RSVP-TE (Resource reSerVation Protocol with Traffic Engineering) protocol for signaling. When a new lightpath is needed, GMPLS can calculate a suitable path, based on the status of the network resources and using a routing and wavelength assignment (RWA) algorithm, and reserve the associated network resources.

The GMPLS control plane, due to its support for various optical transport technologies as well as its capability for dynamic and on demand lightpath provisioning, is being widely considered by operators as the control plane of their next generation core optical networks.

The combination of MPLS in the packet switched domain and its successor GMPLS in the optical circuit switched domain provides an attractive control and management solution for network operators tackling the challenge of efficient end-to-end traffic delivery.

Over the past decade, several control and management models for interoperability of the packet switched and optical circuit switched domains have been proposed, mainly based on the GMPLS/MPLS model. Some of them have been, or are being, standardized and are widely deployed by operators. These solutions can be categorized into three models: the overlay, peer and augmented models. All three rest mainly on the assumption that the packet switched domain and the optical circuit switched domain are two independent (or loosely coupled) autonomous domains. Under the overlay model, the packet switched domain is more or less independent of the optical network and acts as a client to the optical domain; the optical network provides point-to-point connectivity for the packet switched domain. In the peer model, the packet switched domain acts as a peer to the optical domain, and both domains share a single control plane in which all routing and signaling information is shared across the two domains. The augmented model is an intermediate model, where the packet switched and circuit switched domains each have their own independent control plane but some routing and signaling information is shared between the two.

GMPLS/MPLS based control planes are a promising solution for dynamic and on demand end-to-end delivery of packet switched traffic over an optical circuit switched network, and they have been widely developed and deployed by vendors and operators. However, each vendor has its own closed implementation of the control plane, which strongly decouples control of the network from the higher layers (i.e., user applications) via a set of well defined interfaces (e.g., the user network interface). These interfaces give user applications very limited or almost no visibility of the network topology, status and condition. This is a major barrier to the development of new networking concepts such as software defined networking (SDN), where the traditional isolation between the application layer and the network control layer is removed and the two layers can exchange information to achieve efficient cross-layer service provisioning.

To mitigate this issue, the OpenFlow paradigm has recently been proposed as a control framework that supports programmability of network functions and protocols (SDN) by decoupling the data plane and the control plane, which are currently vertically integrated in most networking equipment (e.g., routers, switches, access points) [2]. OpenFlow allows user defined (software defined) applications to take control of the network, and therefore paves the way for collapsing the traditional network layering and bringing the application very close to the network control and management plane. OpenFlow adopts the concept of flow based switching and network traffic control for intelligent, user controlled and programmable network service provisioning, with the capability to execute any user defined routing, control and management application in software outside the data path, in the OpenFlow controller. OpenFlow is being adopted by most of the major packet switching vendors as the key technology enabler for the realization of software defined networking, and a number of products supporting the OpenFlow protocol are already available from different vendors, especially in the campus and metro class.

The OpenFlow control framework is a promising technology for integrating the control and management of the packet switched and optical circuit switched domains. It provides a framework for the development of innovative functionalities and protocols thanks to its support for software defined networking. There have been several attempts and proposals to control both circuit switched and packet switched networks using the OpenFlow protocol [3, 4]. However, we believe that, in order to assess the suitability of OpenFlow as a unified control plane solution for both packet switched and optical circuit switched domains, the natural first step, similar to the evolution of GMPLS from MPLS, is to deploy OpenFlow in the packet switched domain and to utilize a well established control plane such as GMPLS for the optical circuit switched domain. In this paper, a novel software-defined packet over optical networks solution is presented, enabled by the interworking of the OpenFlow and GMPLS control planes in an overlay model. In this model the packet switched domain is controlled by the OpenFlow protocol, while the optical circuit switched domain is controlled by the GMPLS control plane, and the two control planes communicate via a UNI interface.

This paper describes the proposed overlay control plane architecture, its workflow and a proof of concept experimental demonstration of the proposed solution. To the best of our knowledge, this is the first time that the interworking of the OpenFlow and GMPLS control planes for software-defined packet over optical networks has been proposed and experimentally demonstrated.

After this brief introduction, the integrated OpenFlow-GMPLS control plane is presented in section 2. The experimental setup and related workflow and results are discussed in section 3. Section 4 draws the conclusions of this work.

2. Integrated OpenFlow-GMPLS control plane

The underlying principle of OpenFlow is to treat traffic as flows and to move the control functionality out of the networking equipment to a centrally managed or distributed controller (i.e., the OpenFlow controller), retaining only the data plane on the equipment. In the OpenFlow control framework a network is managed by a network-wide operating system running on top of a controller that controls the data plane using the OpenFlow protocol. In practice, the OpenFlow controller is a server capable of hosting different network management and control applications to effectively manage the network in a centralized or distributed fashion. OpenFlow abstracts each physical switch as a flow table, and the controller decides how each flow is forwarded (reactively as new flows are detected, or proactively in advance). Each decision is then cached in the data plane’s flow table.

In an OpenFlow controlled packet switched network, a flow can be defined flexibly as a combination of any L2, L3 or L4 header fields of a packet, as shown in Fig. 1. Incoming packets are matched against the flow definitions (rules); if there is a match, a set of actions is performed and the statistics are updated. Packets that do not match any flow-table entry are (typically) encapsulated and sent to the controller. The controller decides how to process the packet and adds a new rule to the data plane flow table accordingly. Subsequent packets with the same flow identifier are processed according to this new rule, without contacting the controller. Therefore, while each packet is switched individually, the flow is the basic unit of manipulation within an OpenFlow enabled switch. The three fields that define the flow table (Fig. 1) are: rule, action and statistics. The rule identifies the entries from the packet header that define the flow; for instance, all packets addressed to a particular IP address on TCP port 80 (i.e., web traffic) define a flow (i.e., a rule in the flow table). The action establishes how a packet matching the rule should be treated; for example, the mentioned traffic can be forwarded to a particular port (or ports) of the switch. The statistics gather packet related counters, which are used by network applications to make dynamic and/or automatic decisions. One potential use of the statistics field is to set a lifetime for flow entries; for instance, the duration counters measure the amount of time a flow has been installed in the switch.

Fig. 1 Flow table structure within an OpenFlow switch.
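The rule/action/statistics structure described above can be sketched as follows. This is a minimal illustration of the matching logic, not the paper's implementation; the header fields shown (destination IP and TCP port) are just the two used in the web-traffic example, and all names are our own.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FlowEntry:
    """One row of the flow table: rule (match fields), action, statistics."""
    # Rule: any header field left as None acts as a wildcard.
    dst_ip: Optional[str] = None
    tcp_port: Optional[int] = None
    # Action: port(s) matched packets are forwarded to.
    out_ports: tuple = ()
    # Statistics: updated on every match.
    packet_count: int = 0

    def matches(self, pkt: dict) -> bool:
        return ((self.dst_ip is None or pkt.get("dst_ip") == self.dst_ip) and
                (self.tcp_port is None or pkt.get("tcp_port") == self.tcp_port))

def process(table: list, pkt: dict):
    """Return the forwarding ports, or None to signal a packet-in to the controller."""
    for entry in table:
        if entry.matches(pkt):
            entry.packet_count += 1   # update statistics
            return entry.out_ports    # perform the action
    return None                       # no match: packet goes to the controller

# Example rule: all web traffic (TCP port 80) to 10.0.0.5 is one flow, sent to port 3.
table = [FlowEntry(dst_ip="10.0.0.5", tcp_port=80, out_ports=(3,))]
```

A non-matching packet returns `None`, which in a real switch corresponds to encapsulating the packet and sending it to the controller.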

The existing OpenFlow specification covers packet switched networks. However, several experimental and demonstration cases extend the OpenFlow protocol to the circuit switched (i.e., optical core network) domain [3–6]. All of these works are based on extensions to the original OpenFlow specification that make it practically suitable for the circuit switched domain. An OpenFlow circuit switch or hybrid switch consists of a cross-connect table, which caches information about the existing circuit flows (i.e., the cross-connects made in the switch), and a secure channel to an external controller, which manages the switch over that channel using the OpenFlow protocol [7].

The optical circuit switched domain has specific features and characteristics that are not present in the packet switched domain. For instance, the analog nature of optical circuits and of data transmission in optical networks removes the notion of “packet in” messages from the OpenFlow protocol; the extended OpenFlow protocol is therefore used for provisioning lightpaths in the optical circuit switched domain. The modular structure of optical nodes introduces some challenging constraints for the definition of switch capabilities (e.g., colored/colorless, directional/directionless, and blocking/contentionless add/drop ports). The wavelength continuity constraint, physical layer impairments and optical power equalization are other characteristics of the optical circuit switched domain that must be properly considered in the operation of an extended OpenFlow controller. Many of the required functionalities are already implemented in the GMPLS protocols and can therefore be integrated with an extended OpenFlow controller.
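As an illustration of the wavelength continuity constraint mentioned above, the following sketch checks whether a candidate lightpath can be assigned a single wavelength end to end, using first-fit assignment (a common RWA heuristic, not necessarily the one used in this work; all names are illustrative).

```python
def check_wavelength_continuity(path_links, wavelength, free_wavelengths):
    """True if `wavelength` is unused on every link of the candidate lightpath.

    path_links:       list of link identifiers along the candidate path
    free_wavelengths: dict mapping link id -> set of currently unused wavelengths
    """
    return all(wavelength in free_wavelengths[link] for link in path_links)

def first_fit(path_links, wavelengths, free_wavelengths):
    """Assign the lowest-indexed wavelength that is continuous along the path."""
    for w in wavelengths:
        if check_wavelength_continuity(path_links, w, free_wavelengths):
            return w
    return None  # blocking: no continuous wavelength is available

# Example: wavelength 1 is free on A-B but busy on B-C, so 2 is chosen.
free = {"A-B": {1, 2}, "B-C": {2, 3}}
```

Without wavelength conversion in the nodes, a request is blocked whenever no single wavelength is free along the entire path, even if every link still has spare capacity on other wavelengths.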

Nevertheless, the integrated control of packet and circuit switched domains using a single OpenFlow controller has never been implemented to date. The architecture proposed in this paper benefits from OpenFlow technology, which can deal with traffic flows regardless of the underlying transport technology. It uses an extended OpenFlow controller to provide transparent connectivity between the packet and circuit switched domains. The architecture, shown in Fig. 2, integrates a GMPLS control plane, deployed in an overlay model, with an extended packet switch OpenFlow controller.

Fig. 2 Integrated OpenFlow-GMPLS unified control plane.

The GMPLS control plane follows the standard ASON model and includes the following building blocks: the network connection controller (NCC), responsible for handling and processing the connection requests; the signaling controller (SC), which implements the RSVP-TE protocol to handle the GMPLS signaling; the routing controller (RC), comprising the OSPF-TE protocol and a path computation algorithm for calculating the end-to-end path (RWA or lightpath routing); the link resource manager (LRM), responsible for monitoring and collecting information on the status of network elements; and the transport network resource controller (TNRC), which provides the interface between the controller and the network elements for their configuration and monitoring. In turn, the extended OpenFlow controller comprises: the flow processor (FP), responsible for processing new flows and creating flow rules and updates for the flow tables in the switches; the path computation element (PCE), which performs path computation for each flow within each packet switched domain; the discovery agent (DA), responsible for discovering the network topology and connectivity, including the end points in each packet switched domain; and the OpenFlow gateway (OFGW), which provides the interface between the OpenFlow controller and the GMPLS control plane through a user network interface (UNI) signaling interface. Finally, the OpenFlow protocol controller (OFPC) interfaces the OpenFlow controller with the OpenFlow-enabled switches for their flow table configuration and monitoring.
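The OFGW's role can be sketched as a thin translator between the two control planes. This is a hypothetical sketch: the message fields and the shape of the acknowledgment are our own illustration of the UNI exchange, not the paper's interface definition.

```python
class OpenFlowGateway:
    """Minimal sketch of the OFGW: forwards a connectivity need to GMPLS over
    UNI and parses the acknowledgment into port/wavelength information."""

    def __init__(self, uni_send):
        # uni_send: callable that delivers a request to the GMPLS control plane
        # and returns its acknowledgment (the transport itself is out of scope).
        self.uni_send = uni_send

    def request_lightpath(self, src_endpoint, dst_endpoint):
        ack = self.uni_send({"type": "connection_request",
                             "src": src_endpoint, "dst": dst_endpoint})
        # GMPLS acknowledges the established LSP with ingress and egress
        # port/wavelength pairs (hypothetical field names).
        return (ack["ingress_port"], ack["ingress_wavelength"],
                ack["egress_port"], ack["egress_wavelength"])
```

Injecting `uni_send` keeps the gateway independent of how the UNI messages are actually carried, which mirrors the OFGW's position between the FP and the NCC in Fig. 2.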

In the proposed architecture both packet switched domains and the GMPLS optical domain are controlled by the extended OpenFlow controller, which provides the functionality for requesting optical connectivity from the optical domain. The edge packet switches that interconnect the optical circuit switched domain with the packet switched domains are equipped with tuneable WDM interfaces and are connected to add/drop ports of the ingress/egress nodes of the optical domain, as shown in Fig. 2.

3. Experimental setup, demonstration scenario and results

The experimental setup is shown in Fig. 3. It comprises two OpenFlow enabled L2 packet switched domains and one optical fiber switched domain. One of the packet switched domains (the client side) is equipped with three NEC IP8800 OpenFlow enabled switches, while the other, the server domain, is equipped with one NEC IP8800 switch. The optical switching domain comprises four Calient DiamondWave optical switches. Each optical switch is controlled by a GMPLS controller, and the GMPLS control plane is distributed among the four controllers. The client and server packet switched domains are controlled by the proposed extended OpenFlow controller, which is connected to the GMPLS control plane through the UNI interface. For simplicity, the two NEC switches at the boundary of the optical domain are equipped with fixed wavelength transponders at different wavelengths, and each port of the optical switch carries only one wavelength.

Fig. 3 Experimental test-bed setup.

Once a client sends a new request for video streaming (connectivity to the video server) to one of the L2 switches in the client domain, the OpenFlow enabled switch (i.e., the NEC IP8800) cannot find a flow entry for the new request and forwards it to the extended OpenFlow controller (a “packet in” message in the OpenFlow protocol). The controller resolves the destination address (i.e., the server domain) and requests a new optical lightpath between the client and server domains from GMPLS via the UNI. The routing and wavelength assignment mechanisms, together with the GMPLS signaling protocols, establish a lightpath in the optical circuit switched domain. GMPLS returns an acknowledgment for the established LSP with the ingress port/wavelength and egress port/wavelength. The extended OpenFlow controller updates the flow tables of the switches in the client and server domains (i.e., the packet switched domains) with the appropriate port for the new connection. It also maintains the lightpath identification: the controller keeps an extended flow table entry, as depicted in Fig. 4, which also includes the LSP established in the circuit switched domain along with the channel allocated to it. Finally, the client receives an acknowledgment of the request and the video streaming connectivity is established. The interaction of the OpenFlow and GMPLS control planes extends the benefits of software defined networking to the circuit switched (optical) domain; for instance, the extended OpenFlow controller can route flows over different paths based on different levels of quality of service.
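The steps above can be condensed into a packet-in handler. The components (destination resolution, the UNI lightpath request, flow installation) are stubbed as injected callables; every name here is illustrative, not taken from the controller's actual code.

```python
def handle_packet_in(pkt, resolve_domain, request_lightpath, install_flow):
    """Sketch of the controller's reaction to a packet-in for an unknown flow:
    request a lightpath over UNI, then point both edge switches at it."""
    dst_domain = resolve_domain(pkt["dst"])   # e.g. resolves to the server domain
    lsp = request_lightpath(dst_domain)       # UNI request to the GMPLS control plane
    # Update the flow tables of the client-side and server-side edge switches
    # so the new flow is forwarded onto the established circuit.
    install_flow(switch="client_edge", match=pkt["dst"], out_port=lsp["ingress_port"])
    install_flow(switch="server_edge", match=pkt["dst"], out_port=lsp["egress_port"])
    return lsp["lsp_id"]                      # kept for lightpath bookkeeping
```

Returning the LSP identifier reflects the controller's bookkeeping of established lightpaths; the two `install_flow` calls correspond to the flow-table updates in the client and server domains.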

Fig. 4 Extended flow table entry.

The procedure to enable end-to-end connectivity in the described scenario is detailed in Fig. 5. Since the initial flow tables of the switches are empty, the first packet of the request is forwarded to the extended OpenFlow controller. Based on the destination address of the request, the extended OpenFlow controller identifies the end points of the required connection in the circuit switched domain and requests a lightpath from the GMPLS control plane using the OFGW, through the UNI interface. The discovery agent of the extended OpenFlow controller exchanges routing information with the GMPLS control plane, so a lightpath can be computed according to the network topology information. Once the lightpath is established, the extended controller updates the flow tables of the ingress and egress switches and records the forwarding port and the channel through the circuit switched domain (the extended flow table entry), finalizing the establishment of an end-to-end optical flow path. The extended OpenFlow controller also maintains a list of established lightpaths in terms of their lightpath identifiers.
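The extended flow table entry of Fig. 4 can be sketched as the controller-side record that joins the packet-domain rule with the circuit-domain state. The field names are our own; the point is only which pieces of information travel together.

```python
from dataclasses import dataclass

@dataclass
class ExtendedFlowEntry:
    """Controller-side record pairing a packet flow with its optical circuit
    (a sketch of the Fig. 4 entry; field names are illustrative)."""
    match_dst: str        # packet-domain rule: destination of the flow
    out_port: int         # action at the edge switch (port toward the circuit)
    lsp_id: str           # identifier of the LSP established in the circuit domain
    wavelength_nm: float  # channel allocated to the lightpath

# Example entry after a lightpath for the video flow has been set up.
entry = ExtendedFlowEntry(match_dst="10.0.0.5", out_port=3,
                          lsp_id="lsp-1", wavelength_nm=1552.52)
```

Keeping the `lsp_id` in the same record is what lets the controller tear down or reroute the optical circuit when the packet flow expires.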

Fig. 5 Timing sequence diagram for the experimental demonstration scenario.

The average end-to-end flow setup times for different optical flow paths are depicted in Fig. 6. As the number of hops per optical flow increases, more time is required to establish an end-to-end optical flow path. The end-to-end path starts from the packet switched domain at the client side and ends at the other packet switched domain (the server side), while traversing the optical circuit switched domain. The setup times include both signaling and device configuration delays, the latter of which is not fully optimized in our GMPLS implementation.

Fig. 6 Average end-to-end flow setup time vs. number of hops per optical flow.

The performance of the extended OpenFlow controller in terms of ‘flow_mod’ operations per second is evaluated and depicted in Fig. 7. We used the ‘Cbench’ tool to measure the throughput of the extended OpenFlow controller. Fifteen independent experiments were performed for each scenario, and the duration of each experiment was set to 1000 ms. These performances were measured with the controller running in a Linux virtual machine on a Dell 1950 workstation with a 2.66 GHz processor and 8 GB of memory. The plot shows the baseline performance of the extended OpenFlow controller, in which all switches contact the controller in response to a new ‘packet_in’ (new packet arrival) event. The average number of ‘flow_mod’ operations per second per switch is plotted in this figure. For a single OpenFlow switch, the controller was able to handle on average 3663 ‘flow_mod’ operations per second. This falls to 293 operations per second per switch when the number of switches increases to 13. In effect, the performance of the controller is uniformly divided among the switches.
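The uniform-division claim can be checked quickly against the two reported figures: if the controller's aggregate 'flow_mod' capacity is roughly constant, the per-switch rate should fall as 1/n.

```python
# Reported measurements from the Cbench experiments (flow_mod operations/s).
single_switch_rate = 3663    # 1 switch: whole capacity goes to one switch
thirteen_switch_rate = 293   # 13 switches: per-switch rate

# Under uniform division, the per-switch rate predicted for 13 switches:
predicted_per_switch = single_switch_rate / 13   # ~281.8 ops/s

# And the aggregate capacity implied by the 13-switch measurement:
implied_aggregate = thirteen_switch_rate * 13    # 3809 ops/s
```

The implied aggregate (3809 ops/s) is within about 4% of the single-switch capacity (3663 ops/s), consistent with the controller's throughput being evenly split among the switches rather than degrading per added switch.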

Fig. 7 Average throughput of the extended OpenFlow controller.

4. Conclusions

The software defined networking paradigm, which decouples the control framework from the data plane functionalities, allows network designers and operators to simplify network operations by exploiting fundamental abstractions, and promotes innovation. The OpenFlow protocol, which originated in the packet switched domain and campus networks, has the potential to be extended towards core optical networks, paving the way for the materialization of a unified control and management framework. Several experimental and demonstration cases have already extended the OpenFlow protocol to the circuit switched (i.e., optical core network) domain. In this work, we experimentally demonstrated the integration of the OpenFlow and GMPLS control planes. The proposed overlay model extends the functionality of a typical OpenFlow controller so as to properly interface with the GMPLS control plane. We also reported the end-to-end flow setup time as a function of the number of hops per optical flow path.

Acknowledgments

This work is partially supported by EU FP7 funded project OFELIA (grant agreement number: 258365, www.fp7-ofelia.eu).

References and links

1. A. Farrel and I. Bryskin, GMPLS Architecture and Applications (Morgan Kaufmann, 2006).

2. N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, and J. Turner, “OpenFlow: enabling innovation in campus networks,” ACM SIGCOMM Comput. Commun. Rev. 38(2), 69–74 (2008). [CrossRef]  

3. S. Das, G. Parulkar, P. Singh, D. Getachew, L. Ong, and N. McKeown, “Packet and circuit network convergence with OpenFlow,” in Optical Fiber Communication Conference, OSA Technical Digest (CD) (Optical Society of America, 2010), paper OTuG1.

4. V. Gudla, S. Das, A. Shastri, G. Parulkar, N. McKeown, L. Kazovsky, and S. Yamashita, "Experimental demonstration of OpenFlow control of packet and circuit switches," in Optical Fiber Communication Conference, OSA Technical Digest (CD) (Optical Society of America, 2010), paper OTuG2.

5. S. Das, G. Parulkar, and N. McKeown, “Unifying packet and circuit switched networks,” in 2009 IEEE GLOBECOM Workshops (IEEE, 2009), pp. 1–6.

6. S. Das, Y. Yiakoumis, G. Parulkar, and N. McKeown, P. Singh, D. Getachew, and P. D. Desai, “Application-aware aggregation and traffic engineering in a converged packet-circuit network,” in National Fiber Optic Engineers Conference, OSA Technical Digest (CD) (Optical Society of America, 2011), paper NThD3.

7. S. Das, “Extensions to the OpenFlow protocol in support of circuit switching,” Addendum to OpenFlow protocol specification (v1.0)—Circuit Switch Addendum v0.3, June 2010, http://www.openflow.org/wk/images/8/81/OpenFlow_Circuit_Switch_Specification_v0.3.pdf.

