On-demand virtual optical network access using 100 Gb/s Ethernet

Abstract

Our Terabit LAN initiatives attempt to enhance the scalability and utilization of lambda resources. This paper describes bandwidth-on-demand virtualized 100GE access to WDM networks on a field fiber test-bed using multi-domain optical-path provisioning.

©2011 Optical Society of America

1. Introduction

At 100 Gb/s, the paradigm in optical communications is shifting from serial to multi-lane interfaces, and the push now comes from LANs rather than WANs. Historically, attempts at higher speeds were led by wide-area serial transport with SDH and OTN; at 100 Gb/s and beyond, however, they are led by local-area server networking with Ethernet, as shown in Fig. 1(a).

Fig. 1 Paradigm shifts in optical interfaces triggered the Terabit-LAN initiatives over core transport networks. (a) Trend in optical system capacity per fiber. (b) Concept of Terabit-LAN over WDM networks.

The shift in paradigm is inevitable as the gap in speeds widens. The capacity of Ethernet packet switching, which started at 100 Mb/s in 1990, has reached almost 1 Tb/s per chip thanks to advances in CMOS technology. In contrast, the 10-Gb/s-per-lane optical interface introduced in the early 1990s is still dominant in many WANs. Wavelength-division multiplexing (WDM) and Erbium-doped fiber amplifiers (EDFAs) have raised the transport capacity per fiber to the current 1 Tb/s and will support another one to two orders of magnitude of growth, but we will eventually hit a capacity ceiling imposed by the physical limitations of installed fibers and EDFAs [1].

The Terabit LAN initiatives shown in Fig. 1(b), which began in 2005 [2], prepare for this new paradigm and focus mainly on enhancing the scalability and utilization of optical resources in WANs [3]. Terabit switching networks in a local area can be built easily given rich fiber installation, whereas virtualizing them over wide-area optical transport networks requires resolving many issues in access, control, and link technologies to get the most from premier lambda resources.

This paper reviews the challenges in two research projects, “Lambda Access” and “Lambda Utilities,” for network access and core transport, respectively, and reports on recent experiments that successfully show bandwidth-on-demand virtualized 100 Gb/s Ethernet (100GE) access to a WDM network on a field fiber test-bed [4]. In addition to the field experiments (Section 5), this paper describes experimental evaluations of each technology element investigated in the two projects (Sections 3 and 4).

2. Overview of Research Projects

“Lambda Access” and “Lambda Utilities” are five-year projects, completed in March 2011, that were funded by the National Institute of Information and Communications Technology (NICT), Japan. Their objective was to enable on-demand data transfer at 100 Gb/s or more via a WDM backbone.

“Lambda Access” focuses on multi-lane network access, whereas “Lambda Utilities” focuses on dynamic control and establishing efficient links in the core network. Collaboration between these two projects was expected to give rise to the future Terabit-LAN, which would be virtually configured over a multi-lambda network.

2.1 “Lambda Access” Project

This project targeted dynamic network access at 100 Gb/s or beyond with a multi-lane interface, referred to as WDM seamless access, as shown in Fig. 2(a). It assumes single-user network access via multiple lambdas through inverse multiplexing of mega-byte frame streams, and an access protocol that is interoperable with the “Lambda Utilities” project.

Fig. 2 Overview of the two five-year research projects to prepare on-demand data transfer at 100G. (a) Project “Lambda Access.” (b) Project “Lambda Utilities.”

The project also addressed the challenge of increasing the frame rate and per-lane bit rate, referred to as frame-multiplexed access. Frame-multiplexed access assumes multi-user network access via a single lambda through statistical multiplexing of user frames, together with frame-based operation, administration, and maintenance (OAM) protocols, which can enhance connectivity to WANs.

2.2 “Lambda Utilities” Project

This project focused on optical core networks to enhance the scalability and utilization with three objectives as shown in Fig. 2(b): achieve borderless lambda path control with enhanced scalability in the network control plane, improve the spectrum utilization efficiency in 100-G transport links, and extend reach using all-optical processing at 100 G and beyond.

3. Lambda Access Technologies

The focus of the “Lambda Access” project (2006–2010) was to enhance the scalability and utilization of the packet interface to access lambda-rich transport networks. The Terabit LAN virtually configured over a transport network will require the access interface to scale in bandwidth and be shared in multi-point communications.

Here we started with a simple architecture: the access interface consists of multiple colored lanes, i.e., lanes mapped onto different lambda paths, and any number of lanes to the same destination can be virtually bundled on demand [5].

3.1 Virtualized 100GE Optical Access

We achieved bandwidth-on-demand of up to 100 Gb/s with a granularity of 10 Gb/s, as shown in Fig. 3(a). The bundling mechanism was implemented on FPGAs using a concept similar to aggregation at the physical layer [6]. A packet from a 100GE client is divided into standard-size (< 1.5 kB) frames if needed, distributed frame-by-frame over the bundled lanes, and stamped with a precise transmit time. This time stamp allows us to provide jitter-free packet transport and to bundle lanes even across multiple routes and/or heterogeneous transport networks.
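
To make the mechanism concrete, the following is a minimal software sketch of the frame-by-frame distribution and time-stamped reassembly described above. All names and data structures are illustrative; the actual system is implemented on FPGAs, and the lane protocol of [5] is not reproduced here.

```python
import itertools
import time
from dataclasses import dataclass

MAX_FRAME = 1500  # divide into standard-size (< 1.5 kB) frames, per the text


@dataclass
class LaneFrame:
    seq: int          # frame order within the client packet
    tx_time_ns: int   # precise transmit time stamp
    payload: bytes


def inverse_multiplex(packet: bytes, lanes: list[list[LaneFrame]]) -> None:
    """Distribute one 100GE client packet frame-by-frame over the bundled
    10 Gb/s lanes (each lane mapped onto its own lambda path)."""
    chunks = [packet[i:i + MAX_FRAME] for i in range(0, len(packet), MAX_FRAME)]
    lane_cycle = itertools.cycle(lanes)
    for seq, chunk in enumerate(chunks):
        next(lane_cycle).append(LaneFrame(seq, time.monotonic_ns(), chunk))


def reassemble(lanes: list[list[LaneFrame]]) -> bytes:
    """Collect frames from all lanes and restore packet order. In the real
    FPGA design the time stamp also schedules frame release, so lanes routed
    over different paths (with different delays) still play out jitter-free."""
    frames = sorted((f for lane in lanes for f in lane), key=lambda f: f.seq)
    return b"".join(f.payload for f in frames)
```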

Fig. 3 Technologies enabling virtualized 100GE network access to WDM network. (a) Adaptive inverse multiplexing for 100GE. (b) Mega-byte super jumbo frame processing.

We also developed super-jumbo packet processing of up to 1 MB, which allows an application to stream at beyond 10 Gb/s even with commercial hardware. The number of applications that require more than 10 Gb/s between end systems is expected to grow over the next few years; for these applications to use the bandwidth efficiently, however, new packet processing technology is necessary. The frame length of the conventional media access control (MAC) scheme is too short compared to the packet processing latency inside the end system, and when TCP/IP is adopted as the upper-layer protocol over conventional MAC, the degradation in TCP throughput due to congestion control is also a serious problem.

To address these problems, we proposed extending the MAC frame length and retransmitting MAC frames, as shown in Fig. 3(b). We set the maximum frame length to 1 MB, a value chosen so that the frame time is long compared to both the 10GE frame time and the packet processing latency inside the end system. For the hardware implementation, we propose a buffer-less parallel cyclic redundancy check circuit. For the MAC frame retransmission scheme, aiming to reduce the complexity of retransmission control while maintaining throughput in a 100G-based WAN, we compared the Selective Repeat Automatic Repeat reQuest (SR ARQ) scheme with Go-Back-N (GBN) ARQ using fast retransmission triggered by negative acknowledgments (NAKs). Simulations confirmed that both schemes can improve the upper-layer throughput to 98 Gb/s at a BER of < 10⁻¹². Since GBN ARQ also reduces circuit complexity, it is more appropriate than SR ARQ as the MAC retransmission scheme.
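
As an illustration of why GBN suits a hardware MAC, here is a toy behavioral model of NAK-triggered Go-Back-N retransmission. It is a software sketch with hypothetical names, not the developed circuit; the real design operates on 1-MB super-jumbo frames at line rate.

```python
class GoBackNSender:
    """Toy model of the Go-Back-N MAC retransmission described above: the
    receiver NAKs the first missing frame, and the sender rewinds to that
    frame and resends it plus everything after it. Unlike Selective Repeat,
    no per-frame receive buffering or resequencing logic is needed, which
    is what keeps the circuit simple."""

    def __init__(self, frames: list[bytes], window: int = 8):
        self.frames = frames
        self.window = window
        self.base = 0      # oldest unacknowledged frame
        self.next_seq = 0  # next frame to transmit

    def sendable(self) -> list[tuple[int, bytes]]:
        """Frames allowed out now: everything inside the sliding window."""
        out = []
        while self.next_seq < len(self.frames) and self.next_seq < self.base + self.window:
            out.append((self.next_seq, self.frames[self.next_seq]))
            self.next_seq += 1
        return out

    def on_ack(self, seq: int) -> None:
        """Cumulative ACK: everything up to and including seq arrived."""
        self.base = max(self.base, seq + 1)

    def on_nak(self, seq: int) -> None:
        """Fast retransmit: go back and resend from the NAKed frame."""
        self.base = seq
        self.next_seq = seq
```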

3.2 Ethernet aggregation and single-lambda 100GE

Another scalability problem addressed in this project is increasing the bit rate per lane. Ethernet has stepped into a new paradigm as a multi-lane interface at 100 Gb/s with its unique 66-bit block-by-block distribution mechanism [7]. While the standard 4 × 25 Gb/s (100GBASE-LR4) configuration will best meet the CMOS roadmap for this decade, another mechanism will be needed to scale Ethernet beyond 100 Gb/s. We investigated both multi-level and multi-carrier modulation schemes.
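
The block distribution can be pictured as a simple round-robin deal of 66-bit encoded blocks across PCS lanes. The sketch below is schematic only: blocks are modeled as byte strings, and the alignment markers and the bit-multiplexing of the 20 PCS lanes onto 4 physical lanes in 100GBASE-LR4 are omitted [7].

```python
def distribute_blocks(blocks: list[bytes], n_lanes: int = 20) -> list[list[bytes]]:
    """Deal 66-bit encoded blocks round-robin across the PCS lanes (20 for
    100GE; they are then bit-multiplexed 5:1 onto 4 x 25 Gb/s physical
    lanes in 100GBASE-LR4, a step omitted here)."""
    lanes: list[list[bytes]] = [[] for _ in range(n_lanes)]
    for i, block in enumerate(blocks):
        lanes[i % n_lanes].append(block)
    return lanes


def merge_blocks(lanes: list[list[bytes]]) -> list[bytes]:
    """Receiver side: interleave the lanes back into transmit order (real
    hardware first deskews the lanes using periodic alignment markers)."""
    longest = max((len(lane) for lane in lanes), default=0)
    return [lane[i] for i in range(longest) for lane in lanes if i < len(lane)]
```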

Figure 4(a) shows a prototype of the single-lambda 100 Gb/s DQPSK transponder implemented with the 100GE physical coding sublayer (PCS) and physical medium attachment (PMA) sublayer, which recently achieved a record low-power (< 2 W) implementation in a 65-nm CMOS gearbox LSI [8,9].

Fig. 4 Technologies enabling Ethernet fair aggregation and single-lambda 100GE. (a) Single-lambda 100GE by 50 Gbaud DQPSK. (b) Fair-queuing Ethernet frame aggregation.

This project also developed a packet aggregator with a novel fair-queuing algorithm. The upper part of Fig. 4(b) shows high-speed frame aggregation for 100GE; the processing time is less than 6.7 ns (the 64-byte packet time in 100GE). The lower part of Fig. 4(b) shows the Delay-Sensitive Simplified Weighted Fair Queuing enhancement (DS-SWFQe), which guarantees user traffic a minimum bandwidth and a short forwarding delay. The relative error of the DS-SWFQe bandwidth allocation compared to conventional WFQ is less than 0.2%.
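
For reference, the conventional WFQ baseline that the 0.2% figure is measured against can be sketched as a classic virtual-finish-time scheduler. DS-SWFQe itself adds delay-sensitive handling and hardware simplifications that are not published here, so the code below is only the yardstick, with illustrative names.

```python
import heapq


class WeightedFairQueue:
    """Minimal virtual-finish-time WFQ. Each flow's share of the 100GE link
    is proportional to its weight, which is how a per-user minimum bandwidth
    can be guaranteed."""

    def __init__(self):
        self.vtime = 0.0   # scheduler virtual time
        self.finish = {}   # last virtual finish time per flow
        self.heap = []     # (virtual finish time, tie-break seq, flow, size)
        self.seq = 0

    def enqueue(self, flow: str, weight: float, frame_bytes: int) -> None:
        start = max(self.vtime, self.finish.get(flow, 0.0))
        fin = start + frame_bytes / weight  # heavier weight -> earlier finish
        self.finish[flow] = fin
        heapq.heappush(self.heap, (fin, self.seq, flow, frame_bytes))
        self.seq += 1

    def dequeue(self) -> tuple[str, int]:
        fin, _, flow, frame_bytes = heapq.heappop(self.heap)
        self.vtime = fin
        return flow, frame_bytes
```

Dequeuing always serves the smallest virtual finish time, so over any busy interval a flow with twice the weight drains roughly twice the bytes.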

4. Lambda Utilities Technologies

The “Lambda Utilities” project (2006–2010), on the other hand, focused on enhancing the scalability and utilization of optical core networks with the following three objectives: enhance the scalability in the network control plane, improve the spectrum utilization efficiency in a 100-Gb/s link, and extend the reach using all-optical processing [3].

4.1 Enhanced scalability in the network control plane

Enhanced scalability in the network control plane was investigated using a multi-domain routing model based on the Path Computation Element (PCE). Figure 5(a) shows the proposed system architecture with PCEs in a multi-domain network [10,11]. A PCE is located in each domain and computes paths within that domain. On-demand path configuration within a few seconds was achieved in a large-scale (> 1,000-node) emulation. Another advantage is the capability to compute diverse end-to-end work and backup paths by enhancing the Backward Recursive PCE-based Computation (BRPC) procedure [12].
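
A minimal sketch of this division of labor, assuming each PCE sees only its own domain's graph, is per-domain shortest-path computation plus naive stitching at border nodes. The real BRPC procedure [12] instead recurses backward from the destination domain carrying trees of candidate paths, which this simplification omits.

```python
import heapq


def pce_shortest_path(graph: dict, src: str, dst: str) -> list[str]:
    """One domain's PCE: plain Dijkstra over that domain's topology only
    (assumed connected). Keeping topology local to each PCE is what lets
    the scheme scale to > 1,000 nodes."""
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]


def stitch_domains(segments: list[tuple[dict, str, str]]) -> list[str]:
    """Chain per-domain (graph, entry, exit) segments at their border nodes
    into an end-to-end route; a much-simplified stand-in for BRPC [12]."""
    route: list[str] = []
    for graph, entry, exit_node in segments:
        seg = pce_shortest_path(graph, entry, exit_node)
        route += seg if not route else seg[1:]  # drop duplicated border node
    return route
```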

Fig. 5 Scalable multi-domain routing using PCEs. (a) Concept of PCE-based multi-domain path control system. (b) Performance evaluation of signaling time.

The performance of the multi-domain path control system was evaluated in an emulated GMPLS-controlled multi-domain network of 1,000 nodes, in which each domain consisted of 40–50 nodes. Route calculation times increased in proportion to the number of domains traversed and were 90 ms, 130 ms, and 170 ms for 2, 3, and 4 domains, respectively. The signaling time increased in proportion to the hop count. To reduce the signaling time and improve path setup performance, a multi-domain parallel signaling method was proposed and successfully developed [11]. The method divides an end-to-end route into domain-by-domain partial routes and then executes signaling in parallel over those partial routes. Figure 5(b) shows the improvement in signaling time with the proposed parallel method compared to the conventional sequential method: signaling performance improved by up to 46% in the 1,000-node network experiments, and by up to 62% for the combined setup of work and backup paths.
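
The intuition behind the gain fits in a few lines: sequential setup pays roughly the sum of the per-domain signaling times, while the parallel method pays roughly the maximum. The per-domain costs below are illustrative placeholders, not measured values from the experiments.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-domain signaling costs in ms (placeholders only).
PARTIAL_ROUTES = {"domain-A": 90, "domain-B": 70, "domain-C": 80}


def signal_domain(domain: str, cost_ms: float) -> float:
    """Stand-in for intra-domain GMPLS signaling over one partial route;
    a real implementation would drive RSVP-TE here."""
    return cost_ms


def sequential_setup() -> float:
    # Conventional method: domains signal one after another along the path.
    return sum(signal_domain(d, c) for d, c in PARTIAL_ROUTES.items())


def parallel_setup() -> float:
    # Proposed method: the end-to-end route is split into domain-by-domain
    # partial routes signaled concurrently, so completion time is bounded
    # by the slowest domain rather than by the sum.
    with ThreadPoolExecutor() as pool:
        return max(pool.map(lambda dc: signal_domain(*dc), PARTIAL_ROUTES.items()))
```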

4.2 Improved spectrum utilization efficiency in a 100 Gb/s link

To improve the spectrum utilization efficiency of a 100 Gb/s link, a 125 Gb/s polarization-multiplexed (Polmux) RZ-DQPSK format was implemented in combination with soft-decision forward error correction (FEC) [13].

As shown in Figs. 6(a) and 6(b), we demonstrated, for the first time, 2-bit soft-decision low-density parity-check (LDPC) and Reed-Solomon (RS) FEC with 4 iterations for 31.3-Gbaud DQPSK optical transmission. The gross coding gain at a post-FEC BER of 10⁻¹³ is 9.8 dB; taking the increased bit rate into account, the net coding gain (NCG) is 9.0 dB. The measured coding gain for hard decision was approximately 2.4 dB less than that obtained with 2-bit soft decision.
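
The 0.8 dB gap between the gross and net coding gains is simply the rate penalty of the FEC overhead, 10·log10(line rate / information rate). In the check below, the payload rate is an assumed value (roughly 20% overhead) back-calculated to match the quoted 9.8 dB and 9.0 dB; it is not a figure taken from the measurements.

```python
import math

gross_gain_db = 9.8        # gross coding gain at post-FEC BER of 1E-13
line_rate_gbps = 125.0     # Polmux RZ-DQPSK line rate
payload_rate_gbps = 104.2  # assumed information rate (~20% FEC overhead)

# NCG discounts the OSNR cost of running the line at the higher FEC-coded rate.
rate_penalty_db = 10 * math.log10(line_rate_gbps / payload_rate_gbps)
ncg_db = gross_gain_db - rate_penalty_db
print(f"rate penalty {rate_penalty_db:.1f} dB -> NCG {ncg_db:.1f} dB")  # ~0.8 -> 9.0
```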

Fig. 6 Spectral-efficient Polmux RZ-DQPSK with LDPC and RS FEC. (a) Field trial configuration. (b) Performance of LDPC and RS FEC. (c) Stable operation at 125 G.

The Polmux RZ-DQPSK transceiver incorporated a compact integrated lithium niobate (LN) modulator together with accurate optical phase control circuits for modulation and demodulation, both developed during the project; these contributed to the long-term stability shown in Fig. 6(c). The figure shows that the phase and polarization control circuits successfully tracked the environmental fluctuations of the field fiber plant and maintained a stable pre-FEC BER.

As a result, error-free operation over 560 km of dispersion-shifted fiber (DSF) was confirmed [14].

4.3 Reach extension using all-optical processing

One major challenge in reach extension using all-optical processing lies in its transparency [15,16]. We investigated how to support PSK in addition to conventional OOK, and developed a dual-mode all-optical 3R regenerator by employing a PSK-to-OOK format converter and a modulation-format-switchable optical-fiber-based logic gate, as shown in Fig. 7(a). The optical logic gate consists of a dual-stage wavelength converter based on self-phase modulation and a nonlinear optical loop mirror (NOLM), which enables all-optical PSK encoding in addition to conventional OOK encoding [17]. For robust 3R operation, an adaptive PMD-compensating function and an automatic control function were implemented. Figure 7(b) shows the performance of dual-mode all-optical 3R processing at 160 Gb/s. In PSK-mode operation, the DPSK signal was regenerated after transmission over a 250-km (80 km × 3 spans) standard single-mode fiber (SSMF) line with nearly perfect dispersion compensation, and the regeneration was confirmed to improve the Q-factor by 2 dB. Here, we temporarily employed a 1-bit-delay interferometer as the PSK-to-OOK format converter. For authentic PSK-mode operation, we are developing a format converter based on homodyne demodulation with an optical phase-locked loop (OPLL), although stable OPLL operation beyond 100 Gb/s remains a significant challenge. The 3R performance of the OOK mode at 160 Gb/s was evaluated in field transmission experiments on the Japan Gigabit Network 2 (JGN2), and excellent regenerative operation, with a Q-factor improvement of greater than 7 dB for a 3R interval of 380 km (64 km × 6 spans), was verified [18].

Fig. 7 (a) Schematic image of the dual-mode optical 3R regenerator. (b) Demonstration of 160-Gb/s OOK/DPSK dual-mode 3R operation. (c) Results of re-circulating loop transmission over 5000 km. Insets are eye diagrams of 160-Gb/s signals and optically demultiplexed 40-Gb/s signals, measured with an electrical bandwidth of 40 GHz.

To clarify the overall performance of the optical 3R link, the cascadability of the regenerators (in OOK-mode operation) was evaluated in re-circulating loop transmission experiments. The loop transmission line was configured with three spans of 80-km SSMF with nearly perfect dispersion compensation, and the optical 3R regenerator was placed at the end of the loop. Figure 7(c) shows the BER performance measured at transmission distances of 0 km (back-to-back), 500 km, and 5040 km. We verified that the optical 3R link enabled error-free transmission over 5000 km even at 160 Gb/s, whereas the transmission distance of the conventional link without the 3R regenerator was limited to less than 500 km.
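
Q-factor improvements such as the 2 dB and > 7 dB quoted above translate directly into BER margin through the standard Gaussian-noise relation BER = ½·erfc(Q/√2). The sketch below applies it to illustrative Q values; these are not readings from Fig. 7(c).

```python
import math


def q_to_ber(q_linear: float) -> float:
    """Standard Gaussian-noise relation BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q_linear / math.sqrt(2))


def db_to_linear(q_db: float) -> float:
    return 10 ** (q_db / 20)  # Q is an amplitude ratio: Q[dB] = 20*log10(Q)


# Example: a 2-dB Q improvement around the Q = 6 (BER ~ 1e-9) operating point.
for q_db in (15.6, 17.6):
    print(f"Q = {q_db:.1f} dB -> BER ~ {q_to_ber(db_to_linear(q_db)):.1e}")
```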

5. Field experiments

We conducted field experiments as a feasibility study of the developed technologies and as a basis for envisioning Terabit-LAN services virtualized over optical transport networks. Figure 8 shows the configuration. A 10 × 10 Gb/s ROADM ring was incorporated into a field fiber test-bed (a 100-km loop) of the extended Japan Gigabit Network 2 (JGN2plus). An end user, emulated by cluster PCs, accessed the WDM ring on demand using the 100GE lane-bundling transponder and the multi-domain PCEs developed in the “Lambda Access” and “Lambda Utilities” projects, respectively. Additional optical switches beside each ROADM allowed us to access other transport links developed by the “Lambda Utilities” project.

Fig. 8 Field demonstration configuration for on-demand 100GE virtual connections over core transport network.

In the experiments, we confirmed 100GE adaptive network access and its interworking with user-driven, on-demand N × 10 Gb/s multiple-path provisioning. We demonstrated the world’s first streaming of on-demand super-high-definition, full-progressive (4K60P) uncompressed video at 12 Gb/s over field-deployed fiber. We also showed instant Blu-ray-disc-size (25 GB) data transfer in just 2 seconds at 100 Gb/s over route-diversified transport links, as shown in Fig. 9.
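
The 2-second figure is straightforward arithmetic on the bundled bandwidth, as the check below shows; the file size and line rate are taken from the text.

```python
file_bytes = 25e9      # Blu-ray-disc-size image file: 25 GB
line_rate_bps = 100e9  # ten bundled 10 Gb/s lambdas

transfer_s = file_bytes * 8 / line_rate_bps  # bytes -> bits, then divide by rate
print(f"{transfer_s:.0f} s")  # -> 2 s, matching the demonstration
```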

Fig. 9 World-first on-demand 100-Gb/s download of Blu-ray-disc-size image files. (a) 100G bandwidth-on-demand configuration via a diverse route. (b) 100-Gb/s download demonstration (Media 1).

6. Summary

Our Terabit LAN initiatives, carried out through the two research projects “Lambda Access” and “Lambda Utilities” for network access and core transport, respectively, attempted to enhance the scalability and utilization of premier lambda resources in WANs. The projects conducted feasibility studies of a series of essential technologies, including adaptive inverse multiplexing, multi-domain routing, spectrally efficient 100G transport, and dual-mode all-optical 3R regeneration. By combining these technologies, we successfully demonstrated bandwidth-on-demand virtualized 100GE access to WDM networks on a field fiber test-bed using multi-domain optical-path provisioning.

Acknowledgments

The authors thank K. Hisadome of NTT, I. Nishioka of NEC, M. Komatsu of NTT Communications, T. Inoue of Mitsubishi Electric, T. Kono of Hitachi, H. Tanaka of KDDI R&D Labs, Y. Akiyama of Fujitsu, M. Kagawa of OKI, and the many other contributors to the projects and field experiments. This work was supported by the National Institute of Information and Communications Technology (NICT), Japan.

References and links

1. P. Winzer, “Beyond 100G Ethernet,” IEEE Commun. Mag. 48(7), 26–30 (2010). [CrossRef]  

2. M. Tomizawa, J. Yamawaku, Y. Takigawa, M. Koga, Y. Miyamoto, T. Morioka, and K. Hagimoto, “Terabit LAN with optical virtual concatenation for grid applications with super-computers,” in Optical Fiber Communication Conference, Technical Digest, paper OThG6 (2005).

3. O. Ishida and S. Araki, “Challenging Terabit-class LAN over wide area networks,” J. Lightwave Technol. 27(12), 1947–1956 (2009). [CrossRef]  

4. O. Ishida, S. Araki, S. Arai, T. Ichikawa, H. Toyoda, I. Morita, T. Hoshida, and H. Murai, “On-demand virtual optical network access using 100 Gb/s Ethernet,” in European Conference on Optical Communication (ECOC), Technical Digest, paper Tu.5.C.3 (2011).

5. K. Hisadome, M. Teshima, Y. Yamada, and O. Ishida, “100 Gb/s Ethernet inverse multiplexing based on aggregation at the physical layer,” IEICE Trans. Commun. E 94B(4), 904–909 (2011).

6. H. Frazier, “Aggregation at the physical layer,” IEEE Commun. Mag. 46(2), S12 (2008). [CrossRef]  

7. G. Nicholl, M. Gustlin, and O. Trainin, “A physical coding sublayer for 100GbE [Applications & Practice],” IEEE Commun. Mag. 45(12), 4–10 (2007). [CrossRef]  

8. H. Toyoda, G. Ono, and S. Nishimura, “100GbE PHY and MAC layer implementations,” IEEE Commun. Mag. 48(3), S41–S47 (2010). [CrossRef]  

9. G. Ono, K. Watanabe, T. Muto, H. Yamashita, K. Fukuda, N. Masuda, R. Nemoto, E. Suzuki, T. Takemoto, F. Yuki, M. Yagyu, H. Toyoda, A. Kambe, T. Saito, and S. Nishimura, “10:4 MUX and 4:10 DEMUX gearbox LSI for 100-Gigabit Ethernet link,” in Digest of Technical Papers of IEEE International Conference Solid-State Circuits (Institute of Electrical and Electronics Engineers, New York, 2011), pp. 148–150.

10. S. Araki, I. Nishioka, S. Ishida, Y. Iizawa, and M. Nakama, “Optical network control challenges,” IEEE/LEOS Summer Topicals (2009), pp. 139–140.

11. Y. Iizawa, S. Ishida, I. Nishioka, A. Tajima, and S. Araki, “Fast path signaling method for large-scale multi-domain networks,” iPOP2011, paper 3-1, Kanagawa, Japan, June 2–3, 2011.

12. J. P. Vasseur, ed., R. Zhang, N. Bitar, and J. L. Le Roux, “A backward recursive PCE-based computation (BRPC) procedure to compute shortest constrained inter-domain traffic engineering label switched paths,” IETF Internet-Draft draft-ietf-pce-brpc-09 (work in progress).

13. F. Chang, K. Onohara, and T. Mizuochi, “Forward error correction for 100 G transport networks,” IEEE Commun. Mag. 48(3), S48–S55 (2010). [CrossRef]  

14. Y. Akiyama, H. Nakashima, T. Hoshida, T. Inoue, S. Kametani, and K. Onohara, “Error-free 125Gb/s Polmux RZ-DQPSK transmission with concatenated LDPC and RS FEC over 560km (in Japanese),” Proc. IEICE General Conference, papers B-10-110 and B-10-111 (2011).

15. F. Parmigiani, R. Slavík, J. Kakande, C. Lundström, M. Sjödin, P. Andrekson, R. Weerasuriya, S. Sygletos, A. Ellis, L. Grüner-Nielsen, D. Jakobsen, S. Herstrøm, R. Phelan, J. O’Gorman, A. Bogris, D. Syvridis, S. Dasgupta, P. Petropoulos, and D. Richardson, “All-optical phase regeneration of 40Gbit/s DPSK signals in a black-box phase sensitive amplifier,” in Proc. OFC2010, post-deadline paper PDPC3 (2010).

16. M. Matsumoto, “A fiber-based all-optical 3R regenerator for DPSK signals,” IEEE Photon. Technol. Lett. 19(5), 273–275 (2007). [CrossRef]  

17. S. Arahira, H. Murai, and K. Fujii, “All-optical modulation-format convertor employing polarization-rotation-type nonlinear optical fiber loop mirror,” IEEE Photon. Technol. Lett. 20(18), 1530–1532 (2008). [CrossRef]  

18. H. Murai, Y. Kanda, M. Kagawa, and S. Arahira, “Regenerative SPM-based wavelength conversion and field demonstration of 160-Gb/s all-optical 3R operation,” J. Lightwave Technol. 28(6), 910–921 (2010). [CrossRef]  

Supplementary Material (1)

Media 1: MOV (3963 KB)     
