This special issue contains a collection of invited and contributed papers, including one review article and twelve research articles, on advanced monitoring and telemetry in optical networks.
© 2021 Optical Society of America
Service providers have made it clear that they are looking beyond physical capabilities when selecting their networking ecosystems. The network’s ability to collect, aggregate, and analyze performance metrics and take action in response to this data analysis is of growing importance, with an increased emphasis on streaming telemetry rather than legacy polling techniques. In contrast to network attributes such as transmission capacity, where growth has been slowed by physical limits, advancements in monitoring and processing capabilities have accelerated the potential use of telemetry to drive network operation. The challenge is to focus the effort and harness the collected data to maximize the advantages that can be obtained. To fully investigate these challenges, this special issue is focused on the intelligent use of monitoring and telemetry to control and manage optical networks.
While increased monitoring potentially exposes an array of performance metrics, data explosion is a prime concern. Solutions must remain scalable; it is essential to determine which telemetric data can be most effective in network management, whether it be automating common tasks or driving new and advanced applications. Other fundamental considerations are the monitoring frequency and where the data should be collected and/or analyzed (e.g., distributed at the network elements or centralized; at the packet layer or the optical layer), along with the ensuing ramifications for the communication/processing burden and for optimality.
Several works in this special issue focus on monitoring and telemetry aspects for optical and packet networks. In “Machine-learning-based telemetry for monitoring long-haul optical transmission impairments: methodologies and challenges” the authors thoroughly review the state of the art of telemetry systems and propose a unified workflow for designing telemetry modules. The paper “Concept and implementation study of advanced DSP-based fiber-longitudinal optical power profile monitoring toward optical network tomography” concentrates on multi-span optical tomography, whereas the authors of “Opening up ROADMs: streaming telemetry” demonstrate streaming telemetry with sub-second updates of the full C-band at sub-GHz resolution. At the packet layer, the work “FPGA-based network microburst analysis system with efficient packet capturing” focuses on the capture and analysis of sub-millisecond packet bursts, as they can cause network latency and packet loss. Finally, the paper “Reliable and scalable Kafka-based framework for optical network telemetry” presents a framework that exploits the built-in scalability and reliability of Apache Kafka.
The ultimate goal, of course, is to take full advantage of the collected telemetry data, and machine learning (ML) techniques can be used to analyze it. One can envision a wide range of applications; e.g., performance data can be tracked for purposes of failure prediction, detection, and localization. In “Reflective fiber fault detection and characterization using long-short-term memory,” the authors propose a multi-task learning model, trained on data obtained via optical time-domain reflectometry, for the detection and localization of reflective fiber faults. The paper “Machine-learning-based soft-failure localization with partial software-defined networking telemetry” evaluates an ML-based soft-failure localization framework in scenarios of partial telemetry. The authors of “Forecasting loss of signal in optical networks with machine learning” found that loss-of-light events can be forecasted with good precision 1–7 days before they occur. Finally, the work “Monitoring and diagnostic technologies using deep neural networks for predictive optical network maintenance” describes several workflows to collect monitoring data from whitebox transponders, apply ML to perform diagnosis, and notify the management system of the results.
Additionally, telemetry can be used for parameter estimation. The authors of “Experimental validation of CNNs versus FFNNs for time- and energy-efficient EVM estimation in coherent optical systems” explore two ML-based error vector magnitude estimation schemes for mQAM signal quality monitoring purposes, whereas those of “QoT assessment of the optical spectrum as a service in disaggregated network scenarios” use commercially available sliceable coherent transceivers to assess the generalized signal-to-noise ratio (GSNR)-based QoT for multi-domain optical services.
Finally, it is natural for monitoring to become an integral part of a software-defined network to drive automation. The authors of “Autonomous Raman amplifiers in multi-band software-defined optical transport networks” present a controller architecture for autonomously managed multi-band Raman amplification, and those of “Cognitive and autonomous QoT-driven optical line controller” propose and experimentally test a vendor-agnostic optical line controller architecture to autonomously set the working point of the optical amplifiers.
It has been a great pleasure for all of us to guest-edit this special issue, as the papers are of high quality. We hope our readers will find inspiration in this rich body of research to exploit monitoring and telemetry for these and other network applications.
Last but not least, we would like to extend special thanks to our Editor-in-Chief, Jane Simmons, who took a very active role in shaping this special issue and whose dedication and promptness ensured both improvements in the quality of many papers and timely publication of the special issue.
Lead Guest Editor
KDDI Research, Inc., Japan