Network telemetry is a technology for gaining network insight and facilitating efficient and automated network management. It encompasses various techniques for remote data generation, collection, correlation, and consumption. This document describes an architectural framework for network telemetry, motivated by challenges that are encountered as part of the operation of networks and by the requirements that ensue. This document clarifies the terminologies and classifies the modules and components of a network telemetry system from different perspectives. The framework and taxonomy help to set a common ground for the collection of related work and provide guidance for related technique and standard developments.¶
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."¶
This Internet-Draft will expire on 2 June 2022.¶
Copyright (c) 2021 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.¶
Network visibility is the ability of management tools to see the state and behavior of a network, which is essential for successful network operation. Network Telemetry revolves around network data that can help provide insights about the current state of the network, including network devices, forwarding, control, and management planes, and that can be generated and obtained through a variety of techniques, including but not limited to network instrumentation and measurements, and that can be processed for purposes ranging from service assurance to network security using a wide variety of techniques including machine learning, data analysis, and correlation. In this document, Network Telemetry refers to both the data itself (i.e., "Network Telemetry Data"), and the techniques and processes used to generate, export, collect, and consume that data for use by potentially automated management applications. Network telemetry extends beyond the historical network Operations, Administration, and Management (OAM) techniques and is expected to support better flexibility, scalability, accuracy, coverage, and performance.¶
However, the term "network telemetry" lacks an unambiguous definition. Its scope and coverage cause confusion and misunderstanding. It is beneficial to clarify the concept and provide a clear architectural framework for network telemetry, so we can articulate the technical field and better align the related techniques and standards work.¶
To fulfill such an undertaking, we first discuss some key characteristics of network telemetry that clearly distinguish it from conventional network OAM and show that some conventional OAM technologies can be considered a subset of the network telemetry technologies. We then provide an architectural framework for network telemetry which includes four modules, each concerned with a different category of telemetry data and the corresponding procedures. All the modules are internally structured in the same way, including components that allow client applications to configure data sources in regard to what data to generate and how to make it available, components that instrument the underlying data sources, and components that perform the actual rendering, encoding, and exporting of the generated data. We show how the network telemetry framework can benefit current and future network operations. Based on the distinction of modules and function components, we can map existing and emerging techniques and protocols into the framework. The framework can also simplify the tasks of designing, maintaining, and understanding a network telemetry system. Finally, we outline the evolution stages of the network telemetry system and discuss potential security concerns.¶
The purpose of the framework and taxonomy is to set a common ground for the collection of related work and provide guidance for future technique and standard developments. To the best of our knowledge, this document is the first such effort for network telemetry in industry standards organizations.¶
Before further discussion, we list some key terminology and acronyms used in this document. We make a deliberate distinction between the terms network telemetry and OAM. However, it should be understood that there is not a hard-line distinction between the two concepts. Rather, network telemetry is considered an extension of OAM. It covers all the existing OAM protocols but puts more emphasis on the newer and emerging techniques and protocols concerning all aspects of network data, from acquisition to consumption.¶
The term "big data" is used to describe the extremely large volume of data sets that can be analyzed computationally to reveal patterns, trends, and associations. Networks are undoubtedly a source of big data because of their scale and the volume of network traffic they forward. When a network's endpoints do not represent individual users (e.g. in industrial, datacenter, and infrastructure contexts), network operations can often benefit from large-scale data collection without breaching user privacy.¶
Today one can access advanced big data analytics capability through a plethora of commercial and open source platforms (e.g., Apache Hadoop), tools (e.g., Apache Spark), and techniques (e.g., machine learning). Thanks to the advance of computing and storage technologies, network big data analytics gives network operators an opportunity to gain network insights and move towards network autonomy. Some operators have started to explore the application of Artificial Intelligence (AI) to make sense of network data. Software tools can use the network data to detect and react to network faults, anomalies, and policy violations, as well as to predict future events. In turn, network policy updates for planning, intrusion prevention, optimization, and self-healing may be applied.¶
It is conceivable that an autonomic network [RFC7575] is the logical next step for network evolution following Software Defined Network (SDN), aiming to reduce (or even eliminate) human labor, make more efficient use of network resources, and provide better services more aligned with customer requirements. The related technique of Intent-based Networking (IBN) [I-D.irtf-nmrg-ibn-concepts-definitions] requires network visibility and telemetry data in order to ensure that the network is behaving as intended.¶
However, while the data processing capability is improved and applications are hungry for more data, the networks lag behind in extracting and translating network data into useful and actionable information in efficient ways. The system bottleneck is shifting from data consumption to data supply. Both the number of network nodes and the traffic bandwidth keep increasing at a fast pace. The network configuration and policy change on shorter time scales than before. More subtle events and fine-grained data through all network planes need to be captured and exported in real time. In a nutshell, it is a challenge to get enough high-quality data out of the network in a manner that is efficient, timely, and flexible. Therefore, we need to survey the existing technologies and protocols and identify any potential gaps.¶
In the remainder of this section, we first clarify the scope of network data (i.e., telemetry data) considered in this context. Then, we discuss several key use cases for today's and future network operations. Next, we show why the current network OAM techniques and protocols are insufficient for these use cases. The discussion underlines the need for new methods, techniques, and protocols, as well as extensions of existing ones, which we assign under the umbrella term of network telemetry.¶
Any information that can be extracted from networks (including the data plane, control plane, and management plane) and used to gain visibility or as a basis for actions is considered telemetry data. It includes statistics, event records and logs, snapshots of state, configuration data, etc. It also covers the outputs of any active and passive measurements [RFC7799]. In some cases, raw data is processed in the network before being sent to a data consumer. Such processed data is also considered telemetry data. The value of telemetry data varies: a smaller amount of high-quality data is often more valuable than a large amount of low-quality data. A classification of telemetry data is provided in Section 3. To preserve user privacy, user packet content should not be collected.¶
The following set of use cases is essential for network operations. While the list is by no means exhaustive, it is enough to highlight the requirements for data velocity, variety, volume, and veracity, the attributes of big data, in networks.¶
For a long time, network operators have relied upon SNMP [RFC3416], Command-Line Interface (CLI), or Syslog [RFC5424] to monitor the network. Some other OAM techniques as described in [RFC7276] are also used to facilitate network troubleshooting. These conventional techniques are not sufficient to support the above use cases for the following reasons:¶
These challenges were addressed by newer standards and techniques (e.g., IPFIX/Netflow, Packet Sampling (PSAMP), IOAM, and YANG-Push) and more are emerging. These standards and techniques need to be recognized and accommodated in a new framework.¶
Network telemetry has emerged as a mainstream technical term to refer to the network data collection and consumption techniques. Several network telemetry techniques and protocols (e.g., IPFIX [RFC7011] and gRPC [grpc]) have been widely deployed. Network telemetry allows separate entities to acquire data from network devices so that data can be visualized and analyzed to support network monitoring and operation. Network telemetry covers the conventional network OAM and has a wider scope. It is expected that network telemetry can provide the necessary network insight for autonomous networks and address the shortcomings of conventional OAM techniques.¶
Network telemetry usually assumes machines as data consumers rather than human operators. Hence, network telemetry can directly trigger automated network operations, while in contrast some conventional OAM tools are designed and used to help human operators monitor and diagnose the networks and guide manual network operations. This difference leads to very different techniques.¶
Although new network telemetry techniques are emerging and subject to continuous evolution, several characteristics of network telemetry have been well accepted. Note that network telemetry is intended to be an umbrella term covering a wide spectrum of techniques, so the following characteristics are not expected to be held by every specific technique.¶
In addition, an ideal network telemetry solution may also have the following features or properties:¶
It is worth noting that a network telemetry system should not be intrusive to normal network operations, avoiding the pitfall of the "observer effect". That is, it should not change the network behavior or affect the forwarding performance. Moreover, high-volume telemetry traffic may cause network congestion unless proper isolation or traffic engineering techniques are in place, or congestion control mechanisms ensure that telemetry traffic backs off if it exceeds the network capacity. [RFC8084] and [RFC8085] are relevant Best Current Practices (BCP) in this space.¶
Although in many cases a system for network telemetry involves a remote data collecting and consuming entity, it is important to understand that there are no inherent assumptions about how a system should be architected. While a network architecture with a centralized controller (e.g., SDN) seems a natural fit for network telemetry, network telemetry can work in a distributed fashion as well. For example, telemetry data producers and consumers can have a peer-to-peer relationship, in which a network node can be the direct consumer of telemetry data from other nodes.¶
Network data analytics and machine-learning technologies are applied for network operation automation, relying on abundant and coherent data from networks. Data acquisition that is limited to a single source and static in nature will in many cases not be sufficient to meet an application's telemetry data needs. As a result, multiple data sources, involving a variety of techniques and standards, will need to be integrated. It is desirable to have a framework that classifies and organizes different telemetry data sources and types, defines different components of a network telemetry system and their interactions, and helps coordinate and integrate multiple telemetry approaches across layers. This allows flexible combinations of data for different applications, while normalizing and simplifying interfaces. In detail, such a framework would benefit application development for the following reasons:¶
A telemetry framework brings together all the telemetry-related work from different sources and working groups within the IETF. This makes it possible to assemble a comprehensive network telemetry system and to avoid repetitious or redundant work. The framework should cover the concepts and components from the standardization perspective. This document describes the modules which make up a network telemetry framework and decomposes the telemetry system into a set of distinct components that existing and future work can easily map to.¶
Disclaimer: large-scale network data collection is a major threat to user privacy [RFC7258]. The network telemetry framework presented in this document should not be applied to collect and retain individual user data or any data that can identify end users without consent. Any data collection or retention using the framework must be tightly limited to protect user privacy.¶
The top level network telemetry framework partitions the network telemetry into four modules based on the telemetry data object source and represents their relationship. At the next level, the framework decomposes each module into separate components. Each of the modules follows the same underlying structure, with one component dedicated to the configuration of data subscriptions and data sources, a second component dedicated to encoding and exporting data, and a third component instrumenting the generation of telemetry related to the underlying resources. Throughout the framework, the same set of abstract data acquiring mechanisms and data types (Section 3.3) are applied. The two-level architecture with the uniform data abstraction helps accurately pinpoint a protocol or technique to its position in a network telemetry system or disaggregate a network telemetry system into manageable parts.¶
Telemetry can be applied on the forwarding plane, the control plane, and the management plane in a network, as well as on other sources outside of the network, as shown in Figure 1. Therefore, we categorize network telemetry into four distinct modules, each having its own interface to Network Operation Applications.¶
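As a non-normative illustration only, the following Python sketch renders this two-level decomposition (four modules, each sharing the same configuration, generation, and export component structure) as a simple data structure. The class name, field names, and example protocols are placeholders chosen for readability, not definitions made by this framework.¶

   # Illustrative sketch (not normative) of the two-level decomposition:
   # four telemetry modules that share the same internal component layout.
   from dataclasses import dataclass, field
   from typing import List

   @dataclass
   class TelemetryModule:
       """One telemetry module and its three shared components."""
       data_object: str                                          # what the module observes
       configuration: List[str] = field(default_factory=list)   # subscriptions / data-source config
       generation: List[str] = field(default_factory=list)      # instrumentation of underlying sources
       export: List[str] = field(default_factory=list)          # rendering, encoding, and export

   framework = {
       "management": TelemetryModule("device state and statistics",
                                     export=["gNMI", "YANG-Push"]),
       "control":    TelemetryModule("protocol health", export=["BMP"]),
       "forwarding": TelemetryModule("per-packet / per-flow data",
                                     export=["IPFIX", "IOAM"]),
       "external":   TelemetryModule("events outside the network"),
   }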
The rationale for this partition lies in the different telemetry data objects, which result in different data sources and export locations. Such differences have profound implications on in-network data programming and processing capabilities, data encoding and transport protocols, and required data bandwidth and latency. Data can be sent directly, or proxied via the control and management planes. There are advantages and disadvantages to both approaches.¶
Note that in some cases the network controller itself may be the source of telemetry data that is unique to it or derived from the telemetry data collected from the network elements. Some of the principles and taxonomy specific to the control plane and management plane telemetry could also be applied to the controller when it is required to provide the telemetry data to Network Operation Applications hosted outside. The scope of the document is focused on the network elements telemetry and further details related to controllers are thus out of scope.¶
We summarize the major differences of the four modules in the following table. They are compared from six angles:¶
Data Object is the target and source of each module. Because the data source varies, the location where data is most conveniently exported also varies. For example, forwarding plane data mainly originates as data exported from the forwarding Application-Specific Integrated Circuits (ASICs), while control plane data mainly originates from the protocol daemons running on the control CPU(s). For convenience and efficiency, it is preferred to export the data off the device from locations near the source. Because the locations that can export data have different capabilities, different choices of data model, encoding, and transport method are made to balance the performance and cost. For example, the forwarding chip has high throughput but limited capacity for processing complex data and maintaining states, while the main control CPU is capable of complex data and state processing, but has limited bandwidth for high throughput data. As a result, the suitable telemetry protocol for each module can be different. Some representative techniques are shown in the corresponding table blocks to highlight the technical diversity of these modules. Note that the selected techniques just reflect the de facto state of the art and are by no means exhaustive (e.g., IPFIX can also be implemented over TCP and SCTP, but that is not recommended for the forwarding plane). The key point is that one cannot expect to use a universal protocol to cover all the network telemetry requirements.¶
Note that the interaction with the applications that consume network telemetry data can be indirect. Some in-device data transfer is possible. For example, in the management plane telemetry, the management plane will need to acquire data from the data plane. Some operational states can only be derived from data plane data sources such as the interface status and statistics. As another example, obtaining control plane telemetry data may require the ability to access the Forwarding Information Base (FIB) of the data plane.¶
On the other hand, an application may involve more than one plane and interact with multiple planes simultaneously. For example, an SLA compliance application may require both the data plane telemetry and the control plane telemetry.¶
The requirements and challenges for each module are summarized as follows (note that the requirements may pertain across all telemetry modules; however, we emphasize those that are most pronounced for a particular plane).¶
The management plane of network elements interacts with the Network Management System (NMS) and provides information such as performance data, network logging data, network warning and defect data, and network statistics and state data. The management plane includes many protocols, including some that are considered "legacy", such as SNMP and syslog. Regardless of the protocol, management plane telemetry must address the following requirements:¶
The control plane telemetry refers to the health condition monitoring of different network control protocols at all layers of the protocol stack. Keeping track of the operational status of these protocols is beneficial for detecting, localizing, and even predicting various network issues, as well as network optimization, in real-time and with fine granularity. Some particular challenges and issues faced by the control plane telemetry are as follows:¶
An effective forwarding plane telemetry system relies on the data that the network device can expose. The quality, quantity, and timeliness of data must meet some stringent requirements. This raises some challenges to the network data plane devices where the first-hand data originates.¶
Although not specific to the forwarding plane, these challenges are more difficult for the forwarding plane because of its limited resources and flexibility. Data plane programmability is essential to support network telemetry. Newer data plane forwarding chips are equipped with advanced telemetry features and provide flexibility to support customized telemetry functions.¶
Technique Taxonomy: with respect to how one instruments the telemetry, there are multiple possible dimensions along which to classify the forwarding plane telemetry techniques.¶
Events that occur outside the boundaries of the network system are another important source of network telemetry. Correlating both internal telemetry data and external events with the requirements of network systems, as presented in [I-D.pedro-nmrg-anticipated-adaptation], provides a strategic and functional advantage to management operations.¶
As with other sources of telemetry information, the data and events must meet strict requirements, especially in terms of timeliness, which is essential to properly incorporate external event information into network management applications. The specific challenges are described as follows:¶
Organizing both internal and external telemetry information together will be key for the general exploitation of the management possibilities of current and future network systems, as reflected in the incorporation of cognitive capabilities to new hardware and software (virtual) elements.¶
The telemetry module at each plane can be further partitioned into five distinct conceptual components:¶
Broadly speaking, network data can be acquired through subscription (push) and query (poll). A subscription is a contract between publisher and subscriber. After the initial setup, the subscribed data is automatically delivered to registered subscribers until the subscription expires. There are two variations of subscription: the subscriptions can either be pre-defined, or the subscribers can configure and tailor the published data to their specific needs.¶
In contrast, queries are used when a client expects immediate and one-off feedback from network devices. The queried data may be directly extracted from some specific data source, or synthesized and processed from raw data. Queries work well for interactive network telemetry applications.¶
In general, data can be pulled (i.e., queried) whenever needed, but in many cases, pushing the data (i.e., subscription) is more efficient, and can reduce the latency of a client detecting a change. From the data consumer point of view, there are four types of data from network devices that a telemetry data consumer can subscribe or query:¶
The above telemetry data types are not mutually exclusive. Rather, they are often composite. Derived data is composed of simple data; event-triggered data can be simple or derived; streaming data can be based on some recurring event. The relationships of these data types are illustrated in Figure 4.¶
Subscription usually deals with event-triggered data and streaming data, and query usually deals with simple data and derived data, but other combinations are also possible. Advanced network telemetry techniques are designed mainly for event-triggered or streaming data subscription and derived data query.¶
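As an illustration only, the following Python sketch contrasts the two acquisition modes described above; it is not tied to any particular telemetry protocol, and the class, method, and path names are hypothetical.¶

   # Minimal, non-normative sketch: query (poll) returns a one-off answer,
   # while subscription (push) delivers data until the contract ends.
   from typing import Callable, Dict, List, Tuple

   class TelemetrySource:
       """Hypothetical producer supporting both acquisition modes."""
       def __init__(self) -> None:
           self.counters: Dict[str, int] = {"if0/in-octets": 0}
           self.subscribers: List[Tuple[str, Callable[[str, int], None]]] = []

       # Query (poll): immediate, one-off feedback; may return simple data
       # (a raw counter) or derived data synthesized from raw sources.
       def query(self, path: str) -> int:
           return self.counters[path]

       # Subscription (push): a contract; data is delivered to registered
       # subscribers until the subscription is cancelled or expires.
       def subscribe(self, path: str,
                     callback: Callable[[str, int], None]) -> None:
           self.subscribers.append((path, callback))

       def publish(self) -> None:
           # Invoked on a timer (streaming data) or on an event
           # (event-triggered data).
           for path, callback in self.subscribers:
               callback(path, self.counters[path])

   source = TelemetrySource()
   print(source.query("if0/in-octets"))                          # poll once
   source.subscribe("if0/in-octets", lambda p, v: print(p, v))   # push thereafter
   source.publish()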
The following table shows how the existing mechanisms (mainly published in IETF and with the emphasis on the latest new technologies) are positioned in the framework. Given the vast body of existing work, we cannot provide an exhaustive list, so the mechanisms in the tables should be considered as just examples. Also, some comprehensive protocols and techniques may cover multiple aspects or modules of the framework, so a name in a block only emphasizes one particular characteristic of it. More details about some listed mechanisms can be found in Appendix A.¶
Although the framework is generally suitable for any network environment, multi-domain telemetry poses some unique challenges that deserve further architectural consideration, which is out of the scope of this document.¶
Network telemetry is an evolving technical area. As the network moves towards automated operation, network telemetry applications undergo several stages of evolution which add new layers of requirements to the underlying network telemetry techniques. Each stage is built upon the techniques adopted by the previous stages plus some new requirements.¶
Existing technologies are ready for stage 0 and stage 1. Individual stage 2 and stage 3 applications are also possible now. However, the future autonomic networks may need a comprehensive operation management system which works at stage 2 and stage 3 to cover all the network operation tasks. A well-defined network telemetry framework is the first step towards this direction.¶
The complexity of network telemetry raises significant security implications. For example, telemetry data can be manipulated to exhaust various network resources at each plane as well as the data consumer; falsified or tampered data can mislead the decision-making and paralyze networks; wrong configuration and programming for telemetry is equally harmful. The telemetry data is highly sensitive, which exposes a lot of information about the network and its configuration. Some of that information can make designing attacks against the network much easier (e.g., exact details of what software and patches have been installed), and allows an attacker to determine whether a device may be subject to unprotected security vulnerabilities.¶
Given that this document has proposed a framework for network telemetry and the telemetry mechanisms discussed are more extensive (in both message frequency and traffic amount) than the conventional network OAM concepts, we must also reflect that various new security considerations may also arise. A number of techniques already exist for securing the forwarding plane, the control plane, and the management plane in a network, but it is important to consider if any new threat vectors are now being enabled via the use of network telemetry procedures and mechanisms.¶
Security considerations for networks that use telemetry methods may include:¶
Some security considerations highlighted above may be minimized or negated with policy management of network telemetry. In a network telemetry deployment it would be advantageous to separate telemetry capabilities into different classes of policies, i.e., Role Based Access Control and Event-Condition-Action policies. Also, potential conflicts between network telemetry mechanisms must be detected accurately and resolved quickly to avoid unnecessary network telemetry traffic propagation escalating into an unintended or intended denial of service attack.¶
Further study of the security issues will be required, and it is expected that the security mechanisms and protocols are developed and deployed along with a network telemetry system.¶
This document includes no request to IANA.¶
The other contributors to this document are Tianran Zhou, Zhenbin Li, Zhenqiang Li, Daniel King, Adrian Farrel, and Alexander Clemm.¶
We would like to thank Rob Wilton, Greg Mirsky, Randy Presuhn, Joe Clarke, Victor Liu, James Guichard, Uri Blumenthal, Giuseppe Fioccola, Yunan Gu, Parviz Yegani, Young Lee, Qin Wu, Gyan Mishra, Ben Schwartz, Alexey Melnikov, Michael Scharf, Dhruv Dhody, Martin Duke, and many others who have provided helpful comments and suggestions to improve this document.¶
In this non-normative appendix, we provide an overview of some existing techniques and standard proposals for each network telemetry module.¶
NETCONF [RFC6241] is a popular network management protocol recommended by the IETF. Its core strength is managing configuration, but it can also be used for data collection. YANG-Push [RFC8641] [RFC8639] extends NETCONF and enables subscriber applications to request a continuous, customized stream of updates from a YANG datastore. Providing such visibility into changes made upon YANG configuration and operational objects enables new capabilities based on the remote mirroring of configuration and operational state. Moreover, the distributed data collection mechanism [I-D.ietf-netconf-distributed-notif] via a UDP-based publication channel [I-D.ietf-netconf-udp-notif] provides enhanced efficiency for NETCONF-based telemetry.¶
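As a non-normative illustration, the following Python sketch establishes a periodic YANG-Push subscription over NETCONF using the open-source ncclient library. The device address, credentials, and XPath filter are placeholders, and the device is assumed to implement [RFC8639] and [RFC8641].¶

   # Sketch only: establish a periodic YANG-Push subscription over NETCONF.
   from ncclient import manager
   from ncclient.xml_ import to_ele

   ESTABLISH = """
   <establish-subscription
       xmlns="urn:ietf:params:xml:ns:yang:ietf-subscribed-notifications"
       xmlns:yp="urn:ietf:params:xml:ns:yang:ietf-yang-push">
     <yp:datastore
         xmlns:ds="urn:ietf:params:xml:ns:yang:ietf-datastores">ds:operational</yp:datastore>
     <yp:datastore-xpath-filter
         xmlns:if="urn:ietf:params:xml:ns:yang:ietf-interfaces">/if:interfaces</yp:datastore-xpath-filter>
     <yp:periodic>
       <yp:period>500</yp:period>  <!-- in centiseconds, i.e., every 5 seconds -->
     </yp:periodic>
   </establish-subscription>
   """

   with manager.connect(host="192.0.2.1", port=830, username="admin",
                        password="admin", hostkey_verify=False) as m:
       reply = m.dispatch(to_ele(ESTABLISH))   # subscription id returned on success
       print(reply)
       # Subsequent "push-update" notifications arrive on the same session
       # and can typically be read with m.take_notification().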
gRPC Network Management Interface (gNMI) [gnmi] is a network management protocol based on the gRPC [grpc] RPC (Remote Procedure Call) framework. With a single gRPC service definition, both configuration and telemetry can be covered. gRPC is an HTTP/2 [RFC7540]-based open-source micro-service communication framework. It provides a number of capabilities which are well-suited for network telemetry, including:¶
BGP Monitoring Protocol (BMP) [RFC7854] is used to monitor BGP sessions and is intended to provide a convenient interface for obtaining route views.¶
The BGP routing information is collected from the monitored device(s) to the BMP monitoring station by setting up the BMP TCP session. The BGP peers are monitored by the BMP Peer Up and Peer Down Notifications. The BGP routes (including Adjacency_RIB_In [RFC7854], Adjacency_RIB_out [RFC8671], and Local_Rib [I-D.ietf-grow-bmp-local-rib]) are encapsulated in the BMP Route Monitoring Message and the BMP Route Mirroring Message, providing both an initial table dump and real-time route updates. In addition, BGP statistics are reported through the BMP Stats Report Message, which could be either timer triggered or event-driven. Future BMP extensions could further enrich BGP monitoring applications.¶
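The following non-normative Python sketch shows how a monitoring station might decode the 6-octet BMP common header defined in Section 4.1 of [RFC7854] in order to demultiplex the message types listed above; the sample bytes are fabricated.¶

   # Sketch: parse the BMP common header (version, length, message type).
   import struct

   BMP_MSG_TYPES = {
       0: "Route Monitoring",
       1: "Statistics Report",
       2: "Peer Down Notification",
       3: "Peer Up Notification",
       4: "Initiation",
       5: "Termination",
       6: "Route Mirroring",
   }

   def parse_bmp_common_header(data: bytes):
       """Return (version, total_length, type_name) of one BMP message."""
       version, length, msg_type = struct.unpack("!BIB", data[:6])
       return version, length, BMP_MSG_TYPES.get(msg_type, "Unknown")

   # Fabricated example: a header announcing a 6-octet Initiation message.
   print(parse_bmp_common_header(bytes([3, 0, 0, 0, 6, 4])))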
The Alternate Marking method enables efficient measurements of packet loss, delay, and jitter both in IP and Overlay Networks, as presented in [RFC8321] and [RFC8889].¶
This technique can be applied to point-to-point and multipoint-to-multipoint flows. Alternate Marking creates batches of packets by alternating the value of 1 bit (or a label) of the packet header. These batches of packets are unambiguously recognized over the network and the comparison of packet counters for each batch allows the packet loss calculation. The same idea can be applied to delay measurement by selecting ad hoc packets with a marking bit dedicated for delay measurements.¶
The Alternate Marking method needs two counters for each marking period for each monitored flow. For instance, considering n measurement points and m monitored flows, the order of magnitude of the packet counters for each time interval is n*m*2 (i.e., one per color).¶
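The sketch below (Python, non-normative) illustrates the per-color counter bookkeeping and the per-batch loss computation described above; the counter values are fabricated.¶

   # Sketch: per-batch packet loss from per-color counters at two
   # measurement points for one monitored flow.
   def batch_loss(upstream_counters, downstream_counters, color):
       """Counters are dicts keyed by color (0 or 1) with per-batch counts."""
       return upstream_counters[color] - downstream_counters[color]

   # Two measurement points, one flow, two colors -> 2 * 1 * 2 = 4 counters.
   upstream   = {0: 1000, 1: 998}
   downstream = {0:  997, 1: 998}
   print(batch_loss(upstream, downstream, color=0))   # 3 packets lost in the color-0 batch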
Since networks offer rich sets of network performance measurement data (e.g., packet counters), conventional approaches run into limitations. The bottleneck is the generation and export of the data and the amount of data that can be reasonably collected from the network. In addition, management tasks related to determining and configuring which data to generate lead to significant deployment challenges.¶
The Multipoint Alternate Marking approach, described in [RFC8889], aims to resolve this issue and make the performance monitoring more flexible in case a detailed analysis is not needed.¶
An application orchestrates network performance measurements tasks across the network to allow for optimized monitoring. The application can choose how roughly or precisely to configure measurement points depending on the application's requirements.¶
Using Alternate Marking, it is possible to monitor a multipoint network without in-depth examination by using Network Clustering (subnetworks that are portions of the entire network that preserve the same property of the entire network, called clusters). So, in case there is packet loss or the delay is too high, specific filtering criteria can be applied to perform a more detailed analysis by using a different combination of clusters, down to a per-flow measurement as described in Alternate Marking (AM) [RFC8321].¶
In summary, an application can configure end-to-end network monitoring. If the network does not experience issues, this approximate monitoring is good enough and is very cheap in terms of network resources. However, in case of problems, the application becomes aware of the issues from this approximate monitoring and, in order to localize the portion of the network that has issues, configures the measurement points more extensively, allowing more detailed monitoring to be performed. After the detection and resolution of the problem, the initial approximate monitoring can be used again.¶
Hardware-based Dynamic Network Probe (DNP) [I-D.song-opsawg-dnp4iq] proposes a programmable means to customize the data that an application collects from the data plane. A direct benefit of DNP is the reduction of the exported data. A full DNP solution covers several components including data source, data subscription, and data generation. The data subscription needs to define the derived data which can be composed and derived from the raw data sources. The data generation takes advantage of the moderate in-network computing to produce the desired data.¶
While DNP can introduce unforeseeable flexibility to the data plane telemetry, it also faces some challenges. It requires a flexible data plane that can be dynamically reprogrammed at run-time. The programming API is yet to be defined.¶
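Since the DNP programming API is not yet defined, the following Python sketch is purely hypothetical; it only illustrates the general idea of composing a small derived data object in the network (here, a coarse latency histogram) so that raw samples need not be exported.¶

   # Hypothetical sketch: in-network composition of derived data from raw
   # data sources, reducing the volume of exported telemetry.
   from collections import Counter

   def queue_latency_histogram(samples_ns, bucket_ns=10_000):
       """Derive a coarse latency histogram from raw per-packet queue delays."""
       histogram = Counter(delay // bucket_ns for delay in samples_ns)
       return dict(histogram)           # small derived object, cheap to export

   raw_delays = [1200, 5400, 23000, 980, 41000]   # fabricated raw samples
   print(queue_latency_histogram(raw_delays))     # {0: 3, 2: 1, 4: 1}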
Traffic on a network can be seen as a set of flows passing through network elements. IP Flow Information Export (IPFIX) [RFC7011] provides a means of transmitting traffic flow information for administrative or other purposes. A typical IPFIX enabled system includes a pool of Metering Processes that collects data packets at one or more Observation Points, optionally filters them and aggregates information about these packets. An Exporter then gathers each of the Observation Points together into an Observation Domain and sends this information via the IPFIX protocol to a Collector.¶
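As a non-normative illustration, the following Python sketch decodes the 16-octet IPFIX Message Header defined in Section 3.1 of [RFC7011], as a Collector would before walking the Sets that follow; the header values are fabricated.¶

   # Sketch: decode the IPFIX Message Header (version 10).
   import struct
   from datetime import datetime, timezone

   def parse_ipfix_header(data: bytes):
       version, length, export_time, seq, domain_id = struct.unpack("!HHIII", data[:16])
       assert version == 10, "IPFIX Messages carry version number 10"
       return {
           "length": length,                         # total message length in octets
           "export_time": datetime.fromtimestamp(export_time, tz=timezone.utc),
           "sequence": seq,
           "observation_domain_id": domain_id,
       }

   header = struct.pack("!HHIII", 10, 16, 1700000000, 42, 256)   # fabricated values
   print(parse_ipfix_header(header))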
Classical passive and active monitoring and measurement techniques are either inaccurate or resource-consuming. It is preferable to directly acquire data associated with a flow's packets when the packets pass through a network. In situ OAM (IOAM) [I-D.ietf-ippm-ioam-data], a data generation technique, embeds a new instruction header into user packets, and the instruction directs the network nodes to add the requested data to the packets. Thus, at the path end, the packet's experience gained on the entire forwarding path can be collected. Such firsthand data is invaluable to many network OAM applications.¶
However, IOAM also faces some challenges. The issues of performance impact, security, scalability and overhead limits, encapsulation difficulties in some protocols, and cross-domain deployment need to be addressed.¶
Postcard-Based Telemetry (PBT), as embodied in IOAM DEX [I-D.ietf-ippm-ioam-direct-export] and IOAM Marking [I-D.song-ippm-postcard-based-telemetry], is a complementary technique to the passport-based IOAM. PBT directly exports data at each node through an independent packet. At the cost of higher bandwidth overhead and the need for data correlation, PBT shows several unique advantages. It can also help to identify the packet drop location in case a packet is dropped on its forwarding path.¶
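The following Python sketch is a purely conceptual, non-normative comparison of the passport and postcard export models; the node names, record fields, and functions are hypothetical and do not reflect the actual IOAM or PBT encodings.¶

   # Conceptual sketch: passport accumulates data in the packet; postcard
   # exports a record per node and requires later correlation.
   def node_timestamp(node: str) -> int:
       return 0   # stand-in for a per-node clock

   def passport_forward(packet: dict, path: list) -> dict:
       # Each node appends its data to the packet; the accumulated trace is
       # collected once, at the end of the path.
       for node in path:
           packet["ioam_trace"].append({"node": node, "ts": node_timestamp(node)})
       return packet

   def postcard_forward(packet: dict, path: list, collector: list) -> dict:
       # Each node exports a small record directly; the user packet carries
       # no accumulated telemetry data, but the collector must correlate
       # the postcards afterwards.
       for node in path:
           collector.append({"flow": packet["flow"], "node": node,
                             "ts": node_timestamp(node)})
       return packet

   collector: list = []
   pkt = {"flow": "A->B", "ioam_trace": []}
   passport_forward(pkt, ["R1", "R2", "R3"])
   postcard_forward(pkt, ["R1", "R2", "R3"], collector)
   print(len(pkt["ioam_trace"]), len(collector))   # 3 in-packet entries, 3 postcards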
Various data planes raise unique OAM requirements. The IETF has published OAM technique and framework documents (e.g., [RFC8924] and [RFC5085]) targeting different data planes such as Multiprotocol Label Switching (MPLS), L2 Virtual Private Network (L2-VPN), Network Virtualization Overlays (NVO3), Virtual Extensible LAN (VXLAN), Bit Index Explicit Replication (BIER), Service Function Chaining (SFC), Segment Routing (SR), and Deterministic Networking (DetNet). The aforementioned data plane telemetry techniques can be used to enhance the OAM capability on such data planes.¶
To ensure that the information provided by external event detectors and used by the network management solutions is meaningful for management purposes, the network telemetry framework must ensure that such detectors (sources) are easily connected to the management solutions (sinks). This requires specifying a list of potential external data sources that could be of interest in network management and matching them to the connectors and/or interfaces required to connect them.¶
Categories of external event sources that may be of interest to network management include:¶
Additional detector types can be added to the system, but they will generally be the result of composing the properties offered by these main classes.¶
To allow external event detectors to be properly integrated with other management solutions, both elements must expose interfaces and protocols suited to their particular objectives. Since external event detectors will be focused on providing their information to their main consumers, which generally will not be limited to the network management solutions, the framework must include the definition of the required connectors to ensure that the interconnection between detectors (sources) and their consumers within the management systems (sinks) is effective.¶
In some situations, the interconnection between the external event detectors and the management system is via the management plane. For those situations there will be a special connector that provides the typical interfaces found in most other elements connected to the management plane. For instance, the interfaces could accomplish this with a specific data model (YANG) and specific telemetry protocol, such as NETCONF, YANG-Push, or gRPC.¶
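As a purely illustrative, non-normative sketch, the following Python fragment shows such a connector normalizing a hypothetical external event and handing it to a management-plane sink; the event fields, class names, and transport are all assumptions and are not defined by this framework.¶

   # Sketch: an external event detector (source) normalizes its events and
   # hands them to a management-plane sink over whatever telemetry protocol
   # the deployment uses (e.g., NETCONF, YANG-Push, or gRPC).
   import json
   import time

   class ManagementPlaneSink:
       """Stand-in for a NETCONF notification / YANG-Push / gRPC channel."""
       def deliver(self, notification: dict) -> None:
           print(json.dumps(notification))

   def connector(raw_event: dict, sink: ManagementPlaneSink) -> None:
       sink.deliver({
           "event-time": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
           "source": raw_event.get("detector", "external"),
           "severity": raw_event.get("severity", "info"),
           "description": raw_event.get("text", ""),
       })

   connector({"detector": "power-grid-monitor", "severity": "major",
              "text": "utility feed lost at site 12"}, ManagementPlaneSink())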