This document explains why and how semantic metadata annotation helps to test and validate outlier detection, supports supervised and semi-supervised machine learning development, and makes anomalies comprehensible to humans. The proposed semantics unify the exchange of network anomaly data between and among operators and vendors to improve their network outlier detection systems.¶
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.¶
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."¶
This Internet-Draft will expire on 25 April 2024.¶
Copyright (c) 2023 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.¶
Network Anomaly Detection Architecture [Ahf23] provides an overall introduction to how anomaly detection is applied in the IP network domain and which operational data is needed. It approaches the problem space by automating what a Network Engineer would normally do when verifying a network connectivity service: monitoring from different network plane perspectives to understand whether one network plane negatively affects another.¶
In order to fine-tune outlier detection, the results provided as analytical data need to be reviewed by a Network Engineer. This keeps the human out of the continuous monitoring process while still involving them in the alert verification loop.¶
This document describes which information a Network Engineer needs to understand the output of outlier detection. At the same time, this information is semantically structured so that it can be used for outlier detection testing, by comparing results systematically, and to set a baseline for supervised machine learning, which requires labeled operational data.¶
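As a rough illustration of the testing use case, the sketch below compares detector output against human-validated labels that share the same symptom semantics; all field and variable names here are assumptions of this sketch, not part of a defined data model.¶

```python
# Hypothetical illustration: comparing detector output against human-validated
# labels that share the same symptom semantics (action/reason/relation).
# All names and values below are assumptions of this sketch.

detected = {("drop", "unreachable", "next-hop"), ("delay", "max", None)}
labeled = {("drop", "unreachable", "next-hop"), ("drop", "administered", "access-list")}

true_positives = detected & labeled
precision = len(true_positives) / len(detected) if detected else 0.0
recall = len(true_positives) / len(labeled) if labeled else 0.0

print(f"precision={precision:.2f} recall={recall:.2f}")  # precision=0.50 recall=0.50
```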
Outlier Detection, also known as anomaly detection, describes a systematic approach to identifying rare data points that deviate significantly from the majority. Outliers are commonly classified into three categories:¶
Global outliers: A data point deviates from the entire remainder of the data set.¶
Contextual outliers: A data point deviates only within a specific context, such as the time of day.¶
Collective outliers: A group of related data points deviates as a whole, even though each individual data point may appear normal.¶
For each outlier, a score between 0 and 1 is calculated. The higher the value, the higher the probability that the observed data point is an outlier. "Anomaly Detection: A Survey" [VAP09] gives additional details on anomaly detection and its types.¶
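The scoring function itself is an implementation choice and is not defined by this document; the following minimal sketch assumes a robust z-score squashed into the range [0, 1] purely for illustration.¶

```python
import math
import statistics

def outlier_score(value, history):
    """Map a data point to a score in [0, 1]; higher means more likely an outlier.

    Uses a median/MAD-based z-score squashed with a logistic function.
    This is only an illustrative scorer, not one mandated by this document.
    """
    med = statistics.median(history)
    mad = statistics.median(abs(x - med) for x in history) or 1e-9
    z = abs(value - med) / (1.4826 * mad)
    return 1.0 / (1.0 + math.exp(-(z - 3.0)))  # ~0 for typical points, -> 1 for extremes

history = [100, 102, 98, 101, 99, 103, 100]
print(outlier_score(100, history))   # low score: typical value
print(outlier_score(400, history))   # high score: likely outlier
```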
The Data Mesh Architecture [Deh22] distinguishes between operational and analytical data. Operational data refers to data collected from operational systems, while analytical data refers to insights gained from operational data.¶
In terms of network observability, the semantics of operational network metrics are defined by the IETF and are categorized, as described in the Network Telemetry Framework [RFC9232], into the following three network planes: the Forwarding Plane, the Control Plane, and the Management Plane.¶
In terms of network observability, the semantics of analytical data refer to incident notifications or service level indicators. Examples are the incident notification described in Section 7.2 of [I-D.feng-opsawg-incident-management], the health status and symptoms described in Service Assurance for Intent-Based Networking [RFC9418], the precision availability metrics defined in [I-D.ietf-ippm-pam], or network anomalies and their symptoms as described in this document.¶
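The following small sketch contrasts the two kinds of data with hypothetical record layouts; neither structure is defined here or in the referenced documents.¶

```python
# Hypothetical record layouts illustrating the operational/analytical split.
# Neither structure is defined by this document; they are assumptions for the sketch.

operational_record = {
    "plane": "forwarding",
    "exporter": "node1",
    "flow": {"src": "192.0.2.1", "dst": "198.51.100.7"},
    "forwarding_status": "dropped",          # raw measurement from the network
}

analytical_record = {
    "anomaly_id": "example-0001",
    "score": 0.93,                            # outlier score derived from operational data
    "symptom": {"action": "drop", "reason": "unreachable", "relation": "next-hop"},
}
```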
In this section, observed network symptoms are specified and categorized according to the following scheme:¶
Action: Which action the network node performed, for a packet in the Forwarding Plane, for a path or adjacency in the Control Plane, or for state or statistical changes in the Management Plane. For the Forwarding Plane we distinguish between missing, where the drop occurred outside the measured network node, and drop and on-path delay, which are measured on the network node. For the Control Plane we distinguish between reachability, which refers to a change in the routing or forwarding information base (RIB/FIB), and adjacency, which refers to a change in peering or link-layer resolution. For the Management Plane we refer to state or statistical changes on interfaces.¶
Reason: For each action, one or more reasons describe why this action was taken. For drops in the Forwarding Plane we distinguish between unreachable, because network layer reachability information was missing, administered, because an administrator configured a rule preventing the forwarding of this packet, and corrupt, where the network node was unable to determine where to forward to due to a packet, software, or hardware error. For on-path delay we distinguish between minimum, mean, and maximum delay for a given flow.¶
Relation: For each reason, one or more relations describe the cause why the action was chosen. These relations can refer to a network plane entity, a packet, a control-plane instruction, or a node-administered instruction.¶
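The scheme can be pictured as a small, enumerable label attached to an observation. A minimal sketch, assuming a Python dataclass with illustrative field names (this is not a published data model):¶

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Symptom:
    """One observed network symptom, following the action/reason/relation scheme.

    Field names are illustrative assumptions; this is not a published data model.
    """
    plane: str                 # "forwarding", "control" or "management"
    action: str                # e.g. "drop", "reachability", "interface"
    reason: str                # e.g. "unreachable", "withdraw", "down"
    relation: Optional[str]    # e.g. "next-hop", "peer-down", "link-layer" or None

symptom = Symptom(plane="forwarding", action="drop",
                  reason="unreachable", relation="next-hop")
print(symptom)
```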
Table 1 consolidates for the forwarding plane a list of common symptoms with their actions, reasons and relations.¶
Action | Reason | Relation
---|---|---
Missing | Previous | Time
Drop | Unreachable | Next-Hop
Drop | Unreachable | Link-Layer
Drop | Unreachable | Time To Live expired
Drop | Unreachable | Fragmentation needed and Don't Fragment set
Drop | Administered | Access-List
Drop | Administered | Unicast Reverse Path Forwarding
Drop | Administered | Discard Route
Drop | Administered | Policed
Drop | Administered | Shaped
Drop | Corrupt | Bad Packet
Drop | Corrupt | Bad Egress Interface
Delay | Min | -
Delay | Mean | -
Delay | Max | -
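As an illustration only, one row of Table 1, a drop caused by an expired Time To Live, could be encoded as a plain label; the key names and spellings are assumptions of this sketch, not a defined encoding.¶

```python
# Illustrative label for one row of Table 1 as a plain dictionary;
# the key names and spellings are assumptions of this sketch.
ttl_expired_symptom = {
    "plane": "forwarding",
    "action": "drop",
    "reason": "unreachable",
    "relation": "time-to-live-expired",
}
print(ttl_expired_symptom)
```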
Table 2 consolidates for the control plane a list of common symptoms with their actions, reasons and relations.¶
Action | Reason | Relation
---|---|---
Reachability | Update | Imported
Reachability | Update | Received
Reachability | Withdraw | Received
Reachability | Withdraw | Peer Down
Adjacency | Established | Peer
Adjacency | Established | Link-Layer
Adjacency | Torn Down | Peer
Adjacency | Torn Down | Link-Layer
Table 3 consolidates for the management plane a list of common symptoms with their actions, reasons and relations.¶
Action | Reason | Relation
---|---|---
Interface | Up | Link-Layer
Interface | Down | Link-Layer
Interface | Errors | -
Interface | Discards | -
Interface | Unknown Protocol | -
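Consolidating Tables 1 to 3 into a single lookup set makes it straightforward to check whether an exchanged label uses a known action/reason/relation combination. The following sketch transcribes the tables into tuples; the spellings and the helper function are assumptions of this illustration.¶

```python
# Minimal validation sketch: check a candidate (action, reason, relation) label
# against the combinations consolidated in Tables 1-3. The tuple spellings and
# the helper function are assumptions of this sketch.

ALLOWED_SYMPTOMS = {
    # Table 1: Forwarding Plane
    ("missing", "previous", "time"),
    ("drop", "unreachable", "next-hop"),
    ("drop", "unreachable", "link-layer"),
    ("drop", "unreachable", "time-to-live-expired"),
    ("drop", "unreachable", "fragmentation-needed-df-set"),
    ("drop", "administered", "access-list"),
    ("drop", "administered", "unicast-reverse-path-forwarding"),
    ("drop", "administered", "discard-route"),
    ("drop", "administered", "policed"),
    ("drop", "administered", "shaped"),
    ("drop", "corrupt", "bad-packet"),
    ("drop", "corrupt", "bad-egress-interface"),
    ("delay", "min", None),
    ("delay", "mean", None),
    ("delay", "max", None),
    # Table 2: Control Plane
    ("reachability", "update", "imported"),
    ("reachability", "update", "received"),
    ("reachability", "withdraw", "received"),
    ("reachability", "withdraw", "peer-down"),
    ("adjacency", "established", "peer"),
    ("adjacency", "established", "link-layer"),
    ("adjacency", "torn-down", "peer"),
    ("adjacency", "torn-down", "link-layer"),
    # Table 3: Management Plane
    ("interface", "up", "link-layer"),
    ("interface", "down", "link-layer"),
    ("interface", "errors", None),
    ("interface", "discards", None),
    ("interface", "unknown-protocol", None),
}

def is_known_symptom(action, reason, relation=None):
    """Return True if the label matches a combination from Tables 1-3."""
    return (action, reason, relation) in ALLOWED_SYMPTOMS

print(is_known_symptom("drop", "unreachable", "next-hop"))  # True
print(is_known_symptom("drop", "unreachable", "teleport"))  # False
```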
Metadata adds context to data. An example in networks is the software version of the network node from which management plane metrics are obtained, as described in [I-D.claise-opsawg-collected-data-manifest]. Semantic metadata, in turn, describes the meaning or ontology of the annotated data.¶
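The following brief sketch, with assumed field names, contrasts plain metadata (context about the collection) with semantic metadata (the meaning of the annotated data).¶

```python
# Hypothetical illustration of the distinction; all field names are assumptions.

metadata = {
    # context about how and where the data was collected
    "node": "node1",
    "software-version": "1.2.3",
}

semantic_metadata = {
    # meaning of the annotated data: what the label expresses
    "concept": "network-symptom",
    "action": "drop",
    "reason": "unreachable",
    "description": "Packet dropped because network layer reachability "
                   "information for the destination was missing.",
}
```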
The security considerations.¶
The authors would like to thank xxx for their review and valuable comments.¶