Internet-Draft | High-Precision Service Metrics | October 2021 |
Clemm, et al. | Expires 23 April 2022
This document defines a set of metrics for high-precision networking services. These metrics can be used to assess the service levels delivered for a networking flow. Specifically, they can be used to determine the degree of compliance with which service levels are being delivered relative to the service level objectives that were defined for the flow. The metrics can be used as part of flow records and/or accounting records. They can also be used to continuously monitor the quality with which high-precision networking services are being delivered.¶
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."¶
This Internet-Draft will expire on 23 April 2022.¶
Copyright (c) 2021 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.¶
Many networking applications increasingly rely on high-precision networking services that have clearly defined service level objectives (SLOs), for example with regard to end-to-end latency. Applications requiring such services include industrial networks (for example, cloud-based industrial controllers for precision machinery), vehicular applications (for example, tele-driving, in which a vehicle is remotely controlled by a human operator), and Augmented Reality / Virtual Reality (AR/VR) applications involving remote rendering of point clouds. Many of those applications are not tolerant of degrading service levels: a slight miss in SLOs does not merely result in a slight deterioration of the Quality of Experience for end users, but may render the application inoperable. At the same time, many of those applications are mission critical, and sudden failures can jeopardize safety or have other adverse consequences. Clearly, those applications represent a significant business opportunity that demands dependable technical solutions.¶
Because of this, efforts such as Deterministic Networking (DetNet) [RFC8655] are attempting to create solutions in which clear bounds on parameters such as end-to-end latency and jitter can be defined, in order to make the service levels being delivered predictable and, ideally, deterministic. However, one area that has not kept pace concerns metrics that account for the service levels with which services are actually delivered, specifically the degree of precision with which agreed-upon service level objectives are met. Such metrics, and the instrumentation to support them, are important for a number of purposes, including monitoring (to ensure that networking services are performing according to their objectives) as well as accounting (to maintain a record of the service levels actually delivered, which is important for monetization of such services as well as for triaging of problems).¶
The current state of the art of such metrics includes, for example, interface metrics, which are useful to obtain data on traffic volume and behavior that can be observed at an interface [RFC2863] [RFC8343], but which are agnostic of actual end-to-end service levels and not specific to distinct flows. Flow records [RFC7011] [RFC7012] maintain statistics about flows, including flow volume and flow duration, but again contain very little information about end-to-end service levels, let alone about whether the service levels delivered meet their targets, i.e., their associated SLOs.¶
This specification introduces a new set of metrics aimed at capturing end-to-end service levels for a flow, specifically the degree to which flows comply with the SLOs that are in effect.¶
It should be noted that, at this point, the set of metrics proposed here is a "starter set" intended to spark further discussion. Other metrics are certainly conceivable; we expect that the list of metrics will evolve over time as part of Working Group discussions.¶
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.¶
The following section proposes a set of accounting metrics focused on end-to-end latency objectives. They indicate whether any violations of end-to-end latency occurred at the packet level. These metrics are intended to be applied on a per-flow basis and to assess the degree to which a flow's end-to-end service levels comply with the SLO in effect for that flow.¶
While the focus in this document concerns end-to-end latency objectives, analogous metrics could also be defined for other end-to-end service level parameters, such as loss (which is distinct from loss occurring at any one given interface) or delay variation.¶
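By way of illustration, the following is a minimal sketch, in Python, of how per-flow accounting of latency-SLO violations might be implemented. The names (ViolationCounters, record_packet) and the structure are assumptions of this sketch, not definitions made by this document:¶

   from dataclasses import dataclass, field

   @dataclass
   class ViolationCounters:
       """Illustrative per-flow counters for latency-SLO compliance."""
       total_packets: int = 0
       violated_packets: int = 0
       violated_time_units: set = field(default_factory=set)

   def record_packet(counters, slo_latency_ms, observed_latency_ms,
                     time_unit):
       """Account for one delivered packet of a flow.

       slo_latency_ms      -- end-to-end latency SLO in effect for the flow
       observed_latency_ms -- observed end-to-end latency of this packet
       time_unit           -- index of the time unit of the observation
       """
       counters.total_packets += 1
       if observed_latency_ms > slo_latency_ms:
           counters.violated_packets += 1
           # A time unit counts as violated when at least one packet
           # observed during it violated the SLO.
           counters.violated_time_units.add(time_unit)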
The following additional set of metrics may be useful in certain scenarios as well. However, their precise definition may be subject to policy, and further discussion is needed:¶
Note that there is no definition of Severely Violated Packets. The term "severe" is used in conjunction with the occurrence of multiple violations related to multiple packets, not any one packet in isolation.¶
From these first-order metrics, second-order metrics can be defined that build on the first set. Some of these metrics are modeled after Mean Time Between Failures (MTBF) metrics; a "failure" in this context refers to a failure to deliver a packet according to its SLO.¶
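As a sketch of this idea, assuming timestamps of individual violation events are available (the function name is illustrative only), a mean time between violations could be computed as follows:¶

   def mean_time_between_violations(violation_times):
       """Mean time between successive violations, in the spirit of MTBF.

       violation_times -- chronologically ordered timestamps, in seconds,
       of packets that violated the SLO.  Returns None when fewer than
       two violations were observed.
       """
       if len(violation_times) < 2:
           return None
       gaps = [later - earlier
               for earlier, later in zip(violation_times,
                                         violation_times[1:])]
       return sum(gaps) / len(gaps)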
The same set of metrics can also be applied to significant violations, and to severe violations:¶
The next set of metrics puts the violations in relation to non-violations. It is intended to provide a measure analogous to that of availability, or, more precisely, its complement, unavailability, which is typically defined as the number of time units during which a system (or service) is unavailable, divided by the total number of time units. By analogy, a time unit that is "violated" can be viewed as one in which the service is not available with the advertised precision:¶
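By way of illustration, assuming counts of violated time units and of total time units are maintained (the names below are assumptions of this sketch), such a ratio could be computed as follows:¶

   def violated_time_unit_ratio(violated_time_units, total_time_units):
       """Fraction of time units during which the SLO was violated:
       violated time units divided by the total number of time units in
       the measurement interval, analogous to unavailability."""
       if total_time_units == 0:
           return 0.0
       return violated_time_units / total_time_units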
It should be noted that certain Service Level Agreements (SLAs) may be statistical in nature, requiring the service levels of packets in a flow to adhere to certain distributions. For example, an SLA might state that any given SLO applies only to a certain percentage of packets, allowing for a certain number of violations to take place. A "violated packet" in that case does not necessarily constitute an SLO violation. However, it is still useful to maintain those statistics, as the number of violated packets still matters when viewed in proportion to the total number of packets.¶
Along that vein, an SLA might establish an SLO for end-to-end latency of, say, not exceeding 20 ms for 99% of packets, not exceeding 25 ms for 99.999% of packets, and never exceeding 30 ms for any packet. In that case, any individual packet missing the 20 ms latency target cannot be considered an SLO violation in itself, and compliance with the SLO may need to be assessed after the fact.¶
To support statistical SLAs more directly, it is feasible to support additional metrics, such as metrics that represent histograms for service level parameters, with buckets corresponding to individual service level objectives. For the example just given, a histogram could be maintained for a given flow with four buckets: one containing the count of packets within 20 ms, a second with a count of packets between 20 and 25 ms (or simply of all packets within 25 ms), a third with a count of packets between 25 and 30 ms (or simply of all packets within 30 ms), and a fourth with a count of anything beyond (or simply a total count). Of course, the number of buckets and the boundaries between those buckets should correspond to the needs of the application and of the SLA, i.e., to the specific guarantees and SLOs that were provided. The definition of histogram metrics is for further study.¶
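As an illustration only, the following sketch shows how such a histogram might be maintained for the example above; the class name, method names, and choice of bucket boundaries are assumptions of this sketch rather than part of any specification:¶

   import bisect

   class LatencyHistogram:
       """Histogram whose bucket boundaries match the SLOs of an SLA.

       For the example SLA above, boundaries_ms = [20, 25, 30] yields
       four buckets: <= 20 ms, (20, 25] ms, (25, 30] ms, and > 30 ms.
       """

       def __init__(self, boundaries_ms):
           self.boundaries_ms = sorted(boundaries_ms)
           self.counts = [0] * (len(self.boundaries_ms) + 1)
           self.total = 0

       def record(self, latency_ms):
           # bisect_left yields the index of the first boundary that is
           # greater than or equal to the observed latency, i.e., the
           # bucket this packet falls into.
           i = bisect.bisect_left(self.boundaries_ms, latency_ms)
           self.counts[i] += 1
           self.total += 1

       def fraction_within(self, boundary_ms):
           """Fraction of packets at or below the given boundary, e.g.,
           to assess after the fact whether 99% of packets stayed
           within 20 ms."""
           i = self.boundaries_ms.index(boundary_ms)
           if self.total == 0:
               return 1.0
           return sum(self.counts[:i + 1]) / self.total

Assessing compliance with the statistical SLO of the example then reduces to comparing, for instance, fraction_within(20) against the required 99%.¶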
The following is a list of items for which further discussion is needed as to whether they should be included in the scope of this specification:¶
TBD¶
Instrumentation for metrics that are used to assess compliance with SLOs constitutes an interesting target for an attacker. By interfering with the maintenance of such metrics, services could be falsely identified as being in compliance (when they are not) or, vice versa, flagged as being non-compliant (when in fact they are compliant). While this document does not specify how networks should be instrumented to maintain the identified metrics, such instrumentation needs to be properly secured to ensure accurate measurements and to prevent tampering with the metrics being kept.¶
Where metrics are defined relative to an SLO, the configuration of those SLOs needs to be properly secured. Likewise, where SLOs can be adjusted, it needs to be clear which particular SLO any given metrics instance refers to. The same service levels that constitute SLO violations for one flow, and that should be accounted for in the "violated time units", "violated packets", and related metrics, may be perfectly compliant for another flow. Where it is not possible to properly tie together SLOs and violation metrics, it will be preferable to merely maintain statistics about the service levels that were delivered (for example, overall histograms of end-to-end latency), without assessing which of these constitute violations.¶
By the same token, where the definition of what constitutes a "severe" or a "significant" violation depends on policy or context, the configuration of such policy or context needs to be specially secured, and the policy configuration needs to be bound to the metrics being maintained so that it is clear which policy was in effect when those metrics were assessed. An attacker who is able to tamper with such policies will render the corresponding metrics useless (in the best case) or misleading (in the worst case).¶