This document describes the principles of traffic engineering (TE) in the Internet. The document is intended to promote better understanding of the issues surrounding traffic engineering in IP networks and the networks that support IP networking, and to provide a common basis for the development of traffic engineering capabilities for the Internet. The principles, architectures, and methodologies for performance evaluation and performance optimization of operational networks are also discussed.¶
This work was first published as RFC 3272 in May 2002. This document obsoletes RFC 3272 by making a complete update to bring the text in line with best current practices for Internet traffic engineering and to include references to the latest relevant work in the IETF.¶
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."¶
This Internet-Draft will expire on 5 February 2024.¶
Copyright (c) 2023 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.¶
This document describes the principles of Internet traffic engineering (TE). The objective of the document is to articulate the general issues and principles for Internet TE, and where appropriate to provide recommendations, guidelines, and options for the development of preplanned (offline) and dynamic (online) Internet TE capabilities and support systems.¶
Even though Internet TE is most effective when applied end-to-end, the focus of this document is TE within a given domain (such as an autonomous system). However, because a preponderance of Internet traffic tends to originate in one autonomous system and terminate in another, this document also provides an overview of aspects pertaining to inter-domain TE.¶
This document provides a terminology and taxonomy for describing and understanding common Internet TE concepts.¶
This work was first published as [RFC3272] in May 2002. This document obsoletes [RFC3272] by making a complete update to bring the text in line with best current practices for Internet TE and to include references to the latest relevant work in the IETF. It is worth noting that around three-fifths of the RFCs referenced in this document post-date the publication of RFC 3272. Appendix A provides a summary of changes between RFC 3272 and this document.¶
One of the most significant functions performed in the Internet is the routing and forwarding of traffic from ingress nodes to egress nodes. Therefore, one of the most distinctive functions performed by Internet traffic engineering is the control and optimization of these routing and forwarding functions, to steer traffic through the network.¶
Internet traffic engineering is defined as that aspect of Internet network engineering dealing with the issues of performance evaluation and performance optimization of operational IP networks. Traffic engineering encompasses the application of technology and scientific principles to the measurement, characterization, modeling, and control of Internet traffic [RFC2702], [AWD2].¶
It is the performance of the network as seen by end users of network services that is paramount. The characteristics visible to end users are the emergent properties of the network, which are the characteristics of the network when viewed as a whole. A central goal of the service provider, therefore, is to enhance the emergent properties of the network while taking economic considerations into account. This is accomplished by addressing traffic-oriented performance requirements while utilizing network resources without excessive waste and in a reliable way. Traffic-oriented performance measures include delay, delay variation, packet loss, and throughput.¶
Internet TE responds to network events (such as link or node failures, reported or predicted network congestion, planned maintenance, service degradation, planned changes in the traffic matrix, etc.). Aspects of capacity management respond at intervals ranging from days to years. Routing control functions operate at intervals ranging from milliseconds to days. Packet level processing functions operate at very fine levels of temporal resolution (up to milliseconds) while reacting to statistical measures of the real-time behavior of traffic.¶
Thus, the optimization aspects of TE can be viewed from a control perspective, and can be both proactive and reactive. In the proactive case, the TE control system takes preventive action to protect against predicted unfavorable future network states, for example, by engineering backup paths. It may also take action that will lead to a more desirable future network state. In the reactive case, the control system responds to correct issues and adapt to network events, such as routing after failure.¶
Another important objective of Internet TE is to facilitate reliable network operations [RFC2702]. Reliable network operations can be facilitated by providing mechanisms that enhance network integrity and by embracing policies emphasizing network survivability. This reduces the vulnerability of services to outages arising from errors, faults, and failures occurring within the network infrastructure.¶
The optimization aspects of TE can be achieved through capacity management and traffic management. In this document, capacity management includes capacity planning, routing control, and resource management. Network resources of particular interest include link bandwidth, buffer space, and computational resources. In this document, traffic management includes:¶
One major challenge of Internet TE is the realization of automated control capabilities that adapt quickly and cost effectively to significant changes in network state, while still maintaining stability of the network. Performance evaluation can assess the effectiveness of TE methods, and the results of this evaluation can be used to identify existing problems, guide network re-optimization, and aid in the prediction of potential future problems. However, this process can also be time consuming and may not be suitable to act on short-lived changes in the network.¶
Performance evaluation can be achieved in many different ways. The most notable techniques include analytic methods, simulation, and empirical methods based on measurements.¶
Traffic engineering comes in two flavors:¶
In the latter case, any deviation from the optimum distribution (e.g., caused by a fiber cut) is reverted upon repair without further optimization. However, this form of TE relies upon the notion that the planned state of the network is optimal. Hence, in such a mode there are two levels of TE: the TE-planning task to enable optimum traffic distribution, and the routing and forwarding tasks that keep traffic flows attached to the pre-planned distribution.¶
As a general rule, TE concepts and mechanisms must be sufficiently specific and well-defined to address known requirements, but simultaneously flexible and extensible to accommodate unforeseen future demands (see Section 6.1).¶
As mentioned in Section 1.1, Internet traffic engineering provides performance optimization of IP networks while utilizing network resources economically and reliably. Such optimization is supported at the control/controller level and within the data/forwarding plane.¶
The key elements required in any TE solution are as follows:¶
Some TE solutions rely on these elements to a greater or lesser extent. Debate remains about whether a solution can truly be called TE if it does not include all of these elements. For the purposes of this document, we assert that all TE solutions must include some aspects of all of these elements. Other solutions can be classed as "partial TE" and also fall within the scope of this document.¶
Policy allows for the selection of paths (including next hops) based on information beyond basic reachability. Early definitions of routing policy, e.g., [RFC1102] and [RFC1104], discuss routing policy being applied to restrict access to network resources at an aggregate level. BGP is an example of a commonly used mechanism for applying such policies, see [RFC4271] and [RFC8955]. In the TE context, policy decisions are made within the control plane or by controllers in the management plane, and govern the selection of paths. Examples can be found in [RFC4655] and [RFC5394]. TE solutions may cover the mechanisms to distribute and/or enforce policies, but definition of specific policies is left to the network operator.¶
Path steering is the ability to forward packets using more information than just knowledge of the next hop. Examples of path steering include IPv4 source routes [RFC0791], RSVP-TE explicit routes [RFC3209], Segment Routing [RFC8402], and Service Function Chaining [RFC7665]. Path steering for TE can be supported via control plane protocols, by encoding in the data plane headers, or by a combination of the two. This includes the case where control is provided by a controller using a network-facing control protocol.¶
Resource management provides resource-aware control and forwarding. Examples of resources are bandwidth, buffers, and queues, all of which can be managed to control loss and latency.¶
The scope of this document is intra-domain TE because this is the practical level of TE technology that exists in the Internet at the time of writing. That is, it describes TE within a given autonomous system in the Internet. This document discusses concepts pertaining to intra-domain traffic control, including such issues as routing control, micro and macro resource allocation, and the control coordination problems that arise consequently.¶
This document describes and characterizes techniques already in use or in advanced development for Internet TE. The way these techniques fit together is discussed, and scenarios in which they are useful are identified.¶
Although the emphasis in this document is on intra-domain traffic engineering, an overview of the high-level considerations pertaining to inter-domain TE is provided in Section 7. Inter-domain Internet TE is crucial to the performance enhancement of the world-wide Internet infrastructure.¶
Whenever possible, relevant requirements from existing IETF documents and other sources are incorporated by reference.¶
This section provides terminology which is useful for Internet TE. The definitions presented apply to this document. These terms may have other meanings elsewhere.¶
The Internet aims to convey IP packets from ingress nodes to egress nodes efficiently, expeditiously, and economically. Furthermore, in a multiclass service environment (e.g., Diffserv capable networks - see Section 5.1.1.2), the resource sharing parameters of the network must be appropriately determined and configured according to prevailing policies and service models to resolve resource contention issues arising from mutual interference between packets traversing the network. Thus, consideration must be given to resolving competition for network resources between traffic flows belonging to the same service class (intra-class contention resolution) and traffic flows belonging to different classes (inter-class contention resolution).¶
The context of Internet traffic engineering includes the following sub-contexts:¶
The context of Internet TE and the different problem scenarios are discussed in the following subsections.¶
IP networks range in size from small clusters of routers situated within a given location, to thousands of interconnected routers, switches, and other components distributed all over the world.¶
At the most basic level of abstraction, an IP network can be represented as a distributed dynamic system consisting of:¶
The network elements and resources may have specific characteristics restricting the manner in which the traffic demand is handled. Additionally, network resources may be equipped with traffic control mechanisms managing the way in which the demand is serviced. Traffic control mechanisms may be used to:¶
A configuration management and provisioning system may allow the settings of the traffic control mechanisms to be manipulated by external or internal entities in order to exercise control over the way in which the network elements respond to internal and external stimuli.¶
The details of how the network carries packets are specified in the policies of the network administrators and are installed through network configuration management and policy-based provisioning systems. Generally, the types of service provided by the network also depend upon the technology and characteristics of the network elements and protocols, the prevailing service and utility models, and the ability of the network administrators to translate policies into network configurations.¶
Internet networks have two significant characteristics:¶
The dynamic characteristics of IP and IP/MPLS networks can be attributed in part to fluctuations in demand, to the interaction between various network protocols and processes, to the rapid evolution of the infrastructure which demands the constant inclusion of new technologies and new network elements, and to transient and persistent faults which occur within the system.¶
Packets contend for the use of network resources as they are conveyed through the network. A network resource is considered to be congested if, for an interval of time, the arrival rate of packets exceeds the output capacity of the resource. Network congestion may result in some of the arriving packets being delayed or even dropped.¶
Network congestion increases transit delay and delay variation, may lead to packet loss, and reduces the predictability of network services. While congestion may be a useful tool at ingress edge nodes, congestion within the network is highly undesirable. Combating network congestion at a reasonable cost is a major objective of Internet TE, although this objective may need to be traded off against others to keep costs reasonable.¶
Efficient sharing of network resources by multiple traffic flows is a basic operational premise for the Internet. A fundamental challenge in network operation is to increase resource utilization while minimizing the possibility of congestion.¶
The Internet has to function in the presence of different classes of traffic with different service requirements. This requirement is clarified in the architecture for Differentiated Services (Diffserv) [RFC2475]. That document describes how packets can be grouped into behavior aggregates such that each aggregate has a common set of behavioral characteristics or a common set of delivery requirements. Delivery requirements of a specific set of packets may be specified explicitly or implicitly. Two of the most important traffic delivery requirements are:¶
There are several problems associated with operating a network described in the previous section. This section analyzes the problem context in relation to TE. The identification, abstraction, representation, and measurement of network features relevant to TE are significant issues.¶
A particular challenge is to formulate the problems that traffic engineering attempts to solve. For example:¶
Another class of problems is how to measure and estimate relevant network state parameters. Effective TE relies on a good estimate of the offered traffic load as well as a view of the underlying topology and associated resource constraints. Offline planning requires a full view of the topology of the network or partial network that is being planned.¶
Still another class of problem is how to characterize the state of the network and how to evaluate its performance. The performance evaluation problem is two-fold: one aspect relates to the evaluation of the system-level performance of the network; the other aspect relates to the evaluation of resource-level performance, which restricts attention to the performance analysis of individual network resources.¶
In this document, we refer to the system-level characteristics of the network as the "macro-states" and the resource-level characteristics as the "micro-states." The system-level characteristics are also known as the emergent properties of the network. Correspondingly, we refer to the TE schemes dealing with network performance optimization at the systems level as "macro-TE" and the schemes that optimize at the individual resource level as "micro-TE." Under certain circumstances, the system-level performance can be derived from the resource-level performance using appropriate rules of composition, depending upon the particular performance measures of interest.¶
Another fundamental class of problem concerns how to effectively optimize network performance. Performance optimization may entail translating solutions for specific TE problems into network configurations. Optimization may also entail some degree of resource management control, routing control, and capacity augmentation.¶
Network congestion is one of the most significant problems in an operational IP context. A network element is said to be congested if it experiences sustained overload over an interval of time. Although congestion at the edge of the network may be beneficial in ensuring that the network delivers as much traffic as possible, network congestion almost always results in degradation of service quality to end users. Congestion control schemes can include demand-side policies and supply-side policies. Demand-side policies may restrict access to congested resources or dynamically regulate the demand to alleviate the overload situation. Supply-side policies may expand or augment network capacity to better accommodate offered traffic. Supply-side policies may also re-allocate network resources by redistributing traffic over the infrastructure. Traffic redistribution and resource re-allocation serve to increase the 'effective capacity' of the network.¶
The emphasis of this document is primarily on congestion management schemes falling within the scope of the network, rather than on congestion management systems dependent upon sensitivity and adaptivity from end-systems. That is, the aspects that are considered in this document with respect to congestion management are those solutions that can be provided by control entities operating on the network and by the actions of network administrators and network operations systems.¶
The solution context for Internet TE involves analysis, evaluation of alternatives, and choice between alternative courses of action. Generally, the solution context is based on making inferences about the current or future state of the network, and making decisions that may involve a preference between alternative sets of action. More specifically, the solution context demands reasonable estimates of traffic workload, characterization of network state, derivation of solutions which may be implicitly or explicitly formulated, and possibly instantiating a set of control actions. Control actions may involve the manipulation of parameters associated with routing, control over tactical capacity acquisition, and control over the traffic management functions.¶
The following list of instruments may be applicable to the solution context of Internet TE.¶
Determining traffic characteristics through measurement or estimation is very useful within the realm of the TE solution space. Traffic estimates can be derived from customer subscription information, traffic projections, traffic models, and from actual measurements. The measurements may be performed at different levels, e.g., at the traffic-aggregate level or at the flow level. Measurements at the flow level or on small traffic aggregates may be performed at edge nodes, when traffic enters and leaves the network. Measurements for large traffic-aggregates may be performed within the core of the network.¶
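As a simple illustration of how flow-level measurements taken at edge nodes might be aggregated into a traffic matrix, the following Python sketch sums per-flow rate estimates into ingress/egress aggregates. The node names, rates, and the structure of the flow records are assumptions made purely for this example.¶

   # Illustrative only: aggregate edge measurements into a traffic matrix.
   from collections import defaultdict

   # Each record: (ingress node, egress node, measured average rate in Mbit/s).
   flow_records = [
       ("PE1", "PE3", 120.0),
       ("PE1", "PE3", 80.0),
       ("PE1", "PE4", 40.0),
       ("PE2", "PE3", 200.0),
   ]

   def build_traffic_matrix(records):
       """Sum flow-level measurements into (ingress, egress) aggregates."""
       matrix = defaultdict(float)
       for ingress, egress, rate in records:
           matrix[(ingress, egress)] += rate
       return dict(matrix)

   for (src, dst), rate in sorted(build_traffic_matrix(flow_records).items()):
       print(f"{src} -> {dst}: {rate:.1f} Mbit/s")

In practice, the same aggregation may be performed per service class or per time interval, and the resulting matrix can feed the routing analysis and optimization steps described below.¶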
To conduct performance studies and to support planning of existing and future networks, a routing analysis may be performed to determine the paths the routing protocols will choose for various traffic demands, and to ascertain the utilization of network resources as traffic is routed through the network. Routing analysis captures the selection of paths through the network, the assignment of traffic across multiple feasible routes, and the multiplexing of IP traffic over traffic trunks (if such constructs exist) and over the underlying network infrastructure. A model of network topology is necessary to perform routing analysis. A network topology model may be extracted from:¶
Topology information may also be derived from servers that monitor network state, and from servers that perform provisioning functions.¶
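The following Python sketch illustrates, in very simplified form, what such a routing analysis might compute: shortest paths over a topology model, and the per-link utilization that results from routing a traffic matrix over those paths. The topology, metrics, capacities, and demands are hypothetical, and load in both directions of a link is summed onto a single entry purely to keep the example short.¶

   # Illustrative routing analysis: IGP shortest paths plus link utilization.
   import heapq

   # link: (a, b) -> (IGP metric, capacity in Mbit/s); links are bidirectional.
   links = {
       ("A", "B"): (10, 1000.0),
       ("B", "C"): (10, 1000.0),
       ("A", "C"): (30, 400.0),
   }
   demands = {("A", "C"): 300.0, ("C", "A"): 150.0}   # traffic matrix, Mbit/s

   adjacency = {}
   for (a, b), (metric, _capacity) in links.items():
       adjacency.setdefault(a, []).append((b, metric))
       adjacency.setdefault(b, []).append((a, metric))

   def shortest_path(src, dst):
       """Plain Dijkstra over the IGP metric; returns the node sequence."""
       queue, seen = [(0, src, [src])], set()
       while queue:
           cost, node, path = heapq.heappop(queue)
           if node == dst:
               return path
           if node in seen:
               continue
           seen.add(node)
           for neighbor, metric in adjacency.get(node, []):
               if neighbor not in seen:
                   heapq.heappush(queue, (cost + metric, neighbor, path + [neighbor]))
       return None

   def link_key(a, b):
       return (a, b) if (a, b) in links else (b, a)

   load = {key: 0.0 for key in links}
   for (src, dst), volume in demands.items():
       path = shortest_path(src, dst)
       for a, b in zip(path, path[1:]):
           load[link_key(a, b)] += volume

   for key, used in load.items():
       capacity = links[key][1]
       print(f"link {key}: {used:.0f}/{capacity:.0f} Mbit/s "
             f"({100 * used / capacity:.0f}% utilized)")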
Routing in operational IP networks can be administratively controlled at various levels of abstraction including the manipulation of BGP attributes and IGP metrics. For path-oriented technologies such as MPLS, routing can be further controlled by the manipulation of relevant TE parameters, resource parameters, and administrative policy constraints. Within the context of MPLS, the path of an explicitly routed label switched path (LSP) can be computed and established in various ways including:¶
Minimizing congestion is a significant aspect of Internet traffic engineering. This subsection gives an overview of the general approaches that have been used or proposed to combat congestion.¶
Congestion management policies can be categorized based upon the following criteria (see [YARE95] for a more detailed taxonomy of congestion control schemes):¶
Congestion Management Based on Response Timescales¶
Medium (minutes to days): Several control policies fall within the medium timescale category. Examples include:¶
When these schemes are adaptive, they rely on measurement systems. A measurement system monitors changes in traffic distribution, traffic loads, and network resource utilization and then provides feedback to the online or offline TE mechanisms and tools so that they can trigger control actions within the network. The TE mechanisms and tools can be implemented in a distributed or centralized fashion. A centralized scheme may have full visibility into the network state and may produce solutions that are closer to optimal. However, centralized schemes are prone to single points of failure and may not scale as well as distributed schemes. Moreover, the information utilized by a centralized scheme may be stale and might not reflect the actual state of the network. It is not an objective of this document to make a recommendation between distributed and centralized schemes: that is a choice that network administrators must make based on their specific needs.¶
Reactive Versus Preventive Congestion Management Schemes¶
Supply-Side Versus Demand-Side Congestion Management Schemes¶
The operational context of Internet TE is characterized by constant changes that occur at multiple levels of abstraction. The implementation context demands effective planning, organization, and execution. The planning aspects may involve determining prior sets of actions to achieve desired objectives. Organizing involves arranging and assigning responsibility to the various components of the TE system and coordinating the activities to accomplish the desired TE objectives. Execution involves measuring and applying corrective or perfective actions to attain and maintain desired TE goals.¶
This section describes a generic process model that captures the high-level practical aspects of Internet traffic engineering in an operational context. The process model is described as a sequence of actions that must be carried out to optimize the performance of an operational network (see also [RFC2702], [AWD2]). This process model may be enacted explicitly or implicitly, by a software process or by a human.¶
The TE process model is iterative [AWD2]. The four phases of the process model described below are repeated as a continual sequence.¶
The key components of the traffic engineering process model are as follows.¶
This section presents a short taxonomy of traffic engineering systems constructed based on TE styles and views as listed below and described in greater detail in the following subsections of this document.¶
Traffic engineering methodologies can be classified as time-dependent, state-dependent, or event-dependent. All TE schemes are considered to be dynamic in this document. Static TE implies that no TE methodology or algorithm is being applied - it is a feature of network planning, but lacks the reactive and flexible nature of TE.¶
In time-dependent TE, historical information based on periodic variations in traffic (such as time of day) is used to pre-program routing and other TE control mechanisms. Additionally, customer subscription or traffic projection may be used. Pre-programmed routing plans typically change on a relatively long time scale (e.g., daily). Time-dependent algorithms do not attempt to adapt to short-term variations in traffic or changing network conditions. An example of a time-dependent algorithm is a centralized optimizer where the input to the system is a traffic matrix and multi-class QoS requirements as described in [MR99]. Another example of such a methodology is the application of data mining to Internet traffic [AJ19], which enables the use of various machine learning algorithms to identify patterns within historically collected datasets about Internet traffic, extract information to guide decision-making, and improve the efficiency and productivity of operational processes.¶
State-dependent TE adapts the routing plans based on the current state of the network, which provides additional information on variations in actual traffic (i.e., perturbations from regular variations) that could not be predicted using historical information. Constraint-based routing is an example of state-dependent TE operating on a relatively long timescale. An example of operation on a relatively short timescale is the load-balancing algorithm described in [MATE]. The state of the network can be based on parameters flooded by the routers. Another approach is for a particular router performing adaptive TE to send probe packets along a path to gather the state of that path. [RFC6374] defines protocol extensions to collect performance measurements from MPLS networks. Another approach is for a management system to gather the relevant information directly from network elements using telemetry data collection "publication/subscription" techniques [RFC7923]. Timely gathering and distribution of state information is critical for adaptive TE. While time-dependent algorithms are suitable for predictable traffic variations, state-dependent algorithms may be needed to increase network efficiency and to provide resilience to adapt to changes in network state.¶
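As a minimal sketch of the state-dependent idea (loosely inspired by, but not an implementation of, the approach in [MATE]), the following Python fragment recomputes traffic split ratios across parallel paths in proportion to their measured available bandwidth. The path names and measurements are hypothetical.¶

   def compute_split_ratios(available_bw):
       """available_bw: mapping of path name to measured available bandwidth."""
       total = sum(available_bw.values())
       if total == 0:
           # Nothing is available; fall back to an equal split.
           return {path: 1.0 / len(available_bw) for path in available_bw}
       return {path: bw / total for path, bw in available_bw.items()}

   # State gathered (by probing or flooding) at two different times.
   print(compute_split_ratios({"path-north": 600.0, "path-south": 200.0}))
   print(compute_split_ratios({"path-north": 100.0, "path-south": 700.0}))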
Event-dependent TE methods can also be used for TE path selection. Event-dependent TE methods are distinct from time-dependent and state-dependent TE methods in the manner in which paths are selected. These algorithms are adaptive and distributed in nature, and typically use learning models to find good paths for TE in a network. While state-dependent TE models typically use available-link-bandwidth (ALB) [E.360.1] flooding for TE path selection, event-dependent TE methods do not require ALB flooding. Rather, event-dependent TE methods typically search out capacity by learning models, as in the success-to-the-top (STT) method [RFC6601]. ALB flooding can be resource intensive, since it requires link bandwidth to carry routing protocol link state advertisements, processor capacity to process those advertisements, and the overhead of the advertisements and their processing can limit area/Autonomous System (AS) size. Modeling results suggest that event-dependent TE methods could lead to a reduction in ALB flooding overhead without loss of network throughput performance [I-D.ietf-tewg-qos-routing].¶
A fully functional TE system is likely to use all aspects of time-dependent, state-dependent, and event-dependent methodologies as described in Section 4.3.1.¶
Traffic engineering requires the computation of routing plans. The computation may be performed offline or online. The computation can be done offline for scenarios where routing plans need not be executed in real time. For example, routing plans computed from forecast information may be computed offline. Typically, offline computation is also used to perform extensive searches on multi-dimensional solution spaces.¶
Online computation is required when the routing plans must adapt to changing network conditions as in state-dependent algorithms. Unlike offline computation (which can be computationally demanding), online computation is geared toward relatively simple and fast calculations to select routes, fine-tune the allocations of resources, and perform load balancing.¶
Under centralized control there is a central authority which determines routing plans and perhaps other TE control parameters on behalf of each router. The central authority periodically collects network-state information from all routers, and sends routing information to the routers. The update cycle for information exchange in both directions is a critical parameter directly impacting the performance of the network being controlled. Centralized control may need high processing power and high bandwidth control channels.¶
Under distributed control, route selection is determined by each router autonomously, based on the router's view of the state of the network. The network state information may be obtained by the router using a probing method or distributed by other routers on a periodic basis using link state advertisements. Network state information may also be disseminated under exception conditions. Examples of protocol extensions used to advertise network link state information are defined in [RFC5305], [RFC6119], [RFC7471], [RFC8570], and [RFC8571]. See also Section 5.1.3.9.¶
In practice, most TE systems will be a hybrid of central and distributed control. For example, a popular MPLS approach to TE is to use a central controller based on an active, stateful Path Computation Element (PCE), but to use routing and signaling protocols to make local decisions at routers within the network. Local decisions may be able to respond more quickly to network events, but may result in conflicts with decisions made by other routers.¶
Network operations for TE systems may also use a hybrid of offline and online computation. TE paths may be precomputed based on stable-state network information and planned traffic demands, but may then be modified in the active network depending on variations in network state and traffic load. Furthermore, responses to network events may be precomputed offline to allow rapid reactions without further computation, or may be derived online depending on the nature of the events.¶
Lastly, note that a fully functional TE system is likely to use all aspects of time-dependent, state-dependent, and event-dependent methodologies as described in Section 4.1.¶
As discussed in Section 5.1.2.2, one of the main drivers for SDN is a decoupling of the network control plane from the data plane [RFC7149]. However, SDN may also combine centralized control of resources, and facilitate application-to-network interaction via an application programming interface (API) such as [RFC8040]. Combining these features provides a flexible network architecture that can adapt to network requirements of a variety of higher-layer applications, a concept often referred to as the "programmable network" [RFC7426].¶
The centralized control aspect of SDN helps improve network resource utilization compared with distributed network control, where local policy may often override network-wide optimization goals. In an SDN environment, the data plane forwards traffic to its desired destination. However, before traffic reaches the data plane, the logically centralized SDN control plane often determines the path the application traffic will take in the network. Therefore, the SDN control plane needs to be aware of the underlying network topology, capabilities and current node and link resource state.¶
Using a PCE-based SDN control framework [RFC7491], the available network topology may be discovered by running a passive instance of OSPF or IS-IS, or via BGP-LS [RFC7752], to generate a Traffic Engineering Database (TED, see Section 5.1.3.14). The PCE is used to compute a path (see Section 5.1.3.11) based on the TED and available bandwidth, and further path optimization may be based on requested objective functions [RFC5541]. When a suitable path has been computed, the programming of the explicit network path may be performed using either a signaling protocol that traverses the length of the path [RFC3209] or per-hop with each node being directly programmed [RFC8283] by the SDN controller.¶
By utilizing a centralized approach to network control, additional network benefits are also available, including Global Concurrent Optimization (GCO) [RFC5557]. A GCO path computation request will simultaneously use the network topology and a set of new path signaling requests, along with their respective constraints, for optimal placement in the network. Correspondingly, a GCO-based computation may be applied to recompute existing network paths to groom traffic and to mitigate congestion.¶
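The following Python sketch gives a deliberately simplified flavor of a concurrent computation of this kind; it is not the algorithm of [RFC5557]. A set of path requests is placed together, largest first, each on the candidate path that keeps the maximum link utilization lowest. The topology, candidate paths, and bandwidth figures are hypothetical.¶

   # Illustrative concurrent placement of several path requests.
   capacity = {("A", "B"): 10.0, ("B", "D"): 10.0, ("A", "C"): 10.0, ("C", "D"): 10.0}
   requests = [
       # (name, bandwidth, candidate paths expressed as lists of links)
       ("lsp-1", 6.0, [[("A", "B"), ("B", "D")], [("A", "C"), ("C", "D")]]),
       ("lsp-2", 5.0, [[("A", "B"), ("B", "D")], [("A", "C"), ("C", "D")]]),
       ("lsp-3", 4.0, [[("A", "B"), ("B", "D")], [("A", "C"), ("C", "D")]]),
   ]

   def place_concurrently(requests, capacity):
       load = {link: 0.0 for link in capacity}
       placement = {}
       # Placing the largest requests first tends to leave more room overall.
       for name, bandwidth, candidates in sorted(requests, key=lambda r: -r[1]):
           best_path, best_util = None, None
           for path in candidates:
               util = max((load[link] + bandwidth) / capacity[link] for link in path)
               if util <= 1.0 and (best_util is None or util < best_util):
                   best_path, best_util = path, util
           if best_path is None:
               placement[name] = None        # request could not be placed
               continue
           for link in best_path:
               load[link] += bandwidth
           placement[name] = best_path
       return placement, load

   placement, load = place_concurrently(requests, capacity)
   print(placement)
   print(load)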
Traffic engineering algorithms may require local and global network-state information.¶
Local information is the state of a portion of the domain. Examples include the bandwidth and packet loss rate of a particular path, or the state and capabilities of a network link. Local state information may be sufficient for certain instances of distributed control TE.¶
Global information is the state of the entire TE domain. Examples include a global traffic matrix, and loading information on each link throughout the domain of interest. Global state information is typically required with centralized control. Distributed TE systems may also need global information in some cases.¶
TE systems may also be classified as prescriptive or descriptive.¶
Prescriptive traffic engineering evaluates alternatives and recommends a course of action. Prescriptive TE can be further categorized as either corrective or perfective. Corrective TE prescribes a course of action to address an existing or predicted anomaly. Perfective TE prescribes a course of action to evolve and improve network performance even when no anomalies are evident.¶
Descriptive traffic engineering, on the other hand, characterizes the state of the network and assesses the impact of various policies without recommending any particular course of action.¶
One way to express a service request is through "intent". Intent-Based Networking aims to produce networks that are simpler to manage and operate, requiring only minimal intervention. Intent is defined in [RFC9315] as a set of operational goals (that a network should meet) and outcomes (that a network is supposed to deliver), defined in a declarative manner without specifying how to achieve or implement them.¶
Intent provides data and functional abstraction so that users and operators do not need to be concerned with low-level device configuration or the mechanisms used to achieve a given intent. This approach can be conceptually easier for a user, but may be less expressive in terms of constraints and guidelines.¶
Intent-Based Networking is applicable to TE because many of the high-level objectives may be expressed as "intent": for example, load balancing, delivery of services, and robustness against failures. The intent is converted by the management system into TE actions within the network.¶
Open-loop traffic engineering control is where control action does not use feedback information from the current network state. However, the control action may use its own local information for accounting purposes.¶
Closed-loop traffic engineering control is where control action utilizes feedback information from the network state. The feedback information may be in the form of current measurement or recent historical records.¶
Tactical traffic engineering aims to address specific performance problems (such as hot-spots) that occur in the network from a tactical perspective, without consideration of overall strategic imperatives. Without proper planning and insights, tactical TE tends to be ad hoc in nature.¶
Strategic traffic engineering approaches the TE problem from a more organized and systematic perspective, taking into consideration the immediate and longer-term consequences of specific policies and actions.¶
This section briefly reviews different TE-related approaches proposed and implemented in telecommunications and computer networks using IETF protocols and architectures. These approaches are organized into three categories:¶
The discussion is not intended to be comprehensive. It is primarily intended to illuminate existing approaches to TE in the Internet. A historic overview of TE in telecommunications networks was provided in Section 4 of [RFC3272], and Section 4.6 of that document presented an outline of some early approaches to TE conducted in other standards bodies. It is out of the scope of this document to provide an analysis of the history of TE or an inventory of TE-related efforts conducted by other SDOs.¶
This subsection reviews a number of IETF activities pertinent to Internet traffic engineering. Some of these technologies are widely deployed, others are mature but have seen less deployment, and some are unproven or still under development.¶
The IETF developed the Integrated Services (Intserv) model that requires resources, such as bandwidth and buffers, to be reserved a priori for a given traffic flow to ensure that the quality of service requested by the traffic flow is satisfied. The Integrated Services model includes additional components beyond those used in the best-effort model such as packet classifiers, packet schedulers, and admission control. A packet classifier is used to identify flows that are to receive a certain level of service. A packet scheduler handles the scheduling of service to different packet flows to ensure that QoS commitments are met. Admission control is used to determine whether a router has the necessary resources to accept a new flow.¶
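The admission control step can be illustrated with a short Python sketch. It is not an Intserv implementation: the interface and the notion of a fixed reservable fraction of link capacity are assumptions made for the example.¶

   class Link:
       def __init__(self, capacity_mbps, reservable_fraction=0.8):
           self.capacity = capacity_mbps
           self.reservable = capacity_mbps * reservable_fraction
           self.reserved = 0.0

       def admit(self, requested_mbps):
           """Admit the flow only if the reservation still fits."""
           if self.reserved + requested_mbps <= self.reservable:
               self.reserved += requested_mbps
               return True
           return False

   link = Link(capacity_mbps=100.0)
   print(link.admit(30.0))   # True:  30 of 80 reservable Mbit/s now in use
   print(link.admit(40.0))   # True:  70 in use
   print(link.admit(20.0))   # False: would exceed the reservable limit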
The main issue with the Integrated Services model has been scalability [RFC2998], especially in large public IP networks which may potentially have millions of active traffic flows in transit concurrently. Pre-Congestion Notification (PCN) [RFC5559] solves the scaling problems of Intserv by using measurement-based admission control (and flow termination to handle failures) between edge-nodes. Nodes between the edges of the internetwork have no per-flow operations and the edge nodes can use RSVP per-flow or per-aggregate.¶
A notable feature of the Integrated Services model is that it requires explicit signaling of QoS requirements from end systems to routers [RFC2753]. The Resource Reservation Protocol (RSVP) performs this signaling function and is a critical component of the Integrated Services model. RSVP is described in Section 5.1.3.2.¶
The goal of Differentiated Services (Diffserv) within the IETF was to devise scalable mechanisms for categorization of traffic into behavior aggregates, which ultimately allows each behavior aggregate to be treated differently, especially when there is a shortage of resources such as link bandwidth and buffer space [RFC2475]. One of the primary motivations for Diffserv was to devise alternative mechanisms for service differentiation in the Internet that mitigate the scalability issues encountered with the Intserv model.¶
Diffserv uses the Differentiated Services field in the IP header (the DS field) consisting of six bits in what was formerly known as the Type of Service (TOS) octet. The DS field is used to indicate the forwarding treatment that a packet should receive at a transit node [RFC2474]. Diffserv includes the concept of Per-Hop Behavior (PHB) groups. Using the PHBs, several classes of services can be defined using different classification, policing, shaping, and scheduling rules.¶
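The DS field can be manipulated with simple bit operations, as in the following Python sketch. It assumes the standard layout in which the DSCP occupies the upper six bits of the former TOS octet and the remaining two bits are used for Explicit Congestion Notification; the Expedited Forwarding code point (46) is used as an example marking.¶

   def get_dscp(tos_octet):
       return (tos_octet >> 2) & 0x3F                      # upper 6 bits

   def set_dscp(tos_octet, dscp):
       return ((dscp & 0x3F) << 2) | (tos_octet & 0x03)    # keep the ECN bits

   EF_PHB = 0b101110   # Expedited Forwarding code point (decimal 46)

   octet = set_dscp(0x00, EF_PHB)
   print(hex(octet), get_dscp(octet))   # 0xb8 46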
For an end-user of network services to utilize Differentiated Services provided by its Internet Service Provider (ISP), it may be necessary for the user to have a Service Level Agreement (SLA) with the ISP. An SLA may explicitly or implicitly specify a Traffic Conditioning Agreement (TCA) which defines classifier rules as well as metering, marking, discarding, and shaping rules.¶
Packets are classified, and possibly policed and shaped at the ingress to a Diffserv network. When a packet traverses the boundary between different Diffserv domains, the DS field of the packet may be re-marked according to existing agreements between the domains.¶
Differentiated Services allows only a finite number of service classes to be specified by the DS field. The main advantage of the Diffserv approach relative to the Intserv model is scalability. Resources are allocated on a per-class basis and the amount of state information is proportional to the number of classes rather than to the number of application flows.¶
Once the network has been planned and the packets marked at the network edge, the Diffserv model deals with traffic management issues on a per hop basis. The Diffserv control model consists of a collection of micro-TE control mechanisms. Other TE capabilities, such as capacity management (including routing control), are also required in order to deliver acceptable service quality in Diffserv networks. The concept of Per Domain Behaviors has been introduced to better capture the notion of Differentiated Services across a complete domain [RFC3086].¶
Diffserv procedures can also be applied in an MPLS context. See Section 6.8 for more information.¶
SR Policy [RFC9256] is an evolution of Segment Routing (see Section 5.1.3.12) to enhance the TE capabilities of SR. It is a framework that enables instantiation of an ordered list of segments on a node for implementing a source routing policy with a specific intent for traffic steering from that node.¶
An SR Policy is identified through the tuple <head-end, color, endpoint>. The head-end is the IP address of the node where the policy is instantiated. The endpoint is the IP address of the destination of the policy. The color is an index that associates the SR Policy with an intent (e.g., low latency).¶
The head-end node is notified of SR Policies and associated SR paths via configuration or by extensions to protocols such as PCEP [RFC8664] or BGP [I-D.ietf-idr-segment-routing-te-policy]. Each SR path consists of a Segment-List (an SR source-routed path), and the head-end uses the endpoint and color parameters to classify packets to match the SR policy and so determine along which path to forward them. If an SR Policy is associated with a set of SR paths, each is associated with a weight for weighted load balancing. Furthermore, multiple SR Policies may be associated with a set of SR paths to allow multiple traffic flows to be placed on the same paths.¶
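A simplified data model for this can be sketched as follows in Python. The field names, SID values, and the weighted selection logic are illustrative assumptions; [RFC9256] defines the authoritative model.¶

   import random
   from dataclasses import dataclass, field

   @dataclass
   class SegmentListEntry:
       segments: list          # ordered list of SIDs (labels or IPv6 SIDs)
       weight: int = 1

   @dataclass
   class SRPolicy:
       head_end: str
       color: int
       endpoint: str
       paths: list = field(default_factory=list)   # SegmentListEntry items

       def pick_segment_list(self):
           """Weighted selection among the installed segment lists."""
           total = sum(path.weight for path in self.paths)
           choice = random.uniform(0, total)
           for path in self.paths:
               choice -= path.weight
               if choice <= 0:
                   return path.segments
           return self.paths[-1].segments

   policy = SRPolicy(
       head_end="2001:db8::1", color=100, endpoint="2001:db8::9",
       paths=[SegmentListEntry([16001, 16005, 16009], weight=3),
              SegmentListEntry([16002, 16009], weight=1)])
   print(policy.pick_segment_list())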
An SR Binding SID (BSID) may also be associated with each candidate path associated with an SR Policy, or with the SR Policy itself. The head-end node installs a BSID-keyed entry in the forwarding plane and assigns it the action of steering packets that match the entry to the selected path of the SR Policy. This steering can be done in various ways:¶
In addition to IP-based TE mechanisms, layer 4 transport-based TE approaches can be considered in specific deployment contexts (e.g., data centers, multi-homing). For example, the 3GPP defines the Access Traffic Steering, Switching, and Splitting (ATSSS) [ATSSS] service functions as follows.¶
The control plane is used to provide hosts and specific network devices with a set of policies that specify which flows are eligible to use the ATSSS service. The traffic that matches an ATSSS policy can be distributed among the available access networks according to one of the following four modes.¶
For resource management purposes, hosts and network devices rely on mechanisms such as congestion control, RTT measurement, and packet scheduling.¶
For TCP traffic, Multipath TCP [RFC8684] and the 0-RTT Convert Protocol [RFC8803] are used to provide the ATSSS service.¶
Multipath QUIC [I-D.ietf-quic-multipath] and Proxying UDP in HTTP [RFC9298] are used to provide the ATSSS service for UDP traffic. Note that QUIC [RFC9000] natively supports the switching and steering functions. Indeed, QUIC supports a connection migration procedure that allows peers to change their layer 4 transport coordinates (IP addresses, port numbers) without breaking the underlying QUIC connection.¶
Extensions to the Datagram Congestion Control Protocol (DCCP) [RFC4340] that support multipath operation (MP-DCCP) are specified in [I-D.ietf-tsvwg-multipath-dccp].¶
Deterministic Networking (DetNet) [RFC8655] is an architecture for applications with critical timing and reliability requirements. The layered architecture particularly focuses on developing DetNet service capabilities in the data plane [RFC8938]. The DetNet service sub-layer provides a set of Packet Replication, Elimination, and Ordering Functions (PREOF) to provide end-to-end service assurance. The DetNet forwarding sub-layer provides corresponding forwarding assurance (low packet loss, bounded latency, and in-order delivery) functions using resource allocations and explicit route mechanisms.¶
The separation into two sub-layers allows greater flexibility to adapt DetNet capability over a number of TE data plane mechanisms such as IP, MPLS, and Segment Routing. More importantly, it allows interconnection with IEEE 802.1 Time Sensitive Networking (TSN) [RFC9023] deployed in Industrial Control and Automation Systems (ICAS).¶
DetNet can be seen as a specialized branch of TE, since it sets up explicit optimized paths with allocation of resources as requested. A DetNet application can express its QoS attributes or traffic behavior using any combination of the DetNet functions described in the two sub-layers. These are then distributed and provisioned using well-established control and provisioning mechanisms adopted for traffic engineering.¶
In DetNet, a considerable amount of state information is required to maintain per-flow queuing disciplines and resource reservations for a large number of individual flows. This can be quite challenging for network operations during network events such as faults, changes in traffic volume, or re-provisioning. Therefore, DetNet recommends support for aggregated flows; however, it still requires a large amount of control signaling to establish and maintain DetNet flows.¶
Note that DetNet might suffer from some of the scalability concerns described for Intserv in Section 5.1.1.1, but the scope of DetNet's deployment scenarios is smaller and so less exposed to scaling issues.¶
This document describes various TE mechanisms available in the network. However, distributed applications in general and, in particular, bandwidth-greedy P2P applications that are used, for example, for file sharing, cannot directly use those techniques. As per [RFC5693], applications could greatly improve traffic distribution and quality by cooperating with external services that are aware of the network topology. Addressing the Application-Layer Traffic Optimization (ALTO) problem means, on the one hand, deploying an ALTO service to provide applications with information regarding the underlying network (e.g., basic network location structure and preferences of network paths) and, on the other hand, enhancing applications in order to use such information to perform better-than-random selection of the endpoints with which they establish connections.¶
The basic function of ALTO is based on abstract maps of a network. These maps provide a simplified view of the network, yet enough information for applications to use them effectively. Additional services are built on top of the maps. [RFC7285] describes a protocol implementing the ALTO services as an information-publishing interface that allows a network to publish its network information to network applications. This information can include network node locations, groups of node-to-node connectivity arranged by cost according to configurable granularities, and end-host properties. The information published by the ALTO Protocol should benefit both the network and the applications. The ALTO Protocol uses a RESTful design and encodes its requests and responses using JSON [RFC8259], with a modular design achieved by dividing ALTO information publication into multiple ALTO services (e.g., the Map Service, the Map-Filtering Service, the Endpoint Property Service, and the Endpoint Cost Service).¶
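The following Python sketch hints at how an application might use ALTO-style cost information to make a better-than-random selection of endpoints. The JSON document only loosely follows the style of an ALTO cost map, and the PID names, addresses, and cost values are invented for the example.¶

   import json

   cost_map_response = json.loads("""
   {
     "meta": {"cost-type": {"cost-mode": "numerical", "cost-metric": "routingcost"}},
     "cost-map": {
       "pid-home": {"pid-home": 1, "pid-near": 5, "pid-far": 20}
     }
   }
   """)

   candidate_peers = {          # peer address -> PID, as learned via ALTO
       "192.0.2.10": "pid-far",
       "192.0.2.20": "pid-near",
       "192.0.2.30": "pid-home",
   }

   def rank_peers(my_pid, peers, response):
       costs = response["cost-map"][my_pid]
       # Peers in unknown PIDs are treated as the most expensive.
       return sorted(peers, key=lambda addr: costs.get(peers[addr], float("inf")))

   print(rank_peers("pid-home", candidate_peers, cost_map_response))
   # -> ['192.0.2.30', '192.0.2.20', '192.0.2.10']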
[RFC8189] defines a new service that allows an ALTO Client to retrieve several cost metrics in a single request for an ALTO filtered cost map and endpoint cost map. [RFC8896] extends the ALTO cost information service so that applications decide not only 'where' to connect, but also 'when'. This is useful for applications that need to perform bulk data transfer and would like to schedule these transfers during an off-peak hour, for example. [I-D.ietf-alto-performance-metrics] introduces network performance metrics, including network delay, jitter, packet loss rate, hop count, and bandwidth. The ALTO server may derive and aggregate such performance metrics from BGP-LS (see Section 5.1.3.10) or IGP-TE (see Section 5.1.3.9), or management tools, and then expose the information to allow applications to determine 'where' to connect based on network performance criteria. The ALTO WG is evaluating the use of network TE properties while making application decisions for new use cases such as Edge computing and Datacenter interconnect.¶
One of the main drivers for Software Defined Networking (SDN) [RFC7149] is a decoupling of the network control plane from the data plane. This separation has been achieved for TE networks with the development of MPLS/GMPLS (see Section 5.1.3.3 and Section 5.1.3.5) and the PCE (Section 5.1.3.11). One of the advantages of SDN is its logically centralized control regime that allows a full view of the underlying networks. Centralized control in SDN helps improve network resource utilization compared with distributed network control.¶
Abstraction and Control of TE Networks (ACTN) [RFC8453] defines a hierarchical SDN architecture which describes the functional entities and methods for the coordination of resources across multiple domains, to provide composite traffic-engineered services. ACTN facilitates composed, multi-domain connections and provides them to the user. ACTN is focused on:¶
The ACTN managed infrastructure is built from traffic-engineered network resources, which may include statistical packet bandwidth, physical forwarding plane resources (such as wavelengths and time slots), and forwarding and cross-connect capabilities. The type of network virtualization seen in ACTN allows customers and applications (tenants) to utilize and independently control allocated virtual network resources as if they were physically their own resource. The ACTN network is "sliced", with tenants being given a different partial and abstracted topology view of the physical underlying network.¶
An IETF Network Slice is a logical network topology connecting a number of endpoints using a set of shared or dedicated network resources [I-D.ietf-teas-ietf-network-slices]. The resources are used to satisfy specific Service Level Objectives (SLOs) specified by the consumer.¶
IETF network slices are not, of themselves, TE constructs. However, a network operator that offers IETF network slices is likely to use many TE tools in order to manage their network and provide the services.¶
IETF Network Slices are defined such that they are independent of the underlying infrastructure connectivity and technologies used. From a customer's perspective, an IETF Network Slice looks like a VPN connectivity matrix with additional information about the level of service that the customer requires between the endpoints. From an operator's perspective, the IETF Network Slice looks like a set of routing or tunneling instructions with the network resource reservations necessary to provide the required service levels as specified by the SLOs. The concept of an IETF network slice is consistent with an enhanced VPN (VPN+) [I-D.ietf-teas-enhanced-vpn].¶
Constraint-based routing refers to a class of routing systems that compute routes through a network subject to the satisfaction of a set of constraints and requirements. In the most general case, constraint-based routing may also seek to optimize overall network performance while minimizing costs.¶
The constraints and requirements may be imposed by the network itself or by administrative policies. Constraints may include bandwidth, hop count, delay, and policy instruments such as resource class attributes. Constraints may also include domain-specific attributes of certain network technologies and contexts which impose restrictions on the solution space of the routing function. Path-oriented technologies such as MPLS have made constraint-based routing feasible and attractive in public IP networks.¶
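A common way to realize constraint-based routing is to prune links that cannot satisfy the constraints and then run a shortest-path computation on the remaining topology. The following Python sketch shows that pattern for a single bandwidth constraint; the topology, metrics, and unreserved bandwidth values are hypothetical.¶

   import heapq

   # link: (a, b) -> (TE metric, unreserved bandwidth in Mbit/s)
   links = {
       ("R1", "R2"): (10, 100.0),
       ("R2", "R4"): (10, 100.0),
       ("R1", "R3"): (10, 400.0),
       ("R3", "R4"): (10, 400.0),
   }

   def cspf(src, dst, required_bw):
       adjacency = {}
       for (a, b), (metric, unreserved) in links.items():
           if unreserved >= required_bw:          # constraint: prune links
               adjacency.setdefault(a, []).append((b, metric))
               adjacency.setdefault(b, []).append((a, metric))
       queue, seen = [(0, src, [src])], set()
       while queue:
           cost, node, path = heapq.heappop(queue)
           if node == dst:
               return cost, path
           if node in seen:
               continue
           seen.add(node)
           for neighbor, metric in adjacency.get(node, []):
               if neighbor not in seen:
                   heapq.heappush(queue, (cost + metric, neighbor, path + [neighbor]))
       return None, None                          # no feasible path

   print(cspf("R1", "R4", required_bw=50.0))    # both branches are feasible
   print(cspf("R1", "R4", required_bw=200.0))   # only the R1-R3-R4 branch fits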
The concept of constraint-based routing within the context of MPLS TE requirements in IP networks was first described in [RFC2702] and led to developments such as MPLS-TE [RFC3209] as described in Section 5.1.3.3.¶
Unlike QoS-based routing (for example, see [RFC2386], [MA], and [I-D.ietf-idr-performance-routing]) which generally addresses the issue of routing individual traffic flows to satisfy prescribed flow-based QoS requirements subject to network resource availability, constraint-based routing is applicable to traffic aggregates as well as flows and may be subject to a wide variety of constraints which may include policy restrictions.¶
The traditional approach to routing in an IGP network relies on the IGPs deriving "shortest paths" over the network based solely on the IGP metric assigned to the links. Such an approach is often limited: traffic may tend to converge toward the destination, possibly causing congestion; and it is not possible to steer traffic onto paths depending on the end-to-end qualities demanded by the applications.¶
To overcome this limitation, various sorts of TE have been widely deployed (as described in this document), where the TE component is responsible for computing the path based on additional metrics and/or constraints. Such paths (or tunnels) need to be installed in the routers' forwarding tables in addition to, or as a replacement for the original paths computed by IGPs. The main drawback of these TE approaches is the additional complexity of protocols and management, and the state that may need to be maintained within the network.¶
IGP flexible algorithms (flex-algos) [RFC9350] allow IGPs to construct constraint-based paths over the network by computing constraint-based next hops. The intent of flex-algos is to reduce TE complexity by letting an IGP perform some basic TE computations. Flex-algo includes a set of extensions to the IGPs that enable a router to send TLVs that:¶
A given combination of calculation-type, metric-type, and constraints is known as a "Flexible Algorithm Definition" (or FAD). A router that sends such a set of TLVs also assigns a specific identifier (the Flexible Algorithm) to the specified combination of calculation-type, metric-type, and constraints.¶
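In outline, a Flexible Algorithm Definition can be thought of as the small record sketched below in Python. The field names and the affinity-based constraint check are simplifications made for illustration; [RFC9350] defines the actual sub-TLVs and procedures.¶

   from dataclasses import dataclass, field

   @dataclass
   class FlexAlgoDefinition:
       algo_id: int                      # flex-algo identifiers are 128-255
       calc_type: str = "spf"
       metric_type: str = "igp"          # e.g., "igp", "te", or "delay"
       exclude_affinities: set = field(default_factory=set)
       include_any_affinities: set = field(default_factory=set)

       def link_allowed(self, link_affinities):
           """Apply the affinity constraints to a single link."""
           if self.exclude_affinities & link_affinities:
               return False
           if self.include_any_affinities and not (
                   self.include_any_affinities & link_affinities):
               return False
           return True

   low_delay = FlexAlgoDefinition(algo_id=128, metric_type="delay",
                                  exclude_affinities={"expensive-transport"})
   print(low_delay.link_allowed({"green"}))                  # True
   print(low_delay.link_allowed({"expensive-transport"}))    # False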
There are two use cases for flex-algo: in IP networks [I-D.ietf-lsr-ip-flexalgo] and in Segment Routing networks [RFC9350]. In the first case, flex-algo computes paths to an IPv4 or IPv6 address; in the second case, it computes paths to a prefix SID (see Section 5.1.3.12).¶
Examples of where flex-algo can be useful include:¶
RSVP is a soft-state signaling protocol [RFC2205]. It supports receiver-initiated establishment of resource reservations for both multicast and unicast flows. RSVP was originally developed as a signaling protocol within the Integrated Services framework (see Section 5.1.1.1) for applications to communicate QoS requirements to the network and for the network to reserve relevant resources to satisfy the QoS requirements [RFC2205].¶
In RSVP, the traffic sender or source node sends a PATH message to the traffic receiver with the same source and destination addresses as the traffic which the sender will generate. The PATH message contains: (1) a sender traffic specification describing the characteristics of the traffic, (2) a sender template specifying the format of the traffic, and (3) an optional advertisement specification which is used to support the concept of One Pass With Advertising (OPWA) [RFC2205]. Every intermediate router along the path forwards the PATH message to the next hop determined by the routing protocol. Upon receiving a PATH message, the receiver responds with a RESV message which includes a flow descriptor used to request resource reservations. The RESV message travels to the sender or source node in the opposite direction along the path that the PATH message traversed. Every intermediate router along the path can reject or accept the reservation request of the RESV message. If the request is rejected, the rejecting router will send an error message to the receiver and the signaling process will terminate. If the request is accepted, link bandwidth and buffer space are allocated for the flow and the related flow state information is installed in the router.¶
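The hop-by-hop exchange described above can be summarized in a much-simplified sketch that models only bandwidth admission along the path: the PATH message records the hops it visits, and the RESV message walks them in reverse, reserving bandwidth or failing. Soft state, refreshes, message formats, and error handling are all omitted, and the bandwidth figures are illustrative.¶
   # Much-simplified model of the RSVP PATH/RESV exchange: the PATH
   # message records the hops it traverses; the RESV message then
   # walks the recorded path in reverse, reserving bandwidth per hop.
   def send_path(route):
       recorded_hops = []
       for router in route:                    # sender -> receiver direction
           recorded_hops.append(router)        # each hop forwards the PATH
       return recorded_hops

   def send_resv(recorded_hops, requested_bw):
       for router in reversed(recorded_hops):  # receiver -> sender direction
           if router["free_bw"] < requested_bw:
               return False                    # reservation rejected, error sent
           router["free_bw"] -= requested_bw   # bandwidth and flow state installed
       return True

   r1, r2, r3 = {"free_bw": 100}, {"free_bw": 40}, {"free_bw": 100}
   hops = send_path([r1, r2, r3])
   print(send_resv(hops, requested_bw=50))     # False: the middle hop rejects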
One of the issues with the original RSVP specification was scalability. This was because reservations were required for micro-flows, so that the amount of state maintained by network elements tended to increase linearly with the number of traffic flows. These issues are described in [RFC2961] which also modifies and extends RSVP to mitigate the scaling problems to make RSVP a versatile signaling protocol for the Internet. For example, RSVP has been extended to reserve resources for aggregation of flows [RFC3175], to set up MPLS explicit label switched paths (see Section 5.1.3.3), and to perform other signaling functions within the Internet. [RFC2961] also describes a mechanism to reduce the number of Refresh messages required to maintain established RSVP sessions.¶
MPLS is a forwarding scheme which also includes extensions to conventional IP control plane protocols. MPLS extends the Internet routing model and enhances packet forwarding and path control [RFC3031].¶
At the ingress to an MPLS domain, Label Switching Routers (LSRs) classify IP packets into Forwarding Equivalence Classes (FECs) based on a variety of factors, such as a combination of the information carried in the IP header of the packets and the local routing information maintained by the LSRs. An MPLS label stack entry is then prepended to each packet according to its FEC. The MPLS label stack entry is 32 bits long and contains a 20-bit label field.¶
An LSR makes forwarding decisions by using the label prepended to packets as the index into a local next hop label forwarding entry (NHLFE). The packet is then processed as specified in the NHLFE. The incoming label may be replaced by an outgoing label (label swap), and the packet may be forwarded to the next LSR. Before a packet leaves an MPLS domain, its MPLS label may be removed (label pop). A Label Switched Path (LSP) is the path between an ingress LSR and an egress LSR that a labeled packet traverses. The path of an explicit LSP is defined at the originating (ingress) node of the LSP. MPLS can use a signaling protocol such as RSVP or LDP to set up LSPs.¶
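The following sketch illustrates, in simplified form, the label stack entry and the swap/pop behavior described above: a 20-bit label is packed into a 32-bit entry (the remaining bits carry the Traffic Class, bottom-of-stack, and TTL fields), and a small table of NHLFEs drives forwarding. The labels and next hops are illustrative assumptions, not a real LSR implementation.¶
   # Illustrative MPLS label stack entry: Label (20 bits), Traffic
   # Class (3 bits), bottom-of-stack S (1 bit), and TTL (8 bits).
   def encode_entry(label, tc=0, s=1, ttl=64):
       return ((label & 0xFFFFF) << 12 | (tc & 0x7) << 9
               | (s & 0x1) << 8 | (ttl & 0xFF))

   def decode_label(entry):
       return (entry >> 12) & 0xFFFFF

   # Hypothetical next hop label forwarding entries (NHLFEs), keyed by
   # incoming label: ("swap", outgoing_label, next_hop) or ("pop", next_hop).
   nhlfe = {100: ("swap", 200, "lsr-b"), 200: ("pop", "egress")}

   def forward(entry):
       action = nhlfe[decode_label(entry)]
       if action[0] == "swap":                   # replace the incoming label
           return encode_entry(action[1]), action[2]
       return None, action[1]                    # pop before leaving the domain

   print(forward(encode_entry(100)))             # swapped label towards lsr-b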
MPLS is a powerful technology for Internet TE because it supports explicit LSPs which allow constraint-based routing to be implemented efficiently in IP networks [AWD2]. The requirements for TE over MPLS are described in [RFC2702]. Extensions to RSVP to support instantiation of explicit LSPs are discussed in [RFC3209] and Section 5.1.3.4.¶
RSVP-TE is a protocol extension of RSVP (Section 5.1.3.2) for traffic engineering. The base specification is found in [RFC3209]. RSVP-TE enables the establishment of traffic-engineered MPLS LSPs (TE LSPs), using loose or strict paths, and taking into consideration network constraints such as available bandwidth. The extension supports signaling LSPs on explicit paths that could be administratively specified, or computed by a suitable entity (such as a PCE, Section 5.1.3.11) based on QoS and policy requirements, taking into consideration the prevailing network state as advertised by the IGP TE extensions for IS-IS [RFC5305], OSPFv2 [RFC3630], and OSPFv3 [RFC5329]. RSVP-TE enables the reservation of resources (for example, bandwidth) along the path.¶
RSVP-TE includes the ability to preempt LSPs based on priorities, and uses link affinities to include or exclude links from the LSPs. The protocol is further extended to support Fast Reroute (FRR) [RFC4090], Diffserv [RFC4124], and bidirectional LSPs [RFC7551]. RSVP-TE extensions for support for GMPLS (see Section 5.1.3.5) are specified in [RFC3473].¶
Requirements for point-to-multipoint (P2MP) MPLS TE LSPs are documented in [RFC4461], and signaling protocol extensions for setting up P2MP MPLS TE LSPs via RSVP-TE are defined in [RFC4875] where a P2MP LSP comprises multiple source-to-leaf (S2L) sub-LSPs. To determine the paths for P2MP LSPs, selection of the branch points (based on capabilities, network state, and policies) is key [RFC5671].¶
RSVP-TE has evolved to provide real-time dynamic metrics for path selection for low latency paths using extensions to IS-IS [RFC8570] and OSPF [RFC7471] based on STAMP [RFC8972] and TWAMP [RFC5357] performance measurements.¶
RSVP-TE has historically been used when bandwidth was constrained. However, as bandwidth has increased, RSVP-TE has developed into a bandwidth management tool to provide bandwidth efficiency and proactive resource management.¶
GMPLS extends MPLS control protocols to encompass time-division (e.g., Synchronous Optical Network / Synchronous Digital Hierarchy (SONET/SDH), Plesiochronous Digital Hierarchy (PDH), Optical Transport Network (OTN)), wavelength (lambdas), and spatial switching (e.g., incoming port or fiber to outgoing port or fiber) as well as continuing to support packet switching. GMPLS provides a common set of control protocols for all of these layers (including some technology-specific extensions) each of which has a distinct data or forwarding plane. GMPLS covers both the signaling and the routing part of that control plane and is based on the TE extensions to MPLS (see Section 5.1.3.4).¶
In GMPLS [RFC3945], the original MPLS architecture is extended to include LSRs whose forwarding planes rely on circuit switching, and therefore cannot forward data based on the information carried in either packet or cell headers. Specifically, such LSRs include devices where the switching is based on time slots, wavelengths, or physical ports. These additions impact basic LSP properties: how labels are requested and communicated, the unidirectional nature of MPLS LSPs, how errors are propagated, and information provided for synchronizing the ingress and egress LSRs [RFC3473].¶
The IETF IP Performance Metrics (IPPM) working group has developed a set of standard metrics that can be used to monitor the quality, performance, and reliability of Internet services. These metrics can be applied by network operators, end-users, and independent testing groups to provide users and service providers with a common understanding of the performance and reliability of the Internet component 'clouds' they use/provide [RFC2330]. The criteria for performance metrics developed by the IPPM working group are described in [RFC2330]. Examples of performance metrics include one-way packet loss [RFC7680], one-way delay [RFC7679], and connectivity measures between two nodes [RFC2678]. Other metrics include second-order measures of packet loss and delay.¶
Some of the performance metrics specified by the IPPM working group are useful for specifying SLAs. SLAs are sets of service level objectives negotiated between users and service providers, wherein each objective is a combination of one or more performance metrics, possibly subject to certain constraints.¶
The IETF Real Time Flow Measurement (RTFM) working group produced an architecture that defines a method to specify traffic flows as well as a number of components for flow measurement (meters, meter readers, manager) [RFC2722]. A flow measurement system enables network traffic flows to be measured and analyzed at the flow level for a variety of purposes. As noted in RFC 2722, a flow measurement system can be very useful in the following contexts:¶
A flow measurement system consists of meters, meter readers, and managers. A meter observes packets passing through a measurement point, classifies them into groups, accumulates usage data (such as the number of packets and bytes for each group), and stores the usage data in a flow table. A group may represent any collection of user applications, hosts, networks, etc. A meter reader gathers usage data from various meters so it can be made available for analysis. A manager is responsible for configuring and controlling meters and meter readers. The instructions received by a meter from a manager include flow specifications, meter control parameters, and sampling techniques. The instructions received by a meter reader from a manager include the address of the meter whose data are to be collected, the frequency of data collection, and the types of flows to be collected.¶
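A meter of the kind described above can be approximated in a few lines: packets are classified into groups according to a flow specification (here, simply the source and destination addresses, an illustrative choice), and packet and byte counts are accumulated in a flow table for a meter reader to collect later.¶
   from collections import defaultdict

   # Minimal meter: classify packets into groups and accumulate usage
   # data (packets and bytes) in a flow table.
   flow_table = defaultdict(lambda: {"packets": 0, "bytes": 0})

   def observe(packet):
       group = (packet["src"], packet["dst"])   # illustrative flow specification
       flow_table[group]["packets"] += 1
       flow_table[group]["bytes"] += packet["length"]

   observe({"src": "192.0.2.1", "dst": "198.51.100.7", "length": 1200})
   observe({"src": "192.0.2.1", "dst": "198.51.100.7", "length": 400})
   print(dict(flow_table))   # usage data available to a meter reader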
IP Flow Information Export (IPFIX) [RFC5470] defines an architecture that is very similar to the RTFM architecture and includes Metering, Exporting, and Collecting Processes. [RFC5472] describes the applicability of IPFIX and makes a comparison with RTFM, pointing out that, architecturally, while RTFM talks about devices, IPFIX deals with processes, to clarify that multiple such processes may be co-located on the same machine. The IPFIX protocol [RFC7011] is widely implemented.¶
[RFC3124] provides a set of congestion control mechanisms for use by transport protocols. It also allows the development of mechanisms for unifying congestion control across a subset of an endpoint's active unicast connections (called a congestion group). A congestion manager continuously monitors the state of the path for each congestion group under its control. The manager uses that information to instruct a scheduler on how to partition bandwidth among the connections of that congestion group.¶
[RFC5305] describes the extensions to the Intermediate System to Intermediate System (IS-IS) protocol to support TE; similarly, [RFC3630] specifies TE extensions for OSPFv2, and [RFC5329] does the same for OSPFv3.¶
IS-IS and OSPF share the common concept of TE extensions to distribute TE parameters such as link type and ID, local and remote IP addresses, TE metric, maximum bandwidth, maximum reservable bandwidth and unreserved bandwidth, and admin group. The information distributed by the IGPs in this way can be used to build a view of the state and capabilities of a TE network (see Section 5.1.3.14).¶
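The parameters listed above can be pictured as a per-link record in a TE database. The sketch below uses illustrative field names rather than the IS-IS or OSPF on-the-wire encodings; the eight unreserved-bandwidth values correspond to the eight priority levels.¶
   from dataclasses import dataclass

   # Illustrative TE link record built from IGP-TE advertisements;
   # field names are not the protocol encodings.
   @dataclass
   class TELink:
       link_id: str
       local_addr: str
       remote_addr: str
       te_metric: int
       max_bw: float                # maximum bandwidth (bps)
       max_reservable_bw: float     # maximum reservable bandwidth (bps)
       unreserved_bw: list          # one value per priority level (0-7)
       admin_group: int             # administrative group (color) bit mask

   link = TELink("r1-r2", "192.0.2.1", "192.0.2.2", te_metric=10,
                 max_bw=10e9, max_reservable_bw=8e9,
                 unreserved_bw=[8e9] * 8, admin_group=0b0001)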
The difference between IS-IS and OSPF is in the details of how they encode and transmit the TE parameters:¶
In a number of environments, a component external to a network is called upon to perform computations based on the network topology and current state of the connections within the network, including TE information. Such information is typically distributed by the IGP routing protocols within the network (see Section 5.1.3.9).¶
The Border Gateway Protocol (BGP) (see also Section 7) is one of the essential routing protocols that glue the Internet together. BGP Link State (BGP-LS) [RFC7752] is a mechanism by which link-state and TE information can be collected from networks and shared with external components using the BGP routing protocol. The mechanism is applicable to physical and virtual IGP links, and is subject to policy control.¶
Information collected by BGP-LS can be used, for example, to construct the TED (Section 5.1.3.14) for use by the Path Computation Element (PCE, see Section 5.1.3.11), or may be used by Application-Layer Traffic Optimization (ALTO) servers (see Section 5.1.2.1).¶
Constraint-based path computation is a fundamental building block for TE in MPLS and GMPLS networks. Path computation in large, multi-domain networks is complex and may require special computational components and cooperation between the elements in different domains. The Path Computation Element (PCE) [RFC4655] is an entity (component, application, or network node) that is capable of computing a network path or route based on a network graph and applying computational constraints.¶
Thus, a PCE can provide a central component in a TE system operating on the TE Database (TED, see Section 5.1.3.14) with delegated responsibility for determining paths in MPLS, GMPLS, or Segment Routing networks. The PCE uses the Path Computation Element Communication Protocol (PCEP) [RFC5440] to communicate with Path Computation Clients (PCCs), such as MPLS LSRs, to answer their requests for computed paths or to instruct them to initiate new paths [RFC8281] and maintain state about paths already installed in the network [RFC8231].¶
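In its simplest form, the constraint-based computation a PCE performs amounts to pruning from the TED any links that fail the constraints and then running a shortest-path algorithm over what remains. The sketch below prunes on unreserved bandwidth only; a real PCE supports many more constraints, objective functions, and PCEP interactions, and the topology is illustrative.¶
   import heapq

   # Simplest possible constrained path computation: prune links with
   # insufficient unreserved bandwidth, then run Dijkstra on the rest.
   def cspf(links, src, dst, required_bw):
       graph = {}
       for (a, b), (metric, unreserved_bw) in links.items():
           if unreserved_bw >= required_bw:          # constraint pruning
               graph.setdefault(a, []).append((metric, b))
       queue, seen = [(0, src, [src])], set()
       while queue:
           cost, node, path = heapq.heappop(queue)
           if node == dst:
               return cost, path
           if node in seen:
               continue
           seen.add(node)
           for metric, nxt in graph.get(node, []):
               heapq.heappush(queue, (cost + metric, nxt, path + [nxt]))
       return None

   links = {("A", "B"): (10, 5), ("A", "C"): (20, 50), ("C", "B"): (20, 50)}
   print(cspf(links, "A", "B", required_bw=10))   # (40, ['A', 'C', 'B'])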
PCEs form key components of a number of TE systems. More information about the applicability of PCE can be found in [RFC8051], while [RFC6805] describes the application of PCE to determining paths across multiple domains. PCE also has potential use in Abstraction and Control of TE Networks (ACTN) (see Section 5.1.2.2), Centralized Network Control [RFC8283], and Software Defined Networking (SDN) (see Section 4.3.2).¶
The Segment Routing (SR) architecture [RFC8402] leverages the source routing and tunneling paradigms. The path a packet takes is defined at the ingress and the packet is tunneled to the egress.¶
In a protocol realization, an ingress node steers a packet using a set of instructions, called segments, that are included in an SR header prepended to the packet: a label stack in the MPLS case, and a series of 128-bit segment identifiers in the IPv6 case.¶
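In the SR-MPLS case, the ingress simply pushes the chosen segment list onto the packet as a label stack with the first segment on top; in the IPv6 case the same list is carried as 128-bit SIDs. The sketch below shows only the SR-MPLS construction, and the SID values are illustrative assumptions.¶
   # Illustrative SR-MPLS encapsulation at the ingress: the segment
   # list chosen for the path becomes a label stack prepended to the
   # packet, with the first segment on top of the stack.
   def encapsulate(packet, segment_list):
       return {"labels": list(segment_list), "payload": packet}

   # Hypothetical SIDs: two node segments and one adjacency segment.
   sr_packet = encapsulate(b"ip-packet", [16005, 24023, 16009])
   print(sr_packet["labels"])        # [16005, 24023, 16009]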
Segments are identified by Segment Identifiers (SIDs). There are four types of SID that are relevant for TE.¶
Binding SID: A Binding SID has two purposes:¶
A segment can represent any instruction, topological or service-based. SIDs can be looked up in a global context (domain wide) as well as in some other context (see, for example, "context labels" in Section 3 of [RFC5331]).¶
The application of "policy" to Segment Routing can make SR into a TE mechanism as described in Section 5.1.1.3.¶
Bit Index Explicit Replication (BIER) [RFC8279] specifies an encapsulation for multicast forwarding that can be used on MPLS or Ethernet transports. A mechanism known as Tree Engineering for Bit Index Explicit Replication (BIER-TE) [RFC9262] provides a component that could be used to build a traffic-engineered multicast system. BIER-TE does not of itself offer full traffic engineering, and the abbreviation "TE" does not, in this case, refer to traffic engineering.¶
In BIER-TE, path steering is supported via the definition of a bitstring attached to each packet that determines how the packet is forwarded and replicated within the network. Thus, this bitstring steers the traffic within the network and forms an element of a traffic engineering system. A central controller that is aware of the capabilities and state of the network as well as the demands of the various traffic flows, is able to select multicast paths that take account of the available resources and demands. This controller, therefore, is responsible for the policy elements of traffic engineering.¶
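A much-simplified model of this bitstring-driven steering is sketched below: each bit position identifies one directed adjacency, and a node replicates the packet onto each of its local adjacencies whose bit is set, clearing its local bits in the copies it sends so that packets are not duplicated. The topology and bit assignments are illustrative.¶
   # Much-simplified BIER-TE style replication: bit positions identify
   # directed adjacencies; this node copies the packet onto each local
   # adjacency whose bit is set, clearing its local bits in the copies.
   local_adjacencies = {1: "to-B", 2: "to-C"}   # bit position -> adjacency

   def replicate(bitstring):
       copies = []
       local_mask = sum(1 << bit for bit in local_adjacencies)
       for bit, adjacency in local_adjacencies.items():
           if bitstring & (1 << bit):
               copies.append((adjacency, bitstring & ~local_mask))
       return copies

   # Controller-computed bitstring asking this node to replicate to B and C.
   print(replicate(0b110))           # [('to-B', 0), ('to-C', 0)]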
Resource management has implications for the forwarding plane beyond the steering of packets defined for BIER-TE. These include the allocation of buffers to meet the requirements of admitted traffic, and may include policing and/or rate-shaping mechanisms achieved via various forms of queuing. This level of resource control, while optional, is important in networks that wish to support congestion management policies to control or regulate the offered traffic to deliver different levels of service and alleviate congestion problems, or those networks that wish to control latencies experienced by specific traffic flows.¶
The network states that are relevant to TE need to be stored in the system and presented to the user. The Traffic Engineering Database (TED) is a collection of all TE information about all TE nodes and TE links in the network. It is an essential component of a TE system, such as MPLS-TE [RFC2702] or GMPLS [RFC3945]. In order to formally define the data in the TED and to present the data to the user, the data modeling language YANG [RFC7950] can be used as described in [RFC8795].¶
The TE control system needs to have a management interface that is human-friendly and a control interface that is programmable for automation. The Network Configuration Protocol (NETCONF) [RFC6241] or the RESTCONF Protocol [RFC8040] provide programmable interfaces that are also human-friendly. These protocols use XML or JSON encoded messages. When message compactness or protocol bandwidth consumption needs to be optimized for the control interface, other protocols, such as Group Communication for the Constrained Application Protocol (CoAP) [RFC7390] or gRPC [GRPC], are available, especially when the protocol messages are encoded in a binary format. Along with any of these protocols, the data modeling language YANG [RFC7950] can be used to formally and precisely define the interface data.¶
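As an illustration of such a programmable control interface, the sketch below issues a RESTCONF-style GET for YANG-modeled topology data. The controller host, credentials, and exact resource path are hypothetical, and the data available depends on the YANG modules the server implements (for example, the TE topology model of [RFC8795]).¶
   import requests   # third-party HTTP library, assumed to be available

   # Hypothetical RESTCONF request for YANG-modeled topology data.
   # The host, credentials, and data path are illustrative; the paths
   # available depend on the YANG modules the server supports.
   url = "https://controller.example.net/restconf/data/ietf-network:networks"
   response = requests.get(url,
                           headers={"Accept": "application/yang-data+json"},
                           auth=("admin", "admin-password"))
   print(response.status_code, response.json())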
The Path Computation Element Communication Protocol (PCEP) [RFC5440] is another protocol that has evolved to be an option for the TE system control interface. The messages of PCEP are TLV-based, not defined by a data modeling language such as YANG.¶
The Internet is dominated by client-server interactions, principally Web traffic and multimedia streams, although in the future, more sophisticated media servers may become dominant. The location and performance of major information servers has a significant impact on the traffic patterns within the Internet as well as on the perception of service quality by end users.¶
A number of dynamic load-balancing techniques have been devised to improve the performance of replicated information servers. These techniques can cause spatial traffic characteristics to become more dynamic in the Internet because information servers can be dynamically picked based upon the location of the clients, the location of the servers, the relative utilization of the servers, the relative performance of different networks, and the relative performance of different parts of a network. This process of assignment of distributed servers to clients is called traffic directing. It is an application layer function.¶
Traffic directing schemes that allocate servers in multiple geographically dispersed locations to clients may require empirical network performance statistics to make more effective decisions. In the future, network measurement systems may need to provide this type of information.¶
When congestion exists in the network, traffic directing and traffic engineering systems should act in a coordinated manner. This topic is for further study.¶
The issues related to location and replication of information servers, particularly web servers, are important for Internet traffic engineering because these servers contribute a substantial proportion of Internet traffic.¶
This section describes high-level recommendations for traffic engineering in the Internet in general terms.¶
The recommendations describe the capabilities needed to solve a TE problem or to achieve a TE objective. Broadly speaking, these recommendations can be categorized as either functional or non-functional recommendations.¶
The subsections that follow first summarize the non-functional requirements, and then detail the functional requirements.¶
The generic non-functional recommendations for Internet traffic engineering are listed in the paragraphs that follow. In a given context, some of these recommendations may be critical while others may be optional. Therefore, prioritization may be required during the development phase of a TE system to tailor it to a specific operational context.¶
Routing control is a significant aspect of Internet traffic engineering. Routing impacts many of the key performance measures associated with networks, such as throughput, delay, and utilization. Generally, it is very difficult to provide good service quality in a wide area network without effective routing control. A desirable TE routing system is one that takes traffic characteristics and network constraints into account during route selection while maintaining stability.¶
Shortest path first (SPF) IGPs are based on shortest path algorithms and have limited control capabilities for TE [RFC2702], [AWD2]. These limitations include:¶
Pure SPF protocols do not take network constraints and traffic characteristics into account during route selection. For example, IGPs always select the shortest paths based on link metrics assigned by administrators, so load sharing cannot be performed across paths of different costs. Note that link metrics are assigned following a range of operator-selected policies that might reflect preference for the use of some links over others, and "shortest" might not, therefore, be purely a measure of distance. Using shortest paths to forward traffic may cause the following problems:¶
Because of these limitations, capabilities are needed to enhance the routing function in IP networks. Some of these capabilities are summarized below.¶
Traffic mapping is the assignment of traffic workload onto (pre-established) paths to meet certain requirements. Thus, while constraint-based routing deals with path selection, traffic mapping deals with the assignment of traffic to established paths which may have been generated by constraint-based routing or by some other means. Traffic mapping can be performed by time-dependent or state-dependent mechanisms, as described in Section 4.1.¶
An important aspect of the traffic mapping function is the ability to establish multiple paths between an originating node and a destination node, and the capability to distribute the traffic between the two nodes across the paths according to configured policies. A precondition for this scheme is the existence of flexible mechanisms to partition traffic and then assign the traffic partitions onto the parallel paths (described as "parallel traffic trunks" in [RFC2702]). When traffic is assigned to multiple parallel paths, it is recommended that special care should be taken to ensure proper ordering of packets belonging to the same application (or traffic flow) at the destination node of the parallel paths.¶
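A common way to partition traffic across parallel paths while preserving per-flow packet ordering is to hash the invariant fields of each packet (for example, the 5-tuple) and use the result to select the path, as sketched below; packets of the same flow always select the same path. The path names are illustrative.¶
   import hashlib

   # Hash-based partitioning: packets of the same flow always hash to
   # the same path (preserving per-flow ordering), while different
   # flows are spread across the parallel paths.
   paths = ["path-1", "path-2", "path-3"]       # illustrative parallel paths

   def select_path(src, dst, proto, sport, dport):
       key = f"{src}|{dst}|{proto}|{sport}|{dport}".encode()
       digest = int(hashlib.sha256(key).hexdigest(), 16)
       return paths[digest % len(paths)]

   print(select_path("192.0.2.1", "198.51.100.7", 6, 12345, 443))
   print(select_path("192.0.2.1", "198.51.100.7", 6, 12345, 443))  # same path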
Mechanisms that perform the traffic mapping functions should aim to map the traffic onto the network infrastructure to minimize congestion. If the total traffic load cannot be accommodated, or if the routing and mapping functions cannot react fast enough to changing traffic conditions, then a traffic mapping system may use short timescale congestion control mechanisms (such as queue management, scheduling, etc.) to mitigate congestion. Thus, mechanisms that perform the traffic mapping functions complement existing congestion control mechanisms. In an operational network, traffic should be mapped onto the infrastructure such that intra-class and inter-class resource contention are minimized (see Section 2).¶
When traffic mapping techniques that depend on dynamic state feedback (e.g., MATE [MATE] and similar schemes) are used, special care must be taken to guarantee network stability.¶
The importance of measurement in TE has been discussed throughout this document. A TE system should include mechanisms to measure and collect statistics from the network to support the TE function. Additional capabilities may be needed to help in the analysis of the statistics. The actions of these mechanisms should not adversely affect the accuracy and integrity of the statistics collected. The mechanisms for statistical data acquisition should also be able to scale as the network evolves.¶
Traffic statistics may be classified according to long-term or short-term timescales. Long-term traffic statistics are very useful for traffic engineering. Long-term traffic statistics may periodically record network workload (such as hourly, daily, and weekly variations in traffic profiles) as well as traffic trends. Aspects of the traffic statistics may also describe class of service characteristics for a network supporting multiple classes of service. Analysis of the long-term traffic statistics may yield other information such as busy-hour characteristics, traffic growth patterns, persistent congestion problems, hot-spots, and imbalances in link utilization caused by routing anomalies.¶
A mechanism for constructing traffic matrices for both long-term and short-term traffic statistics should be in place. In multi-service IP networks, the traffic matrices may be constructed for different service classes. Each element of a traffic matrix represents a statistic about the traffic flow between a pair of abstract nodes. An abstract node may represent a router, a collection of routers, or a site in a VPN.¶
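A traffic matrix of this kind can be accumulated directly from flow records: each record adds its byte count to the cell indexed by its ingress and egress abstract nodes. The node names and volumes below are illustrative.¶
   from collections import defaultdict

   # Accumulate a simple traffic matrix: bytes carried between pairs
   # of abstract nodes (here, hypothetical ingress and egress routers).
   traffic_matrix = defaultdict(int)

   def add_flow_record(ingress_node, egress_node, byte_count):
       traffic_matrix[(ingress_node, egress_node)] += byte_count

   add_flow_record("PE1", "PE3", 10_000_000)
   add_flow_record("PE1", "PE3", 4_000_000)
   add_flow_record("PE2", "PE3", 7_000_000)
   print(dict(traffic_matrix))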
Traffic statistics should provide reasonable and reliable indicators of the current state of the network on the short-term scale. Some short-term traffic statistics may reflect link utilization and link congestion status. Examples of congestion indicators include excessive packet delay, packet loss, and high resource utilization. Examples of mechanisms for distributing this kind of information include SNMP, probing tools, FTP, IGP link state advertisements, and NETCONF/RESTCONF.¶
The recommendations in Section 6.2 and Section 6.3 may be sub-optimal or even ineffective if the amount of traffic flowing on a route or path exceeds the capacity of the resource on that route or path. Several approaches can be used to increase the performance of TE systems.¶
Combining some element of all three of these measures is advisable to achieve a better TE system.¶
Network survivability refers to the capability of a network to maintain service continuity in the presence of faults. This can be accomplished by promptly recovering from network impairments and maintaining the required QoS for existing services after recovery. Survivability is an issue of great concern within the Internet community due to the demand to carry mission-critical traffic, real-time traffic, and other high priority traffic over the Internet. Survivability can be addressed at the device level by developing network elements that are more reliable; and at the network level by incorporating redundancy into the architecture, design, and operation of networks. It is recommended that a philosophy of robustness and survivability should be adopted in the architecture, design, and operation of TE used to control IP networks (especially public IP networks). Because different contexts may demand different levels of survivability, the mechanisms developed to support network survivability should be flexible so that they can be tailored to different needs. A number of tools and techniques have been developed to enable network survivability, including MPLS Fast Reroute [RFC4090], Topology Independent Loop-free Alternate Fast Re-route for Segment Routing [I-D.ietf-rtgwg-segment-routing-ti-lfa], RSVP-TE Extensions in Support of End-to-End GMPLS Recovery [RFC4872], and GMPLS Segment Recovery [RFC4873].¶
The impact of service outages varies significantly for different service classes depending on the duration of the outage which can vary from milliseconds (with minor service impact) to seconds (with possible call drops for IP telephony and session timeouts for connection-oriented transactions) to minutes and hours (with potentially considerable social and business impact). Outages of different durations have different impacts depending on the nature of the traffic flows that are interrupted.¶
Failure protection and restoration capabilities are available in multiple layers as network technologies have continued to evolve. Optical networks are capable of providing dynamic ring and mesh restoration functionality at the wavelength level. At the SONET/SDH layer survivability capability is provided with Automatic Protection Switching (APS) as well as self-healing ring and mesh architectures. Similar functionality is provided by layer 2 technologies such as Ethernet.¶
Rerouting is used at the IP layer to restore service following link and node outages. Rerouting at the IP layer occurs after a period of routing convergence which may require seconds to minutes to complete. Path-oriented technologies such as MPLS ([RFC3469]) can be used to enhance the survivability of IP networks in a potentially cost-effective manner.¶
An important aspect of multi-layer survivability is that technologies at different layers may provide protection and restoration capabilities at different granularities in terms of time scales and at different bandwidth granularity (from the level of packets to that of wavelengths). Protection and restoration capabilities can also be sensitive to different service classes and different network utility models. Coordinating different protection and restoration capabilities across multiple layers in a cohesive manner to ensure network survivability is maintained at reasonable cost is a challenging task. Protection and restoration coordination across layers may not always be feasible, because networks at different layers may belong to different administrative domains.¶
The following paragraphs present some of the general recommendations for protection and restoration coordination.¶
Because MPLS is path-oriented, it has the potential to provide faster and more predictable protection and restoration capabilities than conventional hop-by-hop routed IP systems. Protection types for MPLS networks can be divided into four categories.¶
See [RFC3469] and [RFC6372] for a more comprehensive discussion of MPLS based recovery.¶
Another issue to consider is the concept of protection options. We use notation such as "m:n protection", where m is the number of protection LSPs used to protect n working LSPs. In all cases except 1+1 protection, the resources associated with the protection LSPs can be used to carry preemptable best-effort traffic when the working LSP is functioning correctly.¶
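The sketch below illustrates the m:n notion in the simplest terms: a protection group records how many protection LSPs back how many working LSPs, and (except in the 1+1 case) the protection resources may carry preemptable best-effort traffic while the working LSPs are healthy. It is an illustration of the notation only, not a recovery implementation.¶
   # Illustrative m:n protection group: m protection LSPs shared by n
   # working LSPs. Except for 1+1, idle protection capacity can carry
   # preemptable best-effort traffic.
   class ProtectionGroup:
       def __init__(self, m, n, one_plus_one=False):
           self.m, self.n = m, n
           self.one_plus_one = one_plus_one
           self.failed_working = set()

       def best_effort_allowed(self):
           # 1+1 always duplicates traffic onto the protection LSP.
           return not self.one_plus_one and len(self.failed_working) < self.m

       def working_lsp_failed(self, lsp):
           self.failed_working.add(lsp)   # protection in use, best effort preempted

   group = ProtectionGroup(m=1, n=3)      # "1:3" protection
   print(group.best_effort_allowed())     # True
   group.working_lsp_failed("working-lsp-2")
   print(group.best_effort_allowed())     # False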
Networks are often implemented as layers. A layer relationship may represent the interaction between technologies (for example, an IP network operated over an optical network), or the relationship between different network operators (for example, a customer network operated over a service provider's network). Note that a multi-layer network does not imply the use of multiple technologies, although some form of encapsulation is often applied.¶
Multi-layer traffic engineering presents a number of challenges associated with scalability and confidentiality. These issues are addressed in [RFC7926] which discusses the sharing of information between domains through policy filters, aggregation, abstraction, and virtualization. That document also discusses how existing protocols can support this scenario with special reference to BGP-LS (see Section 5.1.3.10).¶
PCE (see Section 5.1.3.11) is also a useful tool for multi-layer networks as described in [RFC6805], [RFC8685], and [RFC5623]. Signaling techniques for multi-layer TE are described in [RFC6107].¶
See also Section 6.6 for examination of multi-layer network survivability.¶
Increasing requirements to support multiple classes of traffic in the Internet, such as best effort and mission critical data, call for IP networks to differentiate traffic according to some criteria and to give preferential treatment to certain types of traffic. Large numbers of flows can be aggregated into a few behavior aggregates based on common performance requirements in terms of packet loss ratio, delay, and jitter, or on common fields within the IP packet headers.¶
Differentiated Services (Diffserv) [RFC2475] can be used to ensure that SLAs defined to differentiate between traffic flows are met. Classes of service (CoS) can be supported in a Diffserv environment by concatenating per-hop behaviors (PHBs) along the routing path. A PHB is the forwarding behavior that a packet receives at a Diffserv-compliant node, and it can be configured at each router. PHBs are delivered using buffer management and packet scheduling mechanisms and require that the ingress nodes use traffic classification, marking, policing, and shaping.¶
TE can complement Diffserv to improve utilization of network resources. TE can be operated on an aggregated basis across all service classes [RFC3270], or on a per-service class basis. The former is used to provide better distribution of the traffic load over the network resources (see [RFC3270] for detailed mechanisms to support aggregate TE). The latter case is discussed below since it is specific to the Diffserv environment, with so called Diffserv-aware traffic engineering [RFC4124].¶
For some Diffserv networks, it may be desirable to control the performance of some service classes by enforcing relationships between the traffic workload contributed by each service class and the amount of network resources allocated or provisioned for that service class. Such relationships between demand and resource allocation can be enforced using a combination of, for example:¶
It may also be desirable to limit the performance impact of high-priority traffic on relatively low-priority traffic. This can be achieved, for example, by controlling the percentage of high-priority traffic that is routed through a given link. Another way to accomplish this is to increase link capacities appropriately so that lower-priority traffic can still enjoy adequate service quality. When the ratio of traffic workload contributed by different service classes varies significantly from router to router, it may not be enough to rely on conventional IGP routing protocols or on TE mechanisms that are not sensitive to different service classes. Instead, it may be desirable to perform TE, especially routing control and mapping functions, on a per-service class basis. One way to accomplish this in a domain that supports both MPLS and Diffserv is to define class-specific LSPs and to map traffic from each class onto one or more LSPs that correspond to that service class. An LSP corresponding to a given service class can then be routed and protected/restored in a class-dependent manner, according to specific policies.¶
Performing TE on a per-class basis may require per-class parameters to be distributed. It is common to have some classes share some aggregate constraints (e.g., maximum bandwidth requirement) without enforcing the constraint on each individual class. These classes can be grouped into class-types, and per-class-type parameters can be distributed to improve scalability. This also allows better bandwidth sharing between classes in the same class-type. A class-type is a set of classes that satisfy the following two conditions:¶
See [RFC4124] for detailed requirements on Diffserv-aware TE.¶
Offline and online (see Section 4.2) TE considerations are of limited utility if the network cannot be controlled effectively to implement the results of TE decisions and to achieve the desired network performance objectives.¶
Capacity augmentation is a coarse-grained solution to TE issues. It is, however, simple, may be applied by creating parallel links that form part of an ECMP scheme, and may be advantageous if bandwidth is abundant and cheap. But bandwidth is not always abundant and cheap, and additional capacity might not always be the best solution. Adjustments of administrative weights and other parameters associated with routing protocols provide finer-grained control, but this approach is difficult to use and imprecise because of the way routing protocol interactions play out across the network.¶
Control mechanisms can be manual (e.g., static configuration), partially-automated (e.g., scripts), or fully-automated (e.g., policy based management systems). Automated mechanisms are particularly useful in large-scale networks. Multi-vendor interoperability can be facilitated by standardized management tools (e.g., YANG models) to support the control functions required to address TE objectives.¶
Network control functions should be secure, reliable, and stable as these are often needed to operate correctly in times of network impairments (e.g., during network congestion or attacks).¶
Inter-domain TE is concerned with performance optimization for traffic that originates in one administrative domain and terminates in a different one.¶
BGP [RFC4271] is the standard exterior gateway protocol used to exchange routing information between autonomous systems (ASes) in the Internet. BGP includes a decision process that calculates the preference for routes to a given destination network. There are two fundamental aspects to inter-domain TE using BGP:¶
Most BGP implementations provide constructs that facilitate the implementation of complex BGP policies based on pre-configured logical conditions. These can be used to control import and export of incoming and outgoing routes, control redistribution of routes between BGP and other protocols, and influence the selection of best paths by manipulating the attributes (either standardized, or local to the implementation) associated with the BGP decision process.¶
When considering inter-domain TE with BGP, note that the outbound traffic exit point is controllable, whereas the interconnection point where inbound traffic is received typically is not. Therefore, it is up to each individual network to implement TE strategies that deal with the efficient delivery of outbound traffic from its customers to its peering points. The vast majority of TE policy is based on a "closest exit" strategy, which offloads inter-domain traffic at the nearest outbound peering point towards the destination AS. Most methods of manipulating the point at which inbound traffic enters are either ineffective, or not accepted in the peering community.¶
Inter-domain TE with BGP is generally effective, but it is usually applied in a trial-and-error fashion because a TE system usually only has a view of the available network resources within one domain (an AS in this case). A systematic approach for inter-domain TE requires cooperation between the domains. Further, what may be considered a good solution in one domain may not necessarily be a good solution in another. Moreover, it is generally considered inadvisable for one domain to permit a control process from another domain to influence the routing and management of traffic in its network.¶
MPLS TE-tunnels (LSPs) can add a degree of flexibility in the selection of exit points for inter-domain routing by applying the concept of relative and absolute metrics. If BGP attributes are defined such that the BGP decision process depends on IGP metrics to select exit points for inter-domain traffic, then some inter-domain traffic destined to a given peer network can be made to prefer a specific exit point by establishing a TE-tunnel between the router making the selection and the preferred peering point, and assigning the TE-tunnel a metric which is smaller than the IGP cost to all other peering points. RSVP-TE protocol extensions for inter-domain MPLS and GMPLS are described in [RFC5151].¶
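A small worked example of this technique, with illustrative numbers: if the decision process breaks ties on the IGP metric to the exit point, assigning a TE-tunnel toward the preferred exit a metric lower than the IGP cost to every other exit causes that exit to be selected.¶
   # Illustrative exit selection: the decision process picks the exit
   # with the lowest metric. A TE-tunnel with an administratively
   # assigned metric can override the plain IGP costs.
   igp_cost_to_exit = {"exit-A": 30, "exit-B": 20}

   def select_exit(costs):
       return min(costs, key=costs.get)

   print(select_exit(igp_cost_to_exit))    # exit-B when only IGP costs apply

   # A TE-tunnel to exit-A is assigned a metric smaller than the IGP
   # cost to any other peering point.
   with_tunnel = dict(igp_cost_to_exit)
   with_tunnel["exit-A"] = 10
   print(select_exit(with_tunnel))         # exit-A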
Similarly to intra-domain TE, inter-domain TE is best accomplished when a traffic matrix can be derived to depict the volume of traffic from one AS to another.¶
Layer 4 multipath transport protocols are designed to move traffic between domains and to allow control of the selection of the paths. To be truly effective, these protocols would require visibility of paths and network conditions in other domains, and that information may not be available, might not be complete, and is not necessarily trustworthy.¶
This section provides an overview of some TE practices in IP networks. The focus is on aspects of control of the routing function in operational contexts. The intent here is to provide an overview of the commonly used practices: the discussion is not intended to be exhaustive.¶
Service providers apply many of the TE mechanisms described in this document to optimize the performance of their IP networks, although others choose not to use any of them. These techniques include capacity planning (including adding ECMP options) for long timescales; routing control using IGP metrics and MPLS, as well as path planning and path control using MPLS and Segment Routing, for medium timescales; and traffic management mechanisms for short timescales.¶
Administrators of MPLS-TE networks specify and configure link attributes and resource constraints such as maximum reservable bandwidth and resource class attributes for the links in the domain. A link state IGP that supports TE extensions (IS-IS-TE or OSPF-TE) is used to propagate information about network topology and link attributes to all routers in the domain. Network administrators specify the LSPs that are to originate at each router. For each LSP, the network administrator specifies the destination node and the attributes of the LSP which indicate the requirements that are to be satisfied during the path selection process. The attributes may include an explicit path for the LSP to follow, or the originating router may use a local constraint-based routing process to compute the path of the LSP. RSVP-TE is used as a signaling protocol to instantiate the LSPs. By assigning proper bandwidth values to links and LSPs, congestion caused by uneven traffic distribution can be avoided or mitigated.¶
The bandwidth attributes of an LSP relate to the bandwidth requirements of traffic that flows through the LSP. The traffic attribute of an LSP can be modified to accommodate persistent shifts in demand (traffic growth or reduction). If network congestion occurs due to unexpected events, existing LSPs can be rerouted to alleviate the situation, or the network administrator can configure new LSPs to divert some traffic to alternative paths. The reservable bandwidth of the congested links can also be reduced to force some LSPs to be rerouted to other paths. A traffic matrix in an MPLS domain can also be estimated by monitoring the traffic on LSPs. Such traffic statistics can be used for a variety of purposes including network planning and network optimization.¶
Network management and planning systems have evolved and assumed a lot of the responsibility for determining traffic paths in TE networks. This allows a network-wide view of resources, and facilitates coordination of the use of resources for all traffic flows in the network. Initial solutions using a PCE to perform path computation on behalf of network routers have given way to an approach that follows the SDN architecture. A stateful PCE is able to track all of the LSPs in the network and can redistribute them to make better use of the available resources. Such a PCE can form part of a network orchestrator that uses PCEP or some other configuration and management interface to instruct the signaling protocol or directly program the routers.¶
Segment Routing leverages a centralized TE controller and either an MPLS or IPv6 forwarding plane, but does not need to use a signaling protocol or management plane protocol to reserve resources in the routers. All resource reservation is logical within the controller, and not distributed to the routers. Packets are steered through the network using Segment Routing, and this may have configuration and operational scaling benefits.¶
As mentioned in Section 7, there is usually no direct control over the distribution of inbound traffic to a domain. Therefore, the main goal of inter-domain TE is to optimize the distribution of outbound traffic between multiple inter-domain links. When operating a geographically widespread network (such as for a multi-national or global network provider), maintaining the ability to operate the network in a regional fashion where desired, while continuing to take advantage of the benefits of a globally interconnected network, also becomes an important objective.¶
Inter-domain TE with BGP begins with the placement of multiple peering interconnection points that are in close proximity to traffic sources/destinations, and that offer the lowest-cost paths across the network between the peering points and the sources/destinations. Some location-decision problems that arise in association with inter-domain routing are discussed in [AWD5].¶
Once the locations of the peering interconnects have been determined and implemented, the network operator decides how best to handle the routes advertised by the peer, as well as how to propagate the peer's routes within their network. One way to engineer outbound traffic flows in a network with many peering interconnects is to create a hierarchy of peers. Generally, the shortest AS paths will be chosen to forward traffic but BGP metrics can be used to prefer some peers and so favor particular paths. Preferred peers are those peers attached through peering interconnects with the most available capacity. Changes may be needed, for example, to deal with a "problem peer" who is difficult to work with on upgrades or is charging high prices for connectivity to their network. In that case, the peer may be given a reduced preference. This type of change can affect a large amount of traffic, and is only used after other methods have failed to provide the desired results.¶
When there are multiple exit points toward a given peer, and only one of them is congested, it is not necessary to shift traffic away from the peer entirely, but only from the congested connection. This can be achieved by using passive IGP metrics, AS_PATH filtering, or prefix filtering.¶
This document does not introduce new security issues.¶
Network security is, of course, an important issue. In general, TE mechanisms are security-neutral: they may use tunnels which can slightly help protect traffic from inspection and which, in some cases, can be secured using encryption; they put traffic onto predictable paths within the network that may make it easier to find and attack; they increase the complexity of operation and management of the network; and they enable traffic to be steered onto more secure links or to more secure parts of the network.¶
The consequences of attacks on the control and management protocols used to operate TE networks can be significant: traffic can be hijacked to pass through specific nodes that perform inspection, or even to be delivered to the wrong place; traffic can be steered onto paths that deliver quality that is below the desired quality; and, networks can be congested or have resources on key links consumed. Thus, it is important to use adequate protection mechanisms on all protocols used to deliver TE.¶
Certain aspects of a network may be deduced from the details of the TE paths that are used. For example, the link connectivity of the network, and the quality and load on individual links may be inferred from knowing the paths of traffic and the requirements they place on the network (for example, by seeing the control messages or through path-trace techniques). Such knowledge can be used to launch targeted attacks (for example, taking down critical links) or can reveal commercially sensitive information (for example, whether a network is close to capacity). Network operators may, therefore, choose techniques that mask or hide information from within the network.¶
External control interfaces that are introduced to provide additional control and management of TE systems (see Section 5.1.2) provide flexibility to management and to customers, but do so at the risk of exposing the internals of a network to potentially malicious actors. The protocols used at these interfaces must be secured to protect against snooping and modification, and use of the interfaces must be authenticated.¶
This document makes no requests for IANA action.¶
Much of the text in this document is derived from RFC 3272. The editor and contributors to this document would like to express their gratitude to all involved in that work. Although the source text has been edited in the production of this document, the original authors should be considered as Contributors to this work. They were:¶
Daniel O. Awduche, Movaz Networks
Angela Chiu, Celion Networks
Anwar Elwalid, Lucent Technologies
Indra Widjaja, Bell Labs, Lucent Technologies
XiPeng Xiao, Redback Networks¶
The acknowledgements in RFC 3272 were as below. All those who helped in the production of that document are also thanked for their contributions that carry over into this new document.¶
The authors would like to thank Jim Boyle for inputs on the recommendations section, Francois Le Faucheur for inputs on Diffserv aspects, Blaine Christian for inputs on measurement, Gerald Ash for inputs on routing in telephone networks and for text on event-dependent TE methods, Steven Wright for inputs on network controllability, and Jonathan Aufderheide for inputs on inter-domain TE with BGP. Special thanks to Randy Bush for proposing the TE taxonomy based on "tactical versus strategic" methods. The subsection describing an "Overview of ITU Activities Related to Traffic Engineering" was adapted from a contribution by Waisum Lai. Useful feedback and pointers to relevant materials were provided by J. Noel Chiappa. Additional comments were provided by Glenn Grotefeld during the working last call process. Finally, the authors would like to thank Ed Kern, the TEWG co-chair, for his comments and support.¶
The early versions of this document were produced by the TEAS Working Group's RFC3272bis Design Team. The full list of members of this team is:¶
Acee Lindem, Adrian Farrel, Aijun Wang, Daniele Ceccarelli, Dieter Beller, Jeff Tantsura, Julien Meuric, Liu Hua, Loa Andersson, Luis Miguel Contreras, Martin Horneffer, Tarek Saad, and Xufeng Liu.¶
The production of this document includes a fix to the original text resulting from an Errata Report by Jean-Michel Grimaldi.¶
The editor of this document would also like to thank Dhruv Dhody, Gyan Mishra, Joel Halpern, Dave Taht, John Scudder, Rich Salz, Behcet Sarikaya, and Bob Briscoe for review comments.¶
This work is partially supported by the European Commission under Horizon 2020 grant agreement number 101015857 Secured autonomic traffic management for a Tera of SDN flows (Teraflow).¶
The following people contributed substantive text to this document:¶
Gert Grammel, Email: ggrammel@juniper.net
Loa Andersson, Email: loa@pi.nu
Xufeng Liu, Email: xufeng.liu.ietf@gmail.com
Lou Berger, Email: lberger@labn.net
Jeff Tantsura, Email: jefftant.ietf@gmail.com
Daniel King, Email: daniel@olddog.co.uk
Boris Hassanov, Email: bhassanov@yandex-team.ru
Kiran Makhijani, Email: kiranm@futurewei.com
Dhruv Dhody, Email: dhruv.ietf@gmail.com
Mohamed Boucadair, Email: mohamed.boucadair@orange.com¶
The changes to this document since RFC 3272 are substantial and not easily summarized as section-by-section changes. The material in the document has been moved around considerably, some of it removed, and new text added.¶
The approach taken here is to list the tables of contents of both the previous RFC and this document saying, respectively, where the text has been placed and where the text came from.¶
Edited in place in Section 1.¶
Retained as Section 2 with some text removed.¶
Rewritten as Section 2.3.¶
Edited as Section 2.4.¶
Retained as Section 5, but the very historic aspects have been deleted.¶
Retained as Section 5.1 with many new subsections.¶
Retained as Section 6.6.¶
Section 1: Based on Section 1 of RFC 3272.¶
Section 2: Based on Section 2 of RFC 3272.¶
Section 2.3: Based on Section 2.3 of RFC 3272.¶
Section 2.4: Based on Section 2.4 of RFC 3272.¶
Section 3: Based on Section 3 of RFC 3272.¶
Section 4: Based on Section 5 of RFC 3272.¶
Section 4.3: Based on Section 5.3 of RFC 3272.¶
Section 4.5: Based on Section 5.5 of RFC 3272.¶
Section 5: Based on Section 4 of RFC 3272.¶
Section 5.1: Based on Section 4.5 of RFC 3272.¶
Section 5.1.3.1: Based on Section 4.4 of RFC 3272.¶
Section 6: Based on Section 6 of RFC 3272.¶
Section 6.6: Based on Section 6.5 of RFC 3272.¶