This document specifies an Automatic Extended Route Optimization (AERO) service for IP internetworking over Overlay Multilink Network (OMNI) interfaces. AERO/OMNI use an IPv6 link-local address format that supports operation of the IPv6 Neighbor Discovery (IPv6 ND) protocol. Prefix delegation/registration services are employed for network admission and to manage the IP forwarding and routing systems. Secure multilink operation, mobility management, multicast, traffic path selection and route optimization are naturally supported through dynamic neighbor cache updates. AERO is a widely-applicable mobile internetworking service especially well-suited to aviation services, intelligent transportation systems, mobile end user devices and many other applications.¶
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."¶
This Internet-Draft will expire on 8 September 2022.¶
Copyright (c) 2022 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.¶
Automatic Extended Route Optimization (AERO) fulfills the requirements of Distributed Mobility Management (DMM) [RFC7333] and route optimization [RFC5522] for aeronautical networking and other network mobility use cases including intelligent transportation systems and enterprise mobile device users. AERO is a secure internetworking and mobility management service that employs the Overlay Multilink Network Interface (OMNI) [I-D.templin-6man-omni] Non-Broadcast, Multiple Access (NBMA) virtual link model. The OMNI link is a virtual overlay configured over one or more concatenated underlay Internetworks, and nodes on the link can exchange original IP packets as single-hop neighbors. The OMNI Adaptation Layer (OAL) supports multilink operation for increased reliability and path optimization while providing fragmentation and reassembly services to support improved performance and Maximum Transmission Unit (MTU) diversity. This specification provides a mobility service architecture companion to the OMNI specification.¶
The AERO service connects Hosts and Clients over Proxy/Servers and Relays that are seen as OMNI link neighbors; AERO further includes Gateways that interconnect diverse Internetworks as OMNI link segments through OAL forwarding at a layer below IP. Each node's OMNI interface uses an IPv6 link-local address format that supports operation of the IPv6 Neighbor Discovery (IPv6 ND) protocol [RFC4861]. A Client's OMNI interface can be configured over multiple underlay interfaces, and therefore appears as a single interface with multiple link-layer addresses. Each link-layer address is subject to change due to mobility and/or multilink fluctuations, and link-layer address changes are signaled by ND messaging the same as for any IPv6 link.¶
AERO provides a secure cloud-based service where mobile node Clients may use Proxy/Servers acting as proxies and/or designated routers while fixed nodes may use any Relay on the link for efficient communications. Fixed nodes forward original IP packets destined to other AERO nodes via the nearest Relay, which forwards them through the cloud. Mobile node Clients discover shortest paths to OMNI link neighbors through AERO route optimization. Both unicast and multicast communications are supported, and Clients may efficiently move between locations while maintaining continuous communications with correspondents and without changing their IP address.¶
AERO Gateways peer with Proxy/Servers in a secured private BGP overlay routing instance to establish a Segment Routing Topology (SRT) spanning tree over the underlay Internetworks of one or more disjoint administrative domains concatenated as a single unified OMNI link. Each OMNI link instance is characterized by the set of Mobility Service Prefixes (MSPs) common to all mobile nodes. Relays provide an optimal route from (fixed) correspondent nodes on underlay Internetworks to (mobile or fixed) nodes on the OMNI link. To the underlay Internetwork, the Relay is the source of a route to the MSP; hence uplink traffic to mobile nodes is naturally routed to the nearest Relay.¶
AERO can be used with OMNI links that span private-use Internetworks and/or public Internetworks such as the global Internet. In both cases, Clients may be located behind Network Address Translators (NATs) on the path to their associated Proxy/Servers. A means for robust traversal of NATs while avoiding "triangle routing" and critical infrastructure traffic concentration is therefore provided.¶
AERO assumes the use of PIM Sparse Mode in support of multicast communication. In support of Source Specific Multicast (SSM) when a Mobile Node is the source, AERO route optimization ensures that a shortest-path multicast tree is established with provisions for mobility and multilink operation. In all other multicast scenarios there are no AERO dependencies.¶
AERO provides a secure aeronautical internetworking service for both manned and unmanned aircraft, where the aircraft is treated as a mobile node that can connect an Internet of Things (IoT). AERO is also applicable to a wide variety of other use cases. For example, it can be used to coordinate the links of mobile nodes (e.g., cellphones, tablets, laptop computers, etc.) that connect into a home enterprise network via public access networks with VPN or non-VPN services enabled according to the appropriate security model. AERO can also be used to facilitate terrestrial vehicular and urban air mobility (as well as pedestrian communication services) for future intelligent transportation systems [I-D.ietf-ipwave-vehicular-networking][I-D.templin-ipwave-uam-its]. Other applicable use cases are also in scope.¶
Along with OMNI, AERO provides secured optimal routing support for the "6M's" of modern Internetworking, including:¶
The following numbered sections present the AERO specification. The appendices at the end of the document are non-normative.¶
The terminology in the normative references applies; especially, the terminology in the OMNI specification [I-D.templin-6man-omni] is used extensively throughout. The following terms are defined within the scope of this document:¶
Throughout the document, the simple terms "Host", "Client", "Proxy/Server", "Gateway" and "Relay" refer to "AERO Host", "AERO Client", "AERO Proxy/Server", "AERO Gateway" and "AERO Relay", respectively. Capitalization is used to distinguish these terms from other common Internetworking uses in which they appear without capitalization.¶
The terminology of IPv6 ND [RFC4861], DHCPv6 [RFC8415] and OMNI [I-D.templin-6man-omni] (including the names of node variables, messages and protocol constants) is used throughout this document. The terms "All-Routers multicast", "All-Nodes multicast", "Solicited-Node multicast" and "Subnet-Router anycast" are defined in [RFC4291]. Also, the term "IP" is used to generically refer to either Internet Protocol version, i.e., IPv4 [RFC0791] or IPv6 [RFC8200].¶
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119][RFC8174] when, and only when, they appear in all capitals, as shown here.¶
The following sections specify the operation of IP over OMNI links using the AERO service:¶
AERO Hosts configure an OMNI interface over an underlay interface connected to a Client's ENET and coordinate with both other AERO Hosts and Clients over the ENET. As an implementation matter, the Host either assigns the same (MNP-based) IP address from the underlay interface to the OMNI interface, or configures the "OMNI interface" as a virtual sublayer of the underlay interface itself. AERO Hosts treat the ENET as an ANET, and treat the upstream Client for the ENET as a Proxy/Server. AERO Hosts are seen as OMNI link termination endpoints.¶
AERO Clients can be deployed as fixed infrastructure nodes close to end systems, or as Mobile Nodes (MNs) that can change their network attachment points dynamically. AERO Clients configure OMNI interfaces over underlay interfaces with addresses that may change due to mobility. AERO Clients register their Mobile Network Prefixes (MNPs) with the AERO service, and distribute the MNPs to ENETs (which may connect AERO Hosts and other Clients). AERO Clients provide Proxy/Server-like services for Hosts and other Clients on downstream-attached ENETs.¶
AERO Gateways, Proxy/Servers and Relays are critical infrastructure elements in fixed (i.e., non-mobile) INET deployments and hence have permanent and unchanging INET addresses. Together, they constitute the AERO service which provides an OMNI link virtual overlay for connecting AERO Clients and Hosts. AERO Gateways (together with Proxy/Servers) provide the secured backbone supporting infrastructure for a Segment Routing Topology (SRT) spanning tree for the OMNI link.¶
AERO Gateways forward carrier packets both within the same SRT segment and between disjoint SRT segments based on an IPv6 encapsulation mid-layer known as the OMNI Adaptation Layer (OAL) [I-D.templin-6man-omni]. The OMNI interface and OAL provide a virtual bridging service, since the inner IP TTL/Hop Limit is not decremented. Each Gateway also peers with Proxy/Servers and other Gateways in a dynamic routing protocol instance to provide a Distributed Mobility Management (DMM) service for the list of active MNPs (see Section 3.2.3). Gateways assign one or more Mobility Service Prefixes (MSPs) to the OMNI link and configure secured tunnels with Proxy/Servers, Relays and other Gateways; they further maintain forwarding table entries for each MNP or non-MNP prefix in service on the OMNI link.¶
AERO Proxy/Servers distributed across one or more SRT segments provide default forwarding and mobility/multilink services for AERO Client mobile nodes. Each Proxy/Server also peers with Gateways in a dynamic routing protocol instance to advertise its list of associated MNPs (see Section 3.2.3). Hub Proxy/Servers provide prefix delegation/registration services and track the mobility/multilink profiles of each of their associated Clients, where each delegated prefix becomes an MNP taken from an MSP. Proxy/Servers at ANET/INET boundaries provide a forwarding service for ANET Clients and Hosts to communicate with peers in external INETs, while Proxy/Servers in the open INET provide an authentication service for INET Client IPv6 ND messages but only a secondary forwarding service when the Client cannot forward directly to a peer or Gateway. Source Clients securely coordinate with target Clients by sending control messages via a First-Hop Segment (FHS) Proxy/Server, which forwards them over the SRT spanning tree to a Last-Hop Segment (LHS) Proxy/Server, which in turn forwards them to the target.¶
AERO Relays are Proxy/Servers that provide forwarding services to exchange original IP packets between the OMNI link and nodes on other links/networks. Relays run a dynamic routing protocol to discover any non-MNP prefixes in service on other links/networks, and Relays that connect to larger Internetworks (such as the Internet) may originate default routes. The Relay redistributes OMNI link MSP(s) into other links/networks, and redistributes non-MNP prefixes via OMNI link Gateway BGP peerings.¶
Figure 1 presents the basic OMNI link reference model:¶
In this model:¶
An OMNI link configured over a single underlay network appears as a single unified link with a consistent addressing plan; all nodes on the link can exchange carrier packets via simple L2 encapsulation (i.e., following any necessary NAT traversal) since the underlay is connected. In common practice, however, OMNI links are often configured over an SRT spanning tree that bridges multiple distinct underlay network segments managed under different administrative authorities (e.g., as for worldwide aviation service providers such as ARINC, SITA, Inmarsat, etc.). Individual underlay networks may also be partitioned internally, in which case each internal partition appears as a separate segment.¶
The addressing plan of each SRT segment is consistent internally but will often bear no relation to the addressing plans of other segments. Each segment is also likely to be separated from others by network security devices (e.g., firewalls, proxies, packet filtering gateways, etc.), and disjoint segments often have no common physical link connections. Therefore, nodes can only be assured of exchanging carrier packets directly with correspondents in the same segment, and not with those in other segments. The only means for joining the segments therefore is through inter-domain peerings between AERO Gateways.¶
The OMNI link spans multiple SRT segments using the OMNI Adaptation Layer (OAL) [I-D.templin-6man-omni] to provide the network layer with a virtual abstraction similar to a bridged campus LAN. The OAL is an OMNI interface sublayer that inserts a mid-layer IPv6 encapsulation header for inter-segment forwarding (i.e., bridging) without decrementing the network-layer TTL/Hop Limit of the original IP packet. An example OMNI link SRT is shown in Figure 2:¶
Gateway, Proxy/Server and Relay OMNI interfaces are configured over both secured tunnels and open INET underlay interfaces within their respective SRT segments. Within each segment, Gateways configure "hub-and-spokes" BGP peerings with Proxy/Servers and Relays as "spokes". Adjacent SRT segments are joined by Gateway-to-Gateway peerings to collectively form a spanning tree over the entire SRT. The "secured" spanning tree supports authentication and integrity for critical control plane messages. The "unsecured" spanning tree conveys ordinary carrier packets that do not include security codes and that destinations must treat according to data origin authentication procedures. AERO nodes can employ route optimization to cause carrier packets to take more direct paths between OMNI link neighbors without having to follow strict spanning tree paths.¶
The AERO Multinet service concatenates SRT segments to form larger networks through Gateway-to-Gateway peerings as originally described in the "Catenet Model for Internetworking" [IEN48]; in particular, Figure 2 follows directly from the illustrations in [IEN48-2]. The Catenet model inspired the global public Internet as it is known today, while AERO applies the Catenet concepts to provide true Multinet services for the future architecture.¶
AERO nodes on OMNI links use the Link-Local Address (LLA) prefix fe80::/64 [RFC4291] to assign LLAs used for network-layer addresses in link-scoped IPv6 ND and data messages. AERO Clients use LLAs constructed from MNPs (i.e., "MNP-LLAs") while other AERO nodes use LLAs constructed based on 32-bit Mobility Service ID (MSID) values ("ADM-LLAs") per [I-D.templin-6man-omni]. Non-MNP routes are also represented the same as for MNP-LLAs, but may include a prefix that is not properly covered by an MSP.¶
AERO nodes also use the Unique Local Address (ULA) prefix fd00::/8 followed by a pseudo-random 40-bit OMNI domain identifier to form the prefix {ULA}::/48, then include a 16-bit OMNI link identifier '*' to form the prefix {ULA*}::/64 [RFC4291]. The AERO node then uses the prefix {ULA*}::/64 to form "MNP-ULAs" or "ADM-ULAs" as specified in [I-D.templin-6man-omni] to support OAL addressing. (The prefix {ULA*}::/64 appearing alone and with no suffix represents "default".) AERO Clients also use Temporary ULAs (TMP-ULAs) constructed per [I-D.templin-6man-omni], where the addresses are typically used only in initial control message exchanges until a stable MNP-LLA/ULA is assigned (and may sometimes be used for sustained communications within a local routing region).¶
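As an illustration (not part of the OMNI specification), the {ULA*}::/64 prefix construction described above could be sketched as follows using Python's standard ipaddress module; the 40-bit OMNI domain identifier and 16-bit link identifier values shown are arbitrary examples:¶

   import ipaddress
   import secrets

   def omni_ula_prefix(domain_id40: int, link_id16: int) -> ipaddress.IPv6Network:
       """Compose {ULA*}::/64 = fd00::/8 | 40-bit domain ID | 16-bit link ID."""
       assert domain_id40 < 2**40 and link_id16 < 2**16
       bits = (0xfd << 120) | (domain_id40 << 80) | (link_id16 << 64)
       return ipaddress.IPv6Network((bits, 64))

   # Example: pseudo-random 40-bit OMNI domain identifier, link identifier 0.
   print(omni_ula_prefix(secrets.randbits(40), 0))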
AERO MSPs, MNPs and non-MNP routes are typically based on Global Unicast Addresses (GUAs), but in some cases may be based on private-use addresses. A GUA block is also reserved for OMNI link anycast purposes. See [I-D.templin-6man-omni] for a full specification of LLAs, ULAs and GUAs used by AERO nodes on OMNI links.¶
Finally, AERO Clients and Proxy/Servers configure node identification values as specified in [I-D.templin-6man-omni].¶
The AERO routing system comprises a private Border Gateway Protocol (BGP) [RFC4271] service coordinated between Gateways and Proxy/Servers (Relays also engage in the routing system as simplified Proxy/Servers). The service supports carrier packet forwarding at a layer below IP and does not interact with the public Internet BGP routing system, but supports redistribution of information for other links and networks discovered by Relays.¶
In a reference deployment, each Proxy/Server is configured as an Autonomous System Border Router (ASBR) for a stub Autonomous System (AS) using a 32-bit AS Number (ASN) [RFC4271] that is unique within the BGP instance, and each Proxy/Server further uses eBGP to peer with one or more Gateways but does not peer with other Proxy/Servers. Each SRT segment in the OMNI link must include one or more Gateways in a "hub" AS, which peer with the Proxy/Servers within that segment as "spoke" ASes. All Gateways within the same segment are members of the same hub AS, and use iBGP to maintain a consistent view of all active routes currently in service. The Gateways of different segments peer with one another using eBGP.¶
Gateways maintain forwarding table entries only for the MNP-ULAs corresponding to MNP and non-MNP routes that are currently active, and also maintain black-hole routes for the OMNI link MSPs so that carrier packets destined to non-existent MNP-ULAs are dropped with a Destination Unreachable message returned. In this way, Proxy/Servers and Relays have only partial topology knowledge (i.e., they only maintain routing information for their directly associated Clients and non-AERO links) and they forward all other carrier packets to Gateways which have full topology knowledge.¶
Each OMNI link segment assigns a unique ADM-ULA sub-prefix of {ULA*}::/96 known as the "SRT prefix". For example, a first segment could assign {ULA*}::1000/116, a second could assign {ULA*}::2000/116, a third could assign {ULA*}::3000/116, etc. Within each segment, each Proxy/Server configures an ADM-ULA within the segment's SRT prefix, e.g., the Proxy/Servers within {ULA*}::2000/116 could assign the ADM-ULAs {ULA*}::2011/116, {ULA*}::2026/116, {ULA*}::2003/116, etc.¶
The administrative authorities for each segment must therefore coordinate to assure mutually-exclusive ADM-ULA prefix assignments, but internal provisioning of ADM-ULAs is an independent local consideration for each administrative authority. For each ADM-ULA prefix, the Gateway(s) that connect that segment assign the all-zeros address of the prefix as a Subnet Router Anycast address. For example, the Subnet Router Anycast address for {ULA*}::1023/116 is simply {ULA*}::1000.¶
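A minimal sketch of the all-zeros (Subnet Router Anycast) computation above, assuming Python's ipaddress module and using an arbitrary value in place of {ULA*}:¶

   import ipaddress

   def subnet_router_anycast(adm_ula: str,
                             srt_prefix_len: int = 116) -> ipaddress.IPv6Address:
       """Return the all-zeros address of the SRT prefix covering an ADM-ULA."""
       net = ipaddress.IPv6Network((adm_ula, srt_prefix_len), strict=False)
       return net.network_address

   # With "fd00:1111:2222:3333" standing in for {ULA*}, the ADM-ULA ::1023
   # yields the Subnet Router Anycast address ::1000 as in the example above.
   print(subnet_router_anycast("fd00:1111:2222:3333::1023"))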
ADM-ULA prefixes are statically represented in Gateway forwarding tables. Gateways join multiple SRT segments into a unified OMNI link over multiple diverse network administrative domains. They support a virtual bridging service by first establishing forwarding table entries for their ADM-ULA prefixes either via standard BGP routing or static routes. For example, if three Gateways ('A', 'B' and 'C') from different segments serviced {ULA*}::1000/116, {ULA*}::2000/116 and {ULA*}::3000/116 respectively, then the forwarding tables in each Gateway appear as follows:¶
These forwarding table entries rarely change, since they correspond to fixed infrastructure elements in their respective segments.¶
MNP (and non-MNP) ULAs are instead dynamically advertised in the AERO routing system by Proxy/Servers and Relays that provide service for their corresponding MNPs. For example, if three Proxy/Servers ('D', 'E' and 'F') service the MNPs 2001:db8:1000:2000::/56, 2001:db8:3000:4000::/56 and 2001:db8:5000:6000::/56 then the routing system would include:¶
A full discussion of the BGP-based routing system used by AERO is found in [I-D.ietf-rtgwg-atn-bgp].¶
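For illustration only (the route listings referenced above are not reproduced here), the static segment routes and the dynamically advertised MNP routes named in these examples could be summarized as simple tables; the next-hop labels are placeholders:¶

   # Illustrative only: the segment prefixes named in the text above, expressed
   # as a simple route dictionary ("local" means the prefix is serviced within
   # the Gateway's own segment).
   gateway_a_segment_routes = {
       "{ULA*}::1000/116": "local",
       "{ULA*}::2000/116": "Gateway B",
       "{ULA*}::3000/116": "Gateway C",
   }

   # MNP routes are advertised dynamically by the Proxy/Servers that service
   # each MNP (Proxy/Servers D, E and F in the example). In the routing system
   # these appear as the corresponding MNP-ULAs; the GUA MNPs are shown here
   # only as labels.
   mnp_routes = {
       "2001:db8:1000:2000::/56": "Proxy/Server D",
       "2001:db8:3000:4000::/56": "Proxy/Server E",
       "2001:db8:5000:6000::/56": "Proxy/Server F",
   }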
The 64-bit sub-prefixes of {ULA}::/48 identify up to 2^16 distinct Segment Routing Topologies (SRTs). Each SRT is a mutually-exclusive OMNI link overlay instance using a distinct set of ULAs, and emulates a bridged campus LAN service for the OMNI link. In some cases (e.g., when redundant topologies are needed for fault tolerance and reliability) it may be beneficial to deploy multiple SRTs that act as independent overlay instances. A communication failure in one instance therefore will not affect communications in other instances.¶
Each SRT is identified by a distinct value in bits 48-63 of {ULA}::/48, i.e., as {ULA}::/64, {ULA}:1::/64, {ULA}:2::/64, etc. Each OMNI interface is identified by a unique interface name (e.g., omni0, omni1, omni2, etc.) and assigns an OMNI IPv6 anycast address used for OMNI interface determination in Safety-Based Multilink (SBM) as discussed in [I-D.templin-6man-omni]. Each OMNI interface further applies Performance-Based Multilink (PBM) internally.¶
The Gateways and Proxy/Servers of each independent SRT engage in BGP peerings to form a spanning tree with the Gateways in non-leaf nodes and the Proxy/Servers in leaf nodes. The spanning tree is configured over both secured and unsecured underlay network paths. The secured spanning tree is used to convey secured control messages between Proxy/Servers and Gateways, while the unsecured spanning tree forwards data messages and/or unsecured control messages.¶
Each SRT segment is identified by a unique ADM-ULA prefix used by all Proxy/Servers and Gateways in the segment. Each AERO node must therefore discover an SRT prefix that correspondents can use to determine the correct segment, and must publish the SRT prefix in IPv6 ND messages.¶
Original IPv6 sources can direct IPv6 packets to an AERO node by including a standard IPv6 Segment Routing Header (SRH) [RFC8754] with the OMNI IPv6 anycast address for the selected OMNI link as either the IPv6 destination or as an intermediate hop within the SRH. This allows the original source to determine the specific OMNI link SRT an original IPv6 packet will traverse when there may be multiple alternatives.¶
When an AERO node processes the SRH and forwards the original IPv6 packet to the correct OMNI interface, the OMNI interface writes the next IPv6 address from the SRH into the IPv6 destination address and decrements Segments Left. If decrementing would cause Segments Left to become 0, the OMNI interface deletes the SRH before forwarding. This form of Segment Routing supports Safety-Based Multilink (SBM).¶
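A minimal sketch of the SRH handling just described, assuming a simplified in-memory view of the SRH rather than [RFC8754] wire parsing:¶

   from dataclasses import dataclass
   from typing import List, Optional, Tuple

   @dataclass
   class SRH:                      # simplified in-memory view of an IPv6 SRH
       segments: List[str]         # segment list; index 0 is the final destination
       segments_left: int

   def process_srh(dst: str, srh: Optional[SRH]) -> Tuple[str, Optional[SRH]]:
       """Copy the next SRH segment into the IPv6 destination, decrement
       Segments Left, and delete the SRH once Segments Left reaches 0."""
       if srh is None or srh.segments_left == 0:
           return dst, srh
       srh.segments_left -= 1
       dst = srh.segments[srh.segments_left]
       if srh.segments_left == 0:
           return dst, None        # SRH deleted before forwarding
       return dst, srh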
OMNI interfaces are virtual interfaces configured over one or more underlay interfaces classified as follows:¶
OMNI interfaces use OAL encapsulation and fragmentation as discussed in Section 3.6. OMNI interfaces use L2 encapsulation (see: Section 3.6) to exchange carrier packets with OMNI link neighbors over INET or VPNed interfaces as well as over ANET interfaces for which the Client and FHS Proxy/Server may be multiple IP hops away. OMNI interfaces use link-layer encapsulation only (i.e., no other L2 encapsulations) over Direct underlay interfaces or ANET interfaces when the Client and FHS Proxy/Server are known to be on the same underlay link.¶
OMNI interfaces maintain a neighbor cache for tracking per-neighbor state the same as for any interface. OMNI interfaces use IPv6 ND messages including Router Solicitation (RS), Router Advertisement (RA), Neighbor Solicitation (NS), Neighbor Advertisement (NA) and Redirect for neighbor cache management. In environments where spoofing may be a threat, OMNI neighbors should invoke OAL Identification window synchronization in their IPv6 ND message exchanges.¶
OMNI interfaces send IPv6 ND messages with an OMNI option formatted as specified in [I-D.templin-6man-omni]. The OMNI option includes prefix registration information, Interface Attributes and/or Multilink Forwarding Parameters containing link information parameters for the OMNI interface's underlay interfaces and any other per-neighbor information.¶
A Host's OMNI interface is configured over an underlay interface connected to an ENET provided by an upstream Client. From the Host's perspective, the ENET appears as an ANET and the upstream Client appears as a Proxy/Server. The Host does not provide OMNI intermediate node services and is therefore a logical termination point for the OMNI link.¶
A Client's OMNI interface may be configured over multiple ANET/INET underlay interfaces. For example, common mobile handheld devices have both wireless local area network ("WLAN") and cellular wireless links. These links are often used "one at a time" with low-cost WLAN preferred and highly-available cellular wireless as a standby, but a simultaneous-use capability could provide benefits. In a more complex example, aircraft frequently have many wireless data link types (e.g., satellite-based, cellular, terrestrial, air-to-air directional, etc.) with diverse performance and cost properties.¶
If a Client's multiple ANET/INET underlay interfaces are used "one at a time" (i.e., all other interfaces are in standby mode while one interface is active), then successive IPv6 ND messages all include OMNI option Multilink Forwarding Parameters sub-options with the same underlay interface index. In that case, the Client would appear to have a single underlay interface but with a dynamically changing link-layer address.¶
If the Client has multiple active ANET/INET underlay interfaces, then from the perspective of IPv6 ND it would appear to have multiple link-layer addresses. In that case, IPv6 ND message OMNI options MAY include Interface Attributes and/or Multilink Forwarding Parameters sub-options with different underlay interface indexes.¶
Proxy/Servers on the open Internet include only a single INET underlay interface. INET Clients therefore discover only the INADDR information for the Proxy/Server's INET interface. Proxy/Servers on an ANET/INET boundary include both an ANET and INET underlay interface. ANET Clients therefore must discover both the ANET and INET INADDR information for the Proxy/Server.¶
Gateway and Proxy/Server OMNI interfaces are configured over underlay interfaces that provide both secured tunnels for carrying IPv6 ND and BGP protocol control plane messages and open INET access for carrying unsecured messages. The OMNI interface configures both an ADM-LLA and its corresponding ADM-ULA, and acts as an OAL source to encapsulate and fragment original IP packets while presenting the resulting carrier packets over the secured or unsecured underlay paths. Note that Gateway and Proxy/Server end-to-end transport protocol sessions used by the BGP are run directly over the OMNI interface and use ADM-ULA source and destination addresses. The OMNI interface employs the OAL to encapsulate the original IP packets for these sessions as carrier packets (i.e., even though the OAL header may use the same ADM-ULAs as the original IP header) and forwards them over the secured underlay path.¶
AERO Proxy/Servers, Clients and Hosts configure OMNI interfaces as their point of attachment to the OMNI link. AERO nodes assign the MSPs for the link to their OMNI interfaces (i.e., as a "route-to-interface") to ensure that original IP packets with destination addresses covered by an MNP not explicitly associated with another interface are directed to an OMNI interface.¶
OMNI interface initialization procedures for Proxy/Servers, Clients, Hosts and Gateways are discussed in the following sections.¶
When a Proxy/Server enables an OMNI interface, it assigns an ADM-{LLA,ULA} appropriate for the given OMNI link SRT segment. The Proxy/Server also configures secured tunnels and engages in BGP routing protocol sessions with one or more neighboring Gateways.¶
The OMNI interface provides a single interface abstraction to the IP layer, but internally includes an NBMA nexus for sending carrier packets to OMNI interface neighbors over underlay INET interfaces and secured tunnels. The Proxy/Server further configures a service to facilitate IPv6 ND exchanges with AERO Clients and manages per-Client neighbor cache entries and IP forwarding table entries based on control message exchanges.¶
Relays are simply Proxy/Servers that run a dynamic routing protocol to redistribute routes between the OMNI interface and INET/ENET interfaces (see: Section 3.2.3). The Relay provisions MNPs to networks on the INET/ENET interfaces (i.e., the same as a Client would do) and advertises the MSP(s) for the OMNI link over the INET/ENET interfaces. The Relay further provides an attachment point of the OMNI link to a non-MNP-based global topology.¶
When a Client enables an OMNI interface, it assigns either an MNP-{LLA, ULA} or a TMP-ULA and sends OMNI-encapsulated RS messages over its ANET/INET underlay interfaces to an FHS Proxy/Server, which coordinates with a Hub Proxy/Server that returns an RA message with corresponding parameters. The RS/RA messages may pass through one or more NATs in the path between the Client and FHS Proxy/Server. (Note: if the Client used a TMP-ULA in its initial RS message, it will discover an MNP-{LLA,ULA} in the corresponding RA that it receives from the FHS Proxy/Server and begin using these new addresses. If the Client is operating outside the context of AERO infrastructure such as in a Mobile Ad-hoc Network (MANET), however, it may continue using TMP-ULAs for Client-to-Client communications until it encounters an infrastructure element that can delegate an MNP.)¶
Clients further extend the OMNI interface over their (downstream) ENET interfaces, where they act as first-hop routers for Hosts and other AERO Clients connected to the ENET. A downstream Client that connects via the ENET serviced by an upstream Client can in turn service further downstream ENETs that connect other Hosts and Clients. This "Client-to-Client chaining" can be applied recursively to further extend the OMNI link.¶
When a Host enables an OMNI interface, it assigns an address taken from the ENET underlay interface which may itself be a GUA delegated by the upstream Client. The Host does not assign a link-local address to the OMNI interface, since no autoconfiguration is necessary on that interface. (As an implementation matter, the Host could instead configure the "OMNI interface" as a virtual sublayer of the ENET underlay interface itself.)¶
The Host sends OMNI-encapsulated RS messages over its ENET underlay interface to the upstream Client, which returns encapsulated RAs and provides routing services in the same fashion that Proxy/Servers provide services for Clients. Hosts represent the leaf end systems in recursively-nested hierarchies of concatenated ENETs, i.e., they represent terminating endpoints for the OMNI link.¶
AERO Gateways configure an OMNI interface and assign an ADM-ULA and corresponding Subnet Router Anycast address for each OMNI link SRT segment they connect to. Gateways configure secured tunnels with Proxy/Servers in the same SRT segment and other Gateways in the same (or an adjacent) SRT segment. Gateways then engage in a BGP routing protocol session with neighbors over the secured spanning tree (see: Section 3.2.3).¶
Each Client, Proxy/Server and Gateway OMNI interface maintains a conceptual neighbor cache that includes a Neighbor Cache Entry (NCE) for each of its active neighbors on the OMNI link per [RFC4861]. Each NCE is indexed by the LLA of the neighbor, while the OAL encapsulation ULA determines the context for Identification verification. Clients and Proxy/Servers maintain NCEs through RS/RA exchanges, and also maintain NCEs for any active correspondent peers through NS/NA exchanges.¶
Hosts also maintain NCEs for Clients and other Hosts through the exchange of RS/RA or NS/NA messages. Each NCE is indexed by the non-LLA address assigned to the Host ENET interface, which is the same address used for OMNI L2 encapsulation (i.e., without the insertion of an OAL header). The non-LLA encapsulation format identifies the NCE as a Host-based entry where the Host is a leaf end system in the recursively extended OMNI link.¶
Gateways also maintain NCEs for Clients within their local segments based on NS/NA route optimization messaging (see: Section 3.13.4). When a Gateway creates/updates a NCE for a local segment Client based on NS/NA route optimization, it also maintains MFVI and INADDR state for messages destined to this local segment Client.¶
Proxy/Servers add an additional state DEPARTED to the list of NCE states found in Section 7.3.2 of [RFC4861]. When a Client terminates its association, the Proxy/Server OMNI interface sets a "DepartTime" variable for the NCE to "DEPART_TIME" seconds. DepartTime is decremented unless a new IPv6 ND message causes the state to return to REACHABLE. While a NCE is in the DEPARTED state, the Proxy/Server forwards carrier packets destined to the target Client to the Client's new FHS/Hub Proxy/Server instead. It is RECOMMENDED that DEPART_TIME be set to the default constant value 10 seconds to accept any carrier packets that may be in flight. When DepartTime decrements to 0, the NCE is deleted.¶
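A minimal sketch of the DEPARTED state handling described above; the class and method names are illustrative and not taken from any implementation:¶

   import time

   DEPART_TIME = 10        # RECOMMENDED default, in seconds

   class NeighborCacheEntry:
       def __init__(self, lla):
           self.lla = lla
           self.state = "REACHABLE"
           self.depart_expiry = 0.0        # absolute expiry time when DEPARTED
           self.new_proxy_server = None    # forwarding target while DEPARTED

       def mark_departed(self, new_proxy_server):
           """Client terminated its association; hold the NCE for DEPART_TIME
           seconds so in-flight carrier packets can be redirected."""
           self.state = "DEPARTED"
           self.new_proxy_server = new_proxy_server
           self.depart_expiry = time.monotonic() + DEPART_TIME

       def refresh(self):
           """A new IPv6 ND message returns the NCE to REACHABLE."""
           self.state = "REACHABLE"
           self.new_proxy_server = None

       def expired(self) -> bool:
           """True when a DEPARTED entry should be deleted."""
           return self.state == "DEPARTED" and time.monotonic() >= self.depart_expiry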
Clients determine the service profiles for their FHS and Hub Proxy/Servers by setting the N/A/U flags in a Neighbor Coordination sub-option of the first OMNI option in RS messages. When the N/A/U flags are clear, Proxy/Servers forward all NS/NA messages to the Client, while the Client performs mobility update signaling through the transmission of uNA messages to all active neighbors following a mobility event. However, in some environments this may result in excessive NS/NA control message overhead especially for Clients connected to low-end data links.¶
To minimize NS/NA message overhead, Clients can set the N/A/U flags in the OMNI option extension header of RS messages they send. If the N flag is set, the FHS Proxy/Server that forwards the RS message assumes the role of responding to NS(NUD) messages and maintains peer NCEs associated with the NCE for this Client. If the A flag is set, the Hub Proxy/Server that processes the RS message assumes the role of responding to NS(AR) messages on behalf of this Client NCE. If the U flag is set, the Hub Proxy/Server that processes the RS message becomes responsible for maintaining a "Report List" of sources from which it has received an NS(AR) for this Client NCE. The Hub Proxy/Server maintains each Report List entry for REPORT_TIME seconds, and sends uNA messages to each member of the Report List when it receives a Client mobility update indication (e.g., through receipt of an RS with updated Interface Attributes, Traffic Selectors, etc.).¶
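The Report List behavior could be sketched as follows; the names are hypothetical and REPORT_TIME is treated as a configured parameter:¶

   import time

   class ReportList:
       """Per-Client list of NS(AR) sources, kept by the Hub Proxy/Server when
       the Client sets the U flag."""
       def __init__(self, report_time: float):
           self.report_time = report_time
           self.entries = {}                      # ROS address -> expiry time

       def record_ns_ar(self, ros_addr: str) -> None:
           """Add or refresh an entry when an NS(AR) is received for this Client."""
           self.entries[ros_addr] = time.monotonic() + self.report_time

       def mobility_update_targets(self) -> list:
           """Return the current (unexpired) members that should receive uNAs
           when a mobility update indication is received for this Client."""
           now = time.monotonic()
           self.entries = {a: t for a, t in self.entries.items() if t > now}
           return list(self.entries)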
Clients and their Hub Proxy/Servers have full knowledge of the Client's current underlay Interface Attributes, while FHS Proxy/Servers acting in "proxy" mode have knowledge of only the individual Client underlay interfaces they service. Clients determine their FHS and Hub Proxy/Server service models by setting the N/A/U flags in the RS messages they send as discussed above.¶
Clients act as RORs on their own behalf when they receive an NS(AR) from an ROS via their Hub Proxy/Server (Relays instead act as RORs on behalf of non-MNP targets specific to other links/networks that the Relay services and/or "default"). The ROR returns an NA(AR) response to the ROS, which creates or updates a NCE for the target network-layer and link-layer addresses. The ROS then (re)sets ReachableTime for the NCE to REACHABLE_TIME seconds and performs reachability tests over specific underlay interface pairs to determine paths for forwarding carrier packets directly to the target. The ROS otherwise decrements ReachableTime while no further solicited NA messages arrive. It is RECOMMENDED that REACHABLE_TIME be set to the default constant value 30 seconds as specified in [RFC4861].¶
AERO nodes also use the value MAX_UNICAST_SOLICIT to limit the number of NS messages sent when a correspondent may have gone unreachable, the value MAX_RTR_SOLICITATIONS to limit the number of RS messages sent without receiving an RA and the value MAX_NEIGHBOR_ADVERTISEMENT to limit the number of unsolicited NAs that can be sent based on a single event. It is RECOMMENDED that MAX_UNICAST_SOLICIT, MAX_RTR_SOLICITATIONS and MAX_NEIGHBOR_ADVERTISEMENT be set to 3, the same as specified in [RFC4861].¶
Different values for the above constants MAY be administratively set; however, if different values are chosen, all nodes on the link MUST consistently configure the same values. Most importantly, DEPART_TIME and REPORT_TIME SHOULD be set to a value that is sufficiently longer than REACHABLE_TIME to avoid packet loss due to stale route optimization state.¶
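For reference, the constants discussed in this section can be collected as follows (a sketch; REPORT_TIME is omitted because no default value is given here):¶

   # Default values for the constants discussed above.
   REACHABLE_TIME = 30              # seconds, per [RFC4861]
   DEPART_TIME = 10                 # seconds, RECOMMENDED default above
   MAX_UNICAST_SOLICIT = 3          # per [RFC4861]
   MAX_RTR_SOLICITATIONS = 3        # per [RFC4861]
   MAX_NEIGHBOR_ADVERTISEMENT = 3   # per [RFC4861]
   # REPORT_TIME: no default is given in this section; like DEPART_TIME it
   # should be sufficiently longer than REACHABLE_TIME.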
OMNI interfaces prepare IPv6 ND messages the same as for standard IPv6 ND, but also include a new option type termed the OMNI option [I-D.templin-6man-omni]. For each IPv6 ND message, OMNI interfaces include one or more OMNI options (and any other ND message options) then completely populate all option information. If the OMNI interface includes an authentication signature, it sets the IPv6 ND message Checksum field to 0 and calculates the authentication signature over the entire length of the OAL packet or super-packet (beginning with a pseudo-header of the IPv6 header) but does not then proceed to calculate the IPv6 ND message checksum itself. Otherwise, the OMNI interface calculates the standard IPv6 ND message checksum over the OAL packet or super-packet and writes the value in the Checksum field. OMNI interfaces verify authentication and integrity of each IPv6 ND message received according to the specific check(s) included, and process the message further only following verification.¶
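A sketch of the checksum rule just described; the HMAC shown is only a placeholder for whatever authentication signature the OMNI option actually carries:¶

   import hashlib
   import hmac
   from typing import Optional, Tuple

   def internet_checksum(data: bytes) -> int:
       """16-bit one's complement checksum over the given octets."""
       if len(data) % 2:
           data += b"\x00"
       total = sum(int.from_bytes(data[i:i + 2], "big")
                   for i in range(0, len(data), 2))
       while total >> 16:
           total = (total & 0xffff) + (total >> 16)
       return ~total & 0xffff

   def nd_checksum_fields(pseudo_header: bytes, oal_packet: bytes,
                          auth_key: Optional[bytes]) -> Tuple[int, Optional[bytes]]:
       """With an authentication signature the ND Checksum field is set to 0 and
       the signature covers the whole OAL packet beginning with the pseudo-header;
       otherwise the standard checksum is computed and written instead."""
       if auth_key is not None:
           signature = hmac.new(auth_key, pseudo_header + oal_packet,
                                hashlib.sha256).digest()   # placeholder signature
           return 0, signature
       return internet_checksum(pseudo_header + oal_packet), None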
OMNI options include per-neighbor information that provides multilink forwarding, link-layer address and traffic selector information for the neighbor's underlay interfaces. This information is stored in the neighbor cache and provides the basis for the forwarding algorithm specified in Section 3.10. The information is cumulative and reflects the union of the OMNI information from the most recent IPv6 ND messages received from the neighbor; it is therefore not required that each IPv6 ND message contain all neighbor information.¶
The OMNI option is distinct from any Source/Target Link-Layer Address Options (S/TLLAOs) that may appear in an IPv6 ND message according to the appropriate IPv6 over specific link layer specification (e.g., [RFC2464]). If both an OMNI option and S/TLLAO appear, the former pertains to encapsulation addresses while the latter pertains to the native L2 address format of the underlay media.¶
OMNI interface IPv6 ND messages may also include other IPv6 ND options. In particular, solicitation messages may include a Nonce option if required for verification of advertisement replies. If an OMNI IPv6 ND solicitation message includes a Nonce option, the advertisement reply must echo the same Nonce. If an OMNI IPv6 ND advertisement message includes a Timestamp option, the recipient should check the Timestamp to determine if the message is current.¶
AERO Clients send RS messages to the link-scoped All-Routers multicast address or an ADM-LLA while using unicast or anycast L2 addresses. AERO Proxy/Servers respond by returning unicast RA messages. During the RS/RA exchange, AERO Clients and Proxy/Servers include state synchronization parameters to establish Identification windows and other state.¶
AERO Hosts and Clients on ENET underlay networks send RS messages to the link-scoped All-Routers multicast address, an ADM-LLA of a remote Hub Proxy/Server or the MNP-LLA of an upstream Client while using unicast or anycast L2 addresses. The upstream AERO Client responds by returning a unicast RA message.¶
AERO nodes use NS/NA messages for the following purposes:¶
Additionally, nodes may set the OMNI option PNG flag in NA/RA messages to receive a uNA response from the neighbor. The uNA response MUST set the ACK flag (without also setting the SYN or PNG flags) with the Acknowledgement field set to the Identification used in the PNG message.¶
As discussed in Section 4.4 of [RFC4861], NA messages include three flag bits: R, S and O. OMNI interface NA messages treat the flags as follows:¶
In secured environments (e.g., between secured spanning tree neighbors, between neighbors on the same secured ANET, etc.), OMNI interface neighbors can exchange OAL packets using randomly-initialized and monotonically-increasing Identification values (modulo 2**32) without window synchronization. In environments where spoofing is considered a threat, OMNI interface neighbors instead invoke window synchronization in NS/NA message exchanges to maintain send/receive window state in their respective neighbor cache entries as specified in [I-D.templin-6man-omni].¶
When the network layer forwards an original IP packet into an OMNI interface, the interface locates or creates a Neighbor Cache Entry (NCE) that matches the destination. The OMNI interface then invokes the OMNI Adaptation Layer (OAL) as discussed in [I-D.templin-6man-omni], which encapsulates the packet in an IPv6 header to produce an OAL packet. The OAL source then calculates a 2-octet checksum and fragments the OAL packet, including in each fragment an identical Identification value that must be within the window for the LHS Proxy/Server or the target Client itself. The OAL source finally includes the checksum as the final 2 octets of the final fragment, i.e., as a "trailer".¶
The OAL source next includes with each fragment (if necessary) an identical Compressed Routing Header with 32-bit ID fields (CRH-32) [I-D.bonica-6man-comp-rtg-hdr] containing one or more Multilink Forwarding Vector Indices (MFVIs) as discussed in Section 3.13. The OAL source can instead invoke OAL header compression by replacing the OAL IPv6 header, CRH-32 and Fragment Header with an OAL Compressed Header (OCH).¶
The OAL source finally encapsulates each resulting OAL fragment in L2 headers to form an OAL carrier packet, with source address set to its own L2 address (e.g., 192.0.2.100) and destination set to the L2 address of the next hop OAL intermediate node or destination (e.g., 192.0.2.1). The carrier packet encapsulation format in the above example is shown in Figure 3:¶
Note: the carrier packets exchanged by Hosts on ENETs do not include the OAL IPv6 or CRH-32 headers, i.e., the OAL encapsulation is NULL and only the Fragment Header and L2 encapsulations are included.¶
In this format, the OAL source encapsulates the original IP header and packet body/fragment in an OAL IPv6 header prepared according to [RFC2473], the CRH-32 is a Routing Header extension of the OAL header, the Fragment Header identifies each fragment, and the L2 headers are prepared as discussed in [I-D.templin-6man-omni]. The OAL source transmits each such carrier packet into the SRT spanning tree, where they are forwarded over possibly multiple OAL intermediate nodes until they arrive at the OAL destination.¶
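A minimal sketch of the fragmentation-with-trailer behavior described above; the sizes, tuple representation and precomputed trailer are simplified placeholders rather than the OMNI wire format:¶

   from typing import List, Tuple

   def oal_fragment(oal_packet: bytes, trailer: bytes, mps: int,
                    identification: int) -> List[Tuple[int, bytes]]:
       """Split an OAL packet into (Identification, payload) fragments of at most
       'mps' octets, all carrying the same Identification, and append the 2-octet
       checksum 'trailer' to the final fragment (trailer size accounting and the
       actual header formats are ignored in this sketch)."""
       fragments = [(identification, oal_packet[i:i + mps])
                    for i in range(0, len(oal_packet), mps)]
       if not fragments:
           return [(identification, trailer)]
       last_id, last_payload = fragments[-1]
       fragments[-1] = (last_id, last_payload + trailer)
       return fragments

   # Example: a 300-octet OAL packet split at a 128-octet MPS yields three
   # fragments with the same Identification; the third ends with the trailer.
   frags = oal_fragment(b"\x00" * 300, b"\xab\xcd", 128, identification=0x12345678)
   assert [len(p) for _, p in frags] == [128, 128, 46]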
The OMNI link control plane service distributes Client MNP-ULA prefix information that may change dynamically due to regional node mobility as well as Relay non-MNP-ULA and per-segment ADM-ULA prefix information that rarely changes. OMNI link Gateways and Proxy/Servers use the information to establish and maintain a forwarding plane spanning tree that connects all nodes on the link. The spanning tree supports a carrier packet virtual bridging service according to link-layer (instead of network-layer) information, but may often include longer paths than necessary.¶
Each OMNI interface therefore also includes a Multilink Forwarding Information Base (MFIB) with Multilink Forwarding Vectors (MFVs) that can often provide more direct forwarding "shortcuts" that avoid strict spanning tree paths. As a result, the spanning tree is always available but OMNI interfaces can often use the MFIB to greatly improve performance and reduce load on critical infrastructure elements.¶
For carrier packets undergoing re-encapsulation at an OAL intermediate node, the OMNI interface decrements the OAL IPv6 header Hop Limit and discards the carrier packet if the Hop Limit reaches 0. The intermediate node next removes the L2 encapsulation headers from the first segment and re-encapsulates the packet in new L2 encapsulation headers for the next segment.¶
When an FHS Gateway receives a carrier packet with an OCH header that must be forwarded to an LHS Gateway over the unsecured spanning tree, it reconstructs the headers based on MFV state, inserts a CRH-32 immediately following the OAL header and adjusts the OAL payload length and destination address field. The FHS Gateway includes a single MFVI in the CRH-32 that will be meaningful to the LHS Gateway. When the LHS Gateway receives the carrier packet, it locates the MFV for the next hop based on the CRH-32 MFVI then re-applies header compression (resulting in the removal of the CRH-32) and forwards the carrier packet to the next hop.¶
OMNI interfaces (acting as OAL destinations) decapsulate and reassemble OAL packets into original IP packets destined either to the AERO node itself or to a destination reached via an interface other than the OMNI interface the original IP packet was received on. When carrier packets containing OAL fragments addressed to itself arrive, this OAL destination discards the NET encapsulation headers and reassembles to obtain the OAL packet or super-packet (see: [I-D.templin-6man-omni]). The OAL destination then verifies the OAL checksum, discards the OAL encapsulations to obtain the original IP packet(s) and finally forwards them to either the network layer or a next-hop on the OMNI link.¶
AERO nodes employ simple data origin authentication procedures. In particular:¶
AERO nodes silently drop any packets that do not satisfy the above data origin authentication procedures. Further security considerations are discussed in Section 6.¶
The OMNI interface observes the link nature of tunnels, including the Maximum Transmission Unit (MTU), Maximum Reassembly Unit (MRU) and the role of fragmentation and reassembly [I-D.ietf-intarea-tunnels]. The OMNI interface employs an OMNI Adaptation Layer (OAL) that accommodates multiple underlay links with diverse MTUs while observing both a minimum and per-path Maximum Payload Size (MPS). The functions of the OAL and OMNI interface MTU/MRU/MPS considerations are specified in [I-D.templin-6man-omni]. (Note that the OMNI interface MTU can in some sense be considered as "unlimited" since the OMNI interface accepts all packets regardless of their size.)¶
When the network layer presents an original IP packet to the OMNI interface, the OAL source encapsulates and fragments the original IP packet if necessary. When the network layer presents the OMNI interface with multiple original IP packets bound to the same OAL destination, the OAL source can concatenate them as a single OAL super-packet as discussed in [I-D.templin-6man-omni] before applying fragmentation. The OAL source then encapsulates each OAL fragment in L2 headers for transmission as carrier packets over an underlay interface connected to either a physical link (e.g., Ethernet, WiFi, Cellular, etc.) or a virtual link such as an Internet or higher-layer tunnel (see the definition of link in [RFC8200]).¶
Note: Although a CRH-32 may be inserted or removed by a Gateway in the path (see: Section 3.10.4), this does not interfere with the destination's ability to reassemble since the CRH-32 is not included in the fragmentable part and its removal/transformation does not invalidate fragment header information.¶
Original IP packets enter a node's OMNI interface either from the network layer (i.e., from a local application or the IP forwarding system) while carrier packets enter from the link layer (i.e., from an OMNI interface neighbor). All original IP packets and carrier packets entering a node's OMNI interface first undergo data origin authentication as discussed in Section 3.8. Those that satisfy data origin authentication are processed further, while all others are dropped silently.¶
Original IP packets that enter the OMNI interface from the network layer are forwarded to an OMNI interface neighbor using OAL encapsulation and fragmentation to produce carrier packets for transmission over underlay interfaces. (If routing indicates that the original IP packet should instead be forwarded back to the network layer, the packet is dropped to avoid looping). Carrier packets that enter the OMNI interface from the link layer are either re-encapsulated and re-admitted into the OMNI link, or reassembled and forwarded to the network layer where they are subject to either local delivery or IP forwarding. In all cases, the OAL MUST NOT decrement the original IP packet TTL/Hop-count since its forwarding actions occur below the network layer.¶
OMNI interfaces may have multiple underlay interfaces and/or neighbor cache entries for neighbors with multiple underlay interfaces (see Section 3.3). The OAL uses Interface Attributes and/or Traffic Selectors (e.g., port number, flow specification, etc.) to select an outbound underlay interface for each OAL packet and also to select segment routing and/or link-layer destination addresses based on the neighbor's underlay interfaces. AERO implementations SHOULD permit network management to dynamically adjust Traffic Selector values at runtime.¶
If an OAL packet matches the Traffic Selectors of multiple outgoing interfaces and/or neighbor interfaces, the OMNI interface replicates the packet and sends one copy via each of the (outgoing / neighbor) interface pairs; otherwise, it sends a single copy of the OAL packet via an interface with the best matching Traffic Selector. (While not strictly required, the likelihood of successful reassembly may improve when the OMNI interface sends all fragments of the same fragmented OAL packet consecutively over the same underlay interface pair to avoid complicating factors such as delay variance and reordering.) AERO nodes keep track of which underlay interfaces are currently "reachable" or "unreachable", and only use "reachable" interfaces for forwarding purposes.¶
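The selection and replication rule above could be sketched as follows, assuming each underlay interface pair carries a reachability flag and a Traffic Selector match score (names hypothetical):¶

   from dataclasses import dataclass
   from typing import List

   @dataclass
   class UnderlayPair:
       name: str              # (outgoing interface, neighbor interface) label
       reachable: bool
       selector_match: int    # 0 = no Traffic Selector match; higher = better

   def select_underlay_pairs(pairs: List[UnderlayPair]) -> List[UnderlayPair]:
       """Return the pairs an OAL packet is sent over: every reachable pair whose
       Traffic Selectors match (replication), otherwise the single reachable pair
       with the best match. Unreachable interfaces are never used."""
       usable = [p for p in pairs if p.reachable]
       matching = [p for p in usable if p.selector_match > 0]
       if len(matching) > 1:
           return matching                    # replicate: one copy per pair
       best = max(usable, key=lambda p: p.selector_match, default=None)
       return [best] if best else []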
The following sections discuss the OMNI interface forwarding algorithms for Hosts, Clients, Proxy/Servers and Gateways. In the following discussion, an original IP packet's destination address is said to "match" if it is the same as a cached address, or if it is covered by a cached prefix (which may be encoded in an MNP-LLA).¶
When an original IP packet enters a Host's OMNI interface from the network layer the Host searches for a NCE that matches the destination. If there is a matching NCE, the Host performs L2 encapsulation, fragments the encapsulated packet if necessary and forwards the packets into the ENET addressed to the L2 address of the neighbor.¶
After sending a packet, the Host may receive a Redirect message from its upstream Client to inform it of another AERO node on the same ENET that would provide a better first hop. The Host authenticates the Redirect message, then updates its neighbor cache accordingly.¶
When an original IP packet enters a Client's OMNI interface from the network layer the Client searches for a NCE that matches the destination. If there is a matching NCE on an ANET/INET interface (i.e., an upstream interface), the Client selects one or more "reachable" neighbor interfaces in the entry for forwarding purposes. Otherwise, the Client invokes route optimization per Section 3.13 and follows the multilink forwarding procedures outlined there. If there is a matching NCE on an ENET interface (i.e., a downstream interface), the Client instead performs OAL and/or L2 encapsulation and forwards the packet to the downstream Host or Client.¶
When a carrier packet enters a Client's OMNI interface from the link-layer, if the OAL destination matches one of the Client's ULAs the Client (acting as an OAL destination) verifies that the Identification is in-window for this OAL source, then reassembles and decapsulates as necessary and delivers the original IP packet to the network layer. If the OAL destination matches a NCE for a Host or Client on an ENET interface, the Client instead forwards the carrier packet to the Host/Client. If the OAL destination does not match, the Client drops the original IP packet and MAY return a network-layer ICMP Destination Unreachable message subject to rate limiting (see: Section 3.11).¶
When a Client forwards a carrier packet from an ENET Host to a neighbor connected to the same ENET, it also returns a Redirect message to inform the source that it can reach the neighbor directly as an ENET peer.¶
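The Client forwarding decisions in this section can be summarized as a simple decision table (a sketch; the action strings merely name the behaviors described above):¶

   from typing import Optional

   def client_network_layer_decision(nce_interface: Optional[str]) -> str:
       """Action for an original IP packet entering from the network layer, keyed
       by the interface type of the matching NCE (None means no match)."""
       if nce_interface in ("ANET", "INET"):
           return "select reachable neighbor interface pair(s) and forward"
       if nce_interface == "ENET":
           return "OAL/L2 encapsulate and forward to the downstream Host/Client"
       return "invoke route optimization (Section 3.13)"

   def client_link_layer_decision(oal_dst_match: str) -> str:
       """Action for a carrier packet entering from the link layer, keyed by what
       the OAL destination matches: 'self', 'enet-nce' or 'none'."""
       if oal_dst_match == "self":
           return "verify Identification, reassemble/decapsulate, deliver to network layer"
       if oal_dst_match == "enet-nce":
           return "forward the carrier packet to the ENET Host/Client"
       return "drop; optionally send ICMP Destination Unreachable (rate limited)"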
Note: Clients and their FHS Proxy/Server (and other Client) peers can exchange original IP packets over ANET underlay interfaces without invoking the OAL, since the ANET is secured at the link and physical layers. By forwarding original IP packets without invoking the OAL, however, the ANET peers can engage only in classical path MTU discovery since the packets are subject to loss and/or corruption due to the various per-link MTU limitations that may occur within the ANET. Moreover, the original IP packets do not include either the OAL integrity check or per-packet Identification values that can be used for data origin authentication and link-layer retransmissions. The tradeoff therefore involves an assessment of the per-packet encapsulation overhead saved by bypassing the OAL vs. inheritance of classical network "brittleness". (Note however that ANET peers can send small original IP packets without invoking the OAL, while invoking the OAL for larger packets. This presents the beneficial aspects of both small packet efficiency and large packet robustness, with delay variance and reordering as possible side effects.)¶
Note: The forwarding table entries established in peer Clients of a multihop forwarding region are based on MNP-ULAs and/or TMP-ULAs used to seed the multihop routing protocols. When MNP-ULAs are used, the ULA /64 prefix provides topological relevance for the multihop forwarding region, while the 64-bit Interface Identifier encodes the Client MNP. Therefore, Clients can forward atomic fragments with compressed OAL headers that do not include ULA or MFVI information by examining the MNP-based addresses in the actual IP packet header. In other words, each forwarding table entry contains two pieces of forwarding information - the ULA information in the prefix and the MNP information in the interface identifier.¶
When a Proxy/Server receives an original IP packet from the network layer, it drops the packet if routing indicates that it should be forwarded back to the network layer to avoid looping. Otherwise, the Proxy/Server regards the original IP packet the same as if it had arrived as carrier packets with OAL destination set to its own ADM-ULA. When the Proxy/Server receives carrier packets on underlay interfaces with OAL destination set to its own ADM-ULA, it performs OAL reassembly if necessary to obtain the original IP packet. The Proxy/Server then supports multilink forwarding procedures as specified in Section 3.13.2 and/or acts as an ROS to initiate route optimization as specified in Section 3.13.¶
When the Proxy/Server receives a carrier packet with OAL destination set to an MNP-ULA that does not match the MSP, it accepts the carrier packet only if data origin authentication succeeds and if there is a network layer routing table entry for a GUA route that matches the MNP-ULA. If there is no route, the Proxy/Server drops the carrier packet; otherwise, it reassembles and decapsulates to obtain the original IP packet then acts as a Relay to present it to the network layer where it will be delivered according to standard IP forwarding.¶
When a Proxy/Server receives a carrier packet from one of its Client neighbors with OAL destination set to another node, it forwards the packet via a matching NCE or via the spanning tree if there is no matching entry. When the Proxy/Server receives a carrier packet with OAL destination set to the MNP-ULA of one of its Client neighbors established through RS/RA exchanges, it accepts the carrier packet only if data origin authentication succeeds. If the NCE state is DEPARTED, the Proxy/Server changes the OAL destination address to the ADM-ULA of the new Proxy/Server, then re-encapsulates the carrier packet and forwards it to a Gateway which will eventually deliver it to the new Proxy/Server. If the neighbor cache state for the MNP-ULA is REACHABLE, the Proxy/Server forwards the carrier packets to the Client, which then must reassemble. (Note that the Proxy/Server does not reassemble carrier packets not explicitly addressed to its own ADM-ULA, since some of the carrier packets of the same original IP packet could be forwarded through a different Proxy/Server.) In that case, the Client may receive fragments that are smaller than its link MTU but that can still be reassembled.¶
Proxy/Servers process carrier packets with OAL destinations that do not match their ADM-ULA in the same manner as for traditional IP forwarding within the OAL, i.e., nodes use IP forwarding to forward packets not explicitly addressed to themselves. (Proxy/Servers include a special case that accepts and reassembles carrier packets destined to the MNP-ULA of one of their Clients received over the secured spanning tree.) Proxy/Servers process carrier packets with their ADM-ULA as the destination by first examining the packet for a CRH-32 header or an OCH header. If such a header is present, the Proxy/Server examines the next MFVI in the carrier packet to locate the MFV entry in the MFIB for next hop forwarding (i.e., without examining IP addresses). When the Proxy/Server forwards the carrier packet, it changes the destination address according to the MFVI value for the next hop found either in the CRH-32 header or in the node's own MFIB. Proxy/Servers must verify that the L2 addresses of carrier packets not received from the secured spanning tree are "trusted" before forwarding according to an MFV (otherwise, the carrier packet must be dropped).¶
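A minimal sketch of this MFVI-based dispatch is shown below; the packet fields and MFV contents are assumptions made for illustration, and CRH-32/OCH parsing, re-encapsulation and reassembly special cases are omitted.¶

def mfib_next_hop(mfib, pkt, my_adm_ula, l2_trusted):
    """Return an (action, detail) pair for a received carrier packet (sketch)."""
    if pkt["oal_dst"] != my_adm_ula:
        return ("ip-forward", None)                  # ordinary OAL/IP forwarding
    if pkt.get("next_mfvi") is None:                 # no CRH-32 or OCH present
        return ("local", None)
    if not pkt.get("secured") and not l2_trusted(pkt["l2_src"]):
        return ("drop", "untrusted L2 source")
    mfv = mfib.get(pkt["next_mfvi"])                 # no IP addresses examined
    if mfv is None:
        return ("drop", "unknown MFVI")
    # Rewrite the index for the next hop and forward over the recorded interface.
    return ("send", (mfv["next_hop_l2"], mfv["next_hop_mfvi"]))
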
Note: Proxy/Servers may receive carrier packets addressed to their own ADM-ULA with CRH-32s that include additional forwarding information. Proxy/Servers use the forwarding information to determine the correct NCE and underlay interface for forwarding to the target Client, then remove the CRH-32 and forward the carrier packet. If necessary, the Proxy/Server reassembles first before re-encapsulating (and possibly also re-fragmenting) then forwards to the target Client.¶
Note: Clients and their FHS Proxy/Server peers can exchange original IP packets over ANET underlay interfaces without invoking the OAL, since the ANET is secured at the link and physical layers. By forwarding original IP packets without invoking the OAL, however, the Client and Proxy/Server can engage only in classical path MTU discovery since the packets are subject to loss and/or corruption due to the various per-link MTU limitations that may occur within the ANET. Moreover, the original IP packets do not include either the OAL integrity check or per-packet Identification values that can be used for data origin authentication and link-layer retransmissions. The tradeoff therefore involves an assessment of the per-packet encapsulation overhead saved by bypassing the OAL vs. inheritance of classical network "brittleness". (Note however that ANET peers can send small original IP packets without invoking the OAL, while invoking the OAL for larger packets. This presents the beneficial aspects of both small packet efficiency and large packet robustness.)¶
Note: When a Proxy/Server receives a (non-OAL) original IP packet from an ANET Client, or a carrier packet with OAL destination set to its own ADM-ULA from any Client, the Proxy/Server reassembles if necessary then performs ROS functions on behalf of the Client. The Client may at some later time begin sending carrier packets to the OAL address of the actual target instead of the Proxy/Server, at which point it may begin functioning as an ROS on its own behalf and thereby "override" the Proxy/Server's ROS role.¶
Note: Proxy/Servers drop any original IP packets (received either directly from an ANET Client or following reassembly of carrier packets received from an ANET/INET Client) with a destination that corresponds to the Client's delegated MNP. Similarly, Proxy/Servers drop any carrier packet received with both a source and destination that correspond to the Client's delegated MNP regardless of their OMNI link point of origin. These checks are necessary to prevent Clients from either accidentally or intentionally establishing endless loops that could congest Proxy/Servers and/or ANET/INET links.¶
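The Python sketch below illustrates these checks in simplified form; it treats "corresponds to the Client's delegated MNP" as a prefix membership test and leaves the MNP-ULA-to-MNP mapping out of scope.¶

import ipaddress

def loop_check_drop(client_mnps, src, dst, is_carrier):
    """Return True if the packet must be dropped by the loop-prevention rules."""
    src = ipaddress.ip_address(src)
    dst = ipaddress.ip_address(dst)
    for mnp in (ipaddress.ip_network(p) for p in client_mnps):
        if not is_carrier and dst in mnp:
            return True          # original IP packet aimed back at the MNP
        if is_carrier and src in mnp and dst in mnp:
            return True          # carrier packet both sourced and destined to the MNP
    return False

print(loop_check_drop(["2001:db8:1::/56"], "2001:db8:1::1", "2001:db8:1::2", True))  # True
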
Note: Proxy/Servers forward secure control plane carrier packets via the SRT secured spanning tree and forward other carrier packets via the unsecured spanning tree. When a Proxy/Server receives a carrier packet from the secured spanning tree, it considers the message as authentic without having to verify upper layer authentication signatures. When a Proxy/Server receives a carrier packet from the unsecured spanning tree, it applies data origin authentication itself and/or forwards the unsecured message toward the destination which must apply data origin authentication on its own behalf.¶
Note: If the Proxy/Server has multiple original IP packets to send to the same neighbor, it can concatenate them in a single OAL super-packet [I-D.templin-6man-omni].¶
Gateways forward spanning tree carrier packets while decrementing the OAL header Hop Count but not the original IP header Hop Count/TTL. Gateways convey carrier packets that encapsulate critical IPv6 ND control messages or routing protocol control messages via the SRT secured spanning tree, and may convey other carrier packets via the secured/unsecured spanning tree or via more direct paths according to MFIB information. When the Gateway receives a carrier packet, it removes the L2 headers and searches for an MFIB entry that matches an MFVI or an IP forwarding table entry that matches the OAL destination address.¶
Gateways process carrier packets with OAL destinations that do not match their ADM-ULA or the SRT Subnet Router Anycast address in the same manner as for traditional IP forwarding within the OAL, i.e., nodes use IP forwarding to forward packets not explicitly addressed to themselves. Gateways process carrier packets with their ADM-ULA or the SRT Subnet Router Anycast address as the destination by first examining the packet for a full OAL header with a CRH-32 extension or an OCH header. In that case, the Gateway examines the next MFVI in the carrier packet to locate the MFV entry in the MFIB for next hop forwarding (i.e., without examining IP addresses). When the Gateway forwards the carrier packet, it changes the destination address according to the MFVI value for the next hop found either in the CRH-32 header or in the node's own MFIB. If the Gateway has a NCE for the target Client with an entry for the target underlay interface and current L2 addresses, the Gateway instead forwards directly to the target Client while using the final hop MFVI instead of the next hop (see: Section 3.13.4).¶
Gateways forward carrier packets received from a first segment via the secured spanning tree to the next segment also via the secured spanning tree. Gateways forward carrier packets received from a first segment via the unsecured spanning tree to the next segment also via the unsecured spanning tree. Gateways use a single IPv6 routing table that always determines the same next hop for a given OAL destination, where the secured/unsecured spanning tree is determined through the selection of the underlay interface to be used for transmission (i.e., a secured tunnel or an open INET interface).¶
As for Proxy/Servers, Gateways must verify that the L2 addresses of carrier packets not received from the secured spanning tree are "trusted" before forwarding according to an MFV (otherwise, the carrier packet must be dropped).¶
When an AERO node admits an original IP packet into the OMNI interface, it may receive link-layer or network-layer error indications. The AERO node may also receive OMNI link error indications in OAL-encapsulated uNA messages that include authentication signatures.¶
A link-layer error indication is an ICMP error message generated by a router in an underlay network on the path to the neighbor or by the neighbor itself. The message includes an IP header with the address of the node that generated the error as the source address and with the link-layer address of the AERO node as the destination address.¶
The IP header is followed by an ICMP header that includes an error Type, Code and Checksum. Valid type values include "Destination Unreachable", "Time Exceeded" and "Parameter Problem" [RFC0792][RFC4443]. (OMNI interfaces ignore link-layer IPv4 "Fragmentation Needed" and IPv6 "Packet Too Big" messages for carrier packets that are no larger than the minimum/path MPS as discussed in Section 3.9, however these messages may provide useful hints of probe failures during path MPS probing.)¶
The ICMP header is followed by the leading portion of the carrier packet that generated the error, also known as the "packet-in-error". For ICMPv6, [RFC4443] specifies that the packet-in-error includes: "As much of invoking packet as possible without the ICMPv6 packet exceeding the minimum IPv6 MTU" (i.e., no more than 1280 bytes). For ICMPv4, [RFC0792] specifies that the packet-in-error includes: "Internet Header + 64 bits of Original Data Datagram", however [RFC1812] Section 4.3.2.3 updates this specification by stating: "the ICMP datagram SHOULD contain as much of the original datagram as possible without the length of the ICMP datagram exceeding 576 bytes".¶
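The size limits can be summarized with the short sketch below; the 1280- and 576-byte ceilings come from the citations above, while the fixed header lengths (no extension headers or IPv4 options) are simplifying assumptions.¶

IPV6_MIN_MTU = 1280      # ICMPv6 error must not exceed the minimum IPv6 MTU [RFC4443]
ICMPV4_MAX_LEN = 576     # ICMPv4 error SHOULD NOT exceed 576 bytes [RFC1812]

def packet_in_error(invoking_packet: bytes, ipv6: bool) -> bytes:
    """Return the leading portion of the invoking packet to embed in the error."""
    ip_hdr_len = 40 if ipv6 else 20      # assumes no extension headers/options
    icmp_hdr_len = 8                     # Type, Code, Checksum, unused/pointer
    limit = (IPV6_MIN_MTU if ipv6 else ICMPV4_MAX_LEN) - ip_hdr_len - icmp_hdr_len
    return invoking_packet[:max(limit, 0)]

print(len(packet_in_error(bytes(4000), ipv6=True)))    # 1232
print(len(packet_in_error(bytes(4000), ipv6=False)))   # 548
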
The link-layer error message format is shown in Figure 4:¶
The AERO node rules for processing these link-layer error messages are as follows:¶
When an AERO Gateway receives a carrier packet for which the network-layer destination address is covered by an MSP assigned to a black-hole route, the Gateway drops the packet if there is no more-specific routing information for the destination and returns an OMNI interface Destination Unreachable message subject to rate limiting.¶
When an AERO node receives a carrier packet for which reassembly is currently congested, it returns an OMNI interface Packet Too Big (PTB) message as discussed in [I-D.templin-6man-omni] (note that the PTB messages could indicate either "hard" or "soft" errors).¶
AERO nodes include ICMPv6 error messages intended for an OAL source as sub-options in the OMNI option of secured uNA messages. When the OAL source receives the uNA message, it can extract the ICMPv6 error message enclosed in the OMNI option and either process it locally or translate it into a network-layer error to return to the original source.¶
AERO nodes observe the Router Discovery and Prefix Registration specifications found in Section 15 of [I-D.templin-6man-omni]. AERO nodes further coordinate their autoconfiguration actions with the mobility service as discussed in the following sections.¶
Each AERO Proxy/Server on the OMNI link is configured to facilitate Client prefix delegation/registration requests. Each Proxy/Server is provisioned with a database of MNP-to-Client ID mappings for all Clients enrolled in the AERO service, as well as any information necessary to authenticate each Client. The Client database is maintained by a central administrative authority for the OMNI link and securely distributed to all Proxy/Servers, e.g., via the Lightweight Directory Access Protocol (LDAP) [RFC4511], via static configuration, etc. Clients receive the same service regardless of the Proxy/Servers they select.¶
Clients associate each of their ANET/INET underlay interfaces with a FHS Proxy/Server. Each FHS Proxy/Server locally services one or more of the Client's underlay interfaces, and the Client typically selects one among them to serve as the Hub Proxy/Server (the Client may instead select a "third-party" Hub Proxy/Server that does not directly service any of its underlay interfaces). All of the Client's other FHS Proxy/Servers forward proxyed copies of RS/RA messages between the Hub Proxy/Server and Client without assuming the Hub role functions themselves.¶
Each Client associates with a single Hub Proxy/Server at a time, while all other Proxy/Servers are candidates for providing the Hub role for other Clients. An FHS Proxy/Server assumes the Hub role when it receives an RS message with its own ADM-LLA or link-scoped All-Routers multicast as the destination. An FHS Proxy/Server assumes the proxy role when it receives an RS message with the ADM-LLA of another Proxy/Server as the destination. (An FHS Proxy/Server can also assume the proxy role when it receives an RS message addressed to link-scoped All-Routers multicast if it can determine the ADM-LLA of another Proxy/Server to serve as a Hub.)¶
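The role determination can be expressed as the following sketch; the function and variable names are illustrative only.¶

ALL_ROUTERS = "ff02::2"      # link-scoped All-Routers multicast

def rs_role(my_adm_lla, rs_dst, pick_hub=None):
    """Decide whether an FHS Proxy/Server assumes the Hub or the proxy role."""
    if rs_dst == my_adm_lla:
        return ("hub", my_adm_lla)
    if rs_dst == ALL_ROUTERS:
        hub = pick_hub() if pick_hub else None   # another Proxy/Server's ADM-LLA, if known
        return ("proxy", hub) if hub else ("hub", my_adm_lla)
    return ("proxy", rs_dst)                     # ADM-LLA of another Proxy/Server

print(rs_role("fe80::1000", "ff02::2"))          # ('hub', 'fe80::1000')
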
Hosts and Clients on ENET interfaces associate with an upstream Client on the ENET the same as a Client would associate with an ANET Proxy/Server. In particular, the Host/Client sends an RS message via the ENET which directs the message to the upstream Client. The upstream Client returns an RA message. In this way, the downstream nodes see the ENET as an ANET and see the upstream Client as a Proxy/Server for that ANET.¶
AERO Hosts, Clients and Proxy/Servers use IPv6 ND messages to maintain neighbor cache entries. AERO Proxy/Servers configure their OMNI interfaces as advertising NBMA interfaces, and therefore send unicast RA messages with a short Router Lifetime value (e.g., ReachableTime seconds) in response to a Client's RS message. Thereafter, Clients send additional RS messages to keep Proxy/Server state alive.¶
AERO Clients and Hub Proxy/Servers include prefix delegation and/or registration parameters in RS/RA messages. The IPv6 ND messages are exchanged between the Client and Hub Proxy/Server (via any FHS Proxy/Servers acting as proxys) according to the prefix management schedule required by the service. If the Client knows its MNP in advance, it can employ prefix registration by including its MNP-LLA as the source address of an RS message and with an OMNI option with valid prefix registration information for the MNP. If the Hub Proxy/Server accepts the Client's MNP assertion, it injects the MNP into the routing system and establishes the necessary neighbor cache state. If the Client does not have a pre-assigned MNP, it can instead employ prefix delegation by including a TMP-ULA as the source address of an RS message and with an OMNI option with prefix delegation parameters to request an MNP.¶
The following sections outline Host, Client and Proxy/Server behaviors based on the Router Discovery and Prefix Registration specifications found in Section 15 of [I-D.templin-6man-omni]. These sections observe all of the OMNI specifications, and include additional specifications for the interactions of Client-Proxy/Server RS/RA exchanges with the AERO mobility service.¶
AERO Hosts and Clients discover the addresses of candidate Proxy/Servers by resolving the Potential Router List (PRL) in a similar manner as described in [RFC5214]. Discovery methods include static configuration (e.g., a flat-file map of Proxy/Server addresses and locations), or through an automated means such as Domain Name System (DNS) name resolution [RFC1035]. Alternatively, the Host/Client can discover Proxy/Server addresses through a layer 2 data link login exchange, or through an RA response to a multicast/anycast RS as described below. In the absence of other information, the Host/Client can resolve the DNS Fully-Qualified Domain Name (FQDN) "linkupnetworks.[domainname]" where "linkupnetworks" is a constant text string and "[domainname]" is a DNS suffix for the OMNI link (e.g., "example.com"). The name resolution returns a set of resource records with Proxy/Server address information.¶
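For example, PRL discovery by name resolution might be sketched as follows; the FQDN format is taken from the text, while the use of getaddrinfo() and the fallback behavior are assumptions.¶

import socket

def resolve_prl(domainname: str):
    """Resolve 'linkupnetworks.<domainname>' to candidate Proxy/Server addresses."""
    fqdn = "linkupnetworks." + domainname
    try:
        infos = socket.getaddrinfo(fqdn, None)
    except socket.gaierror:
        return []                        # fall back to static configuration, etc.
    return sorted({info[4][0] for info in infos})

# resolve_prl("example.com") would return the PRL published for that OMNI link.
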
The Host/Client then performs RS/RA exchanges over each of its underlay interfaces to associate with (possibly multiple) FHS Proxy/Servers and a single Hub Proxy/Server as specified in Section 15 of [I-D.templin-6man-omni]. The Host/Client then sends each RS (either directly via Direct interfaces, via a VPN for VPNed interfaces, via an access router for ANET interfaces or via INET encapsulation for INET interfaces) and waits up to RetransTimer milliseconds for an RA message reply (see Section 3.12.3) while retrying up to MAX_RTR_SOLICITATIONS times if necessary. If the Host/Client receives no RAs, or if it receives an RA with Router Lifetime set to 0, it SHOULD abandon attempts through the first candidate Proxy/Server and try another Proxy/Server.¶
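The solicitation loop over a single underlay interface might look like the following sketch; the constants are the RFC 4861 defaults and the send/wait helpers are assumed to be supplied by the implementation.¶

RETRANS_TIMER_MS = 1000          # RetransTimer default [RFC4861]
MAX_RTR_SOLICITATIONS = 3        # default retry limit [RFC4861]

def solicit(send_rs, wait_for_ra):
    """Send RS and wait for an RA; return the RA, or None to try another Proxy/Server."""
    for _ in range(MAX_RTR_SOLICITATIONS):
        send_rs()
        ra = wait_for_ra(timeout=RETRANS_TIMER_MS / 1000.0)
        if ra is None:
            continue                     # no reply within RetransTimer; retry
        if ra.router_lifetime == 0:
            return None                  # Proxy/Server declined; abandon this candidate
        return ra                        # association established
    return None                          # no response; try the next candidate Proxy/Server
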
After the Host/Client registers its underlay interfaces, it may wish to change one or more registrations, e.g., if an interface changes address or becomes unavailable, if traffic selectors change, etc. To do so, the Host/Client prepares an RS message to send over any available underlay interface as above. The RS includes an OMNI option with prefix registration/delegation information and with an Interface Attributes sub-option specific to the selected underlay interface. When the Host/Client receives the Hub Proxy/Server's RA response, it has assurance that both the Hub and FHS Proxy/Servers have been updated with the new information.¶
If the Host/Client wishes to discontinue use of a Hub Proxy/Server it issues an RS message over any underlay interface with an OMNI option with a prefix release indication (i.e., by setting the OMNI extension header Preflen to 0). When the Hub Proxy/Server processes the message, it releases the MNP, sets the NCE state for the Host/Client to DEPARTED and returns an RA reply with Router Lifetime set to 0. After a short delay (e.g., 2 seconds), the Hub Proxy/Server withdraws the MNP from the routing system. (Alternatively, when the Host/Client associates with a new FHS/Hub Proxy/Server it can include an OMNI "Proxy/Server Departure" sub-option in RS messages with the MSIDs of the Old FHS/Hub Proxy/Server.)¶
AERO Proxy/Servers act as both IP routers and IPv6 ND proxys, and support a prefix delegation/registration service for Clients. Proxy/Servers arrange to add their ADM-LLAs to the PRL maintained in a static map of Proxy/Server addresses for the link, the DNS resource records for the FQDN "linkupnetworks.[domainname]", etc. before entering service. The PRL should be arranged such that Clients can discover the addresses of Proxy/Servers that are geographically and/or topologically "close" to their underlay network connections.¶
When a FHS/Hub Proxy/Server receives a prospective Client's RS message, it SHOULD return an immediate RA reply with Router Lifetime set to 0 if it is currently too busy or otherwise unable to service the Client; otherwise, it processes the RS as specified in Section 15 of [I-D.templin-6man-omni]. When the Hub Proxy/Server receives the RS, it determines the correct MNPs to provide to the Client by processing the MNP-LLA prefix parameters and/or the DHCPv6 OMNI sub-option. When the Hub Proxy/Server returns the MNPs, it also creates a forwarding table entry for the MNP-ULA corresponding to each MNP resulting in a BGP update (see: Section 3.2.3). For IPv6, the Hub Proxy/Server creates an IPv6 forwarding table entry for each MNP-ULA. For IPv4, the Hub Proxy/Server creates an IPv6 forwarding table entry with the IPv4-compatibility MNP-ULA prefix corresponding to the IPv4 address. The Hub Proxy/Server then returns an RA to the Client via an FHS Proxy/Server if necessary.¶
After the initial RS/RA exchange, the Hub Proxy/Server maintains a ReachableTime timer for each of the Client's underlay interfaces individually (and for the Client's NCE collectively) set to expire after ReachableTime seconds. If the Client (or an FHS Proxy/Server) issues additional RS messages, the Hub Proxy/Server sends an RA response and resets ReachableTime. If the Hub Proxy/Server receives an IPv6 ND message with a prefix release indication it sets the Client's NCE to the DEPARTED state and withdraws the MNP-ULA route from the routing system after a short delay (e.g., 2 seconds). If ReachableTime expires before a new RS is received on an individual underlay interface, the Hub Proxy/Server marks the interface as DOWN. If ReachableTime expires before any new RS is received on any individual underlay interface, the Hub Proxy/Server sets the NCE state to STALE and sets a 10 second timer. If the Hub Proxy/Server has not received a new RS or uNA message with a prefix release indication before the 10 second timer expires, it deletes the NCE and withdraws the MNP from the routing system.¶
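The lifetime handling can be approximated by the sketch below; the NCE fields, timer callbacks and routing interface are hypothetical, and the 10 second STALE hold time is taken from the text.¶

import time

STALE_HOLD = 10      # seconds to hold a STALE NCE before deletion

def on_rs_received(nce, iface, reachable_time):
    """An RS arrived (the RA is returned elsewhere); refresh the timers."""
    nce.interfaces[iface].state = "REACHABLE"
    nce.interfaces[iface].deadline = time.monotonic() + reachable_time
    nce.state = "REACHABLE"
    nce.deadline = time.monotonic() + reachable_time

def on_interface_timer(nce, iface):
    """Per-interface ReachableTime expiry: mark just that interface DOWN."""
    nce.interfaces[iface].state = "DOWN"

def on_collective_timer(nce):
    """Collective ReachableTime expiry: no RS received on any interface."""
    nce.state = "STALE"
    nce.stale_deadline = time.monotonic() + STALE_HOLD

def on_stale_timer(nce, routing, ncache):
    """STALE hold expired without a new RS or release indication."""
    routing.withdraw(nce.mnp)            # withdraw the MNP from the routing system
    ncache.pop(nce.client_id, None)      # delete the neighbor cache entry
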
The Hub Proxy/Server processes any IPv6 ND messages pertaining to the Client while forwarding to the Client or responding on the Client's behalf as necessary. The Hub Proxy/Server may also issue unsolicited RA messages, e.g., with reconfigure parameters to cause the Client to renegotiate its prefix delegation/registrations, with Router Lifetime set to 0 if it can no longer service this Client, etc. The Hub Proxy/Server may also receive carrier packets via the secured spanning tree that contain initial data packets sent while route optimization is in progress. The Hub Proxy/Server reassembles, then re-encapsulates/re-fragments and forwards the packets to the target Client via an FHS Proxy/Server if necessary. Finally, if the NCE is in the DEPARTED state, the old Hub Proxy/Server forwards any carrier packets it receives from the secured spanning tree and destined to the Client to the new Hub Proxy/Server, then deletes the entry after DepartTime expires.¶
Note: Clients SHOULD arrange to notify former Hub Proxy/Servers of their departures, but Hub Proxy/Servers are responsible for expiring neighbor cache entries and withdrawing routes even if no departure notification is received (e.g., if the Client leaves the network unexpectedly). Hub Proxy/Servers SHOULD therefore set Router Lifetime to ReachableTime seconds in solicited RA messages to minimize persistent stale cache information in the absence of Client departure notifications. A short Router Lifetime also ensures that proactive RS/RA messaging between Clients and FHS Proxy/Servers will keep any NAT state alive (see above).¶
Note: All Proxy/Servers on an OMNI link MUST advertise consistent values in the RA Cur Hop Limit, M and O flags, Reachable Time and Retrans Timer fields the same as for any link, since unpredictable behavior could result if different Proxy/Servers on the same link advertised different values.¶
AERO Clients register with FHS Proxy/Servers for each underlay interface. Each of the Client's FHS Proxy/Servers must inform a single Hub Proxy/Server of the Client's underlay interface(s) that it services. For Clients on Direct and VPNed underlay interfaces, the FHS Proxy/Server for each interface is directly connected, for Clients on ANET underlay interfaces the FHS Proxy/Server is located on the ANET/INET boundary, and for Clients on INET underlay interfaces the FHS Proxy/Server is located somewhere in the connected Internetwork. When FHS Proxy/Server "B" processes a Client registration, it must either assume the Hub role or forward a proxyed registration to another Proxy/Server "A" acting as the Hub. Proxy/Servers satisfy these requirements as follows:¶
After the initial RS/RA exchanges each FHS Proxy/Server forwards any of the Client's carrier packets with OAL destinations for which there is no matching NCE to a Gateway using OAL encapsulation with its own ADM-ULA as the source and with destination determined by the Client. The Proxy/Server instead forwards any carrier packets destined to a neighbor cache target directly to the target according to the OAL/link-layer information - the process of establishing neighbor cache entries is specified in Section 3.13.¶
While the Client is still associated with FHS Proxy/Servers "B", "C", "D", etc., each FHS Proxy/Server can send NS, RS and/or unsolicited NA messages to update the neighbor cache entries of other AERO nodes on behalf of the Client based on changes in Interface Attributes, Traffic Selectors, etc. This allows for higher-frequency Proxy-initiated RS/RA messaging over well-connected INET infrastructure supplemented by lower-frequency Client-initiated RS/RA messaging over constrained ANET data links.¶
If the Hub Proxy/Server "A" ceases to send solicited RAs, FHS Proxy/Servers "B", "C", "D" can send unsolicited RAs over the Client's underlay interface with destination set to (link-local) All-Nodes multicast and with Router Lifetime set to zero to inform Clients that the Hub Proxy/Server has failed. Although FHS Proxy/Servers "B", "C" and "D" can engage in IPv6 ND exchanges on behalf of the Client, the Client can also send IPv6 ND messages on its own behalf, e.g., if it is in a better position to convey state changes. The IPv6 ND messages sent by the Client include the Client's MNP-LLA as the source in order to differentiate them from the IPv6 ND messages sent by a FHS Proxy/Server.¶
If the Client becomes unreachable over all underlay interfaces it serves, the Hub Proxy/Server sets the NCE state to DEPARTED and retains the entry for DepartTime seconds. While the state is DEPARTED, the Hub Proxy/Server forwards any carrier packets destined to the Client to a Gateway via OAL encapsulation. When DepartTime expires, the Hub Proxy/Server deletes the NCE, withdraws the MNP route and discards any further carrier packets destined to the former Client.¶
In some ANETs that employ a Proxy/Server, the Client's MNP can be injected into the ANET routing system. In that case, the Client can send original IP packets without invoking the OAL so that the ANET routing system transports the original IP packets to the Proxy/Server. This can be beneficial, e.g., if the Client connects to the ANET via low-end data links such as some aviation wireless links.¶
If the ANET first-hop access router is on the same underlay link as the Client and recognizes the AERO/OMNI protocol, the Client can avoid OAL encapsulation for both its control and data messages. When the Client connects to the link, it can send an unencapsulated RS message with source address set to its own MNP-LLA (or to a TMP-ULA), and with destination address set to the ADM-LLA of the Client's selected Proxy/Server or to link-scoped All-Routers multicast. The Client includes an OMNI option formatted as specified in [I-D.templin-6man-omni]. The Client then sends the unencapsulated RS message, which will be intercepted by the AERO-aware ANET access router.¶
The ANET access router then performs OAL encapsulation on the RS message and forwards it to a Proxy/Server at the ANET/INET boundary. When the access router and Proxy/Server are one and the same node, the Proxy/Server would share an underlay link with the Client but its message exchanges with outside correspondents would need to pass through a security gateway at the ANET/INET border. The method for deploying access routers and Proxys (i.e. as a single node or multiple nodes) is an ANET-local administrative consideration.¶
Note: When a Proxy/Server alters the IPv6 ND message contents before forwarding (e.g., such as altering the OMNI option contents), the original IPv6 ND message checksum or authentication signature is invalidated, and a new checksum or authentication signature must be calculated and included.¶
Note: When a Proxy/Server receives a secured Client NS message, it performs the same proxying procedures as described for RS messages above. The proxying procedures for NS/NA message exchanges are specified in Section 3.13.¶
In environments where fast recovery from Proxy/Server failure is required, FHS Proxy/Servers SHOULD use proactive Neighbor Unreachability Detection (NUD) to track Hub Proxy/Server reachability in a similar fashion as for Bidirectional Forwarding Detection (BFD) [RFC5880]. Each FHS Proxy/Server can then quickly detect and react to failures so that cached information is re-established through alternate paths. The NS/NA(NUD) control messaging is carried only over well-connected ground domain networks (i.e., and not low-end aeronautical radio links) and can therefore be tuned for rapid response.¶
FHS Proxy/Servers perform continuous NS/NA(NUD) exchanges with the Hub Proxy/Server, e.g., one exchange per second. The FHS Proxy/Server sends the NS(NUD) message via the spanning tree with its own ADM-LLA as the source and the ADM-LLA of the Hub Proxy/Server as the destination, and the Hub Proxy/Server responds with an NA(NUD). When the FHS Proxy/Server is also sending RS messages to a Hub Proxy/Server on behalf of Clients, the resulting RA responses can be considered as equivalent hints of forward progress. This means that the FHS Proxy/Server need not also send a periodic NS(NUD) if it has already sent an RS within the same period. If the Hub Proxy/Server fails (i.e., if the FHS Proxy/Server ceases to receive advertisements), the FHS Proxy/Server can quickly inform Clients by sending unsolicited RA messages.¶
The FHS Proxy/Server sends unsolicited RA messages with source address set to the Hub Proxy/Server's address, destination address set to (link-local) All-Nodes multicast, and Router Lifetime set to 0. The FHS Proxy/Server SHOULD send MAX_FINAL_RTR_ADVERTISEMENTS RA messages separated by small delays [RFC4861]. Any Clients that had been using the failed Hub Proxy/Server will receive the RA messages and select one of their other FHS Proxy/Servers to assume the Hub role (i.e., by sending an RS with destination set to the ADM-LLA of the new Hub).¶
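The probing and failure-notification behavior described in this section might be sketched as follows; the one NS(NUD) per second cadence and the RS suppression rule come from the text, while the miss threshold and helper callbacks are assumptions.¶

import time

NUD_PERIOD = 1.0          # seconds between NS(NUD) exchanges
MAX_MISSES = 3            # assumed threshold before declaring the Hub failed

def hub_nud_loop(rs_sent_this_period, send_ns_nud, got_advertisement, announce_failure):
    """Probe the Hub Proxy/Server and announce failure via unsolicited RAs."""
    misses = 0
    while misses < MAX_MISSES:
        if not rs_sent_this_period():
            send_ns_nud()                 # an RS already counts as a probe
        time.sleep(NUD_PERIOD)
        misses = 0 if got_advertisement() else misses + 1
    announce_failure()                    # unsolicited RAs with Router Lifetime 0
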
When a Client is not pre-provisioned with an MNP-LLA, it will need for the Hub Proxy/Server to select one or more MNPs on its behalf and set up the correct state in the AERO routing service. (A Client with a pre-provisioned MNP may also request the Hub Proxy/Server to select additional MNPs.) The DHCPv6 service [RFC8415] is used to support this requirement.¶
When a Client needs to have the Hub Proxy/Server select MNPs, it sends an RS message with source address set to a TMP-ULA and with an OMNI option that includes a DHCPv6 message sub-option with DHCPv6 Prefix Delegation (DHCPv6-PD) parameters. When the Hub Proxy/Server receives the RS message, it extracts the DHCPv6-PD message from the OMNI option.¶
The Hub Proxy/Server then acts as a "Proxy DHCPv6 Client" in a message exchange with the locally-resident DHCPv6 server, which delegates MNPs and returns a DHCPv6-PD Reply message. (If the Hub Proxy/Server wishes to defer creation of MN state until the DHCPv6-PD Reply is received, it can instead act as a Lightweight DHCPv6 Relay Agent per [RFC6221] by encapsulating the DHCPv6-PD message in a Relay-forward/reply exchange with Relay Message and Interface ID options.)¶
When the Hub Proxy/Server receives the DHCPv6-PD Reply, it adds a route to the routing system and creates an MNP-LLA based on the delegated MNP. The Hub Proxy/Server then sends an RA back to the Client with the (newly-created) MNP-LLA as the destination address and with the DHCPv6-PD Reply message and OMNI extension header Preflen coded in the OMNI option. When the Client receives the RA, it creates a default route, assigns the Subnet Router Anycast address and sets its MNP-LLA based on the delegated MNP.¶
Note: Further details of the DHCPv6-PD based MNP registration (as well as a minimal MNP delegation alternative that avoids including a DHCPv6 message sub-option in the RS) are found in [I-D.templin-6man-omni].¶
Note: When the Hub Proxy/Server forwards an RA to the Client via a different node acting as a FHS Proxy/Server, the Hub sets the RA destination to the same address that appeared in the RS source. The FHS Proxy/Server subsequently sets the RA destination to the MNP-ULA when it forwards the proxyed version of the RA to the Client - see [I-D.templin-6man-omni] for further details.¶
AERO nodes invoke route optimization when they need to forward initial packets to new target destinations over ANET/INET interfaces and for ongoing multilink forwarding for current destinations. Route optimization is based on IPv6 ND Address Resolution messaging between a Route Optimization Source (ROS) and a Relay or the target Client itself (reached via the current Hub Proxy/Server) acting as a Route Optimization Responder (ROR). Route optimization is initiated by the first eligible ROS closest to the source as follows:¶
The AERO routing system directs a route optimization request sent by the ROS to the ROR, which returns a route optimization reply which must include information that is current, consistent and authentic. The ROS is responsible for periodically refreshing the route optimization, and the ROR is responsible for quickly informing the ROS of any changes. Following address resolution, the ROS and ROR perform ongoing multilink route optimizations to maintain optimal forwarding profiles.¶
The route optimization procedures are specified in the following sections.¶
When one or more original IP packets from a source node destined to a target node arrive, the ROS checks for a NCE with an MNP-LLA that matches the target destination. If there is a NCE in the REACHABLE state, the ROS invokes the OAL and forwards the resulting carrier packets according to the cached state then returns from processing. Otherwise, if there is no NCE, the ROS creates one in the INCOMPLETE state.¶
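A simplified sketch of this decision point follows; the neighbor cache structures and the send_ns_ar() helper are hypothetical, and the NS(AR) construction itself is described in the paragraphs below.¶

def ros_handle_packet(ros, ip_packet, target_mnp_lla):
    """Forward via cached state or start address resolution (sketch)."""
    nce = ros.neighbor_cache.get(target_mnp_lla)
    if nce is not None and nce.state == "REACHABLE":
        return ros.oal_forward(ip_packet, nce)       # use cached route optimization state
    if nce is None:
        nce = ros.neighbor_cache.create(target_mnp_lla, state="INCOMPLETE")
    # Carry the packet as trailing data in an OAL super-packet behind the NS(AR).
    nce.pending.append(ip_packet)
    ros.send_ns_ar(target_mnp_lla, nce.pending)
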
The ROS next prepares an NS message for Address Resolution (NS(AR)) to send toward an ROR while including the original IP packet(s) as trailing data following the NS(AR) in an OAL super-packet [I-D.templin-6man-omni]. The resulting NS(AR) message must be sent securely, and includes:¶
The NS(AR) message also includes an OMNI option with an authentication sub-option if necessary and with OMNI extension header Preflen set to the prefix length associated with the NS(AR) source. The ROS also includes Interface Attributes and Traffic Selectors for all of the source Client's underlay interfaces, calculates the authentication signature or checksum, then selects an Identification value and submits the NS(AR) message for OAL encapsulation with OAL source set to its own {ADM,MNP}-ULA and OAL destination set to the MNP-ULA corresponding to the target and with window synchronization parameters. The ROS then inserts a fragment header, performs fragmentation and L2 encapsulation, then sends the resulting carrier packets into the SRT secured spanning tree without decrementing the network-layer TTL/Hop Limit field.¶
When the ROS is a Client, it must instead use the ADM-ULA of one of its FHS Proxy/Servers as the destination. The ROS Client then fragments, performs L2 encapsulation and forwards the carrier packets to the FHS Proxy/Server. The FHS Proxy/Server then reassembles, verifies the NS(AR) authentication signature or checksum, changes the OAL source to its own ADM-ULA, changes the OAL destination to the MNP-ULA corresponding to the target, selects an appropriate Identification, then re-fragments and forwards the resulting carrier packets into the secured spanning tree on behalf of the Client.¶
Note: Both the target Client and its Hub Proxy/Server include current and accurate information for the Client's multilink Interface Attributes profile. The Hub Proxy/Server can be trusted to provide an authoritative response on behalf of the Client should the need arise. While the Client has no such trust basis, any attempt by the Client to mount an attack by providing false Interface Attributes information would only result in black-holing of return traffic, i.e., the "attack" could only result in denial of service to the Client itself. Therefore, the Client's asserted Interface Attributes need not be validated by the Hub Proxy/Server.¶
When the Gateway receives carrier packets containing the NS(AR), it discards the L2 headers and determines the next hop by consulting its standard IPv6 forwarding table for the OAL header destination address. The Gateway then decrements the OAL header Hop-Limit, then re-encapsulates and forwards the carrier packet(s) via the secured spanning tree the same as for any IPv6 router, where they may traverse multiple OMNI link segments. The final-hop Gateway will deliver the carrier packet via the secured spanning tree to the Hub Proxy/Server (or Relay) that services the target.¶
When the Hub Proxy/Server for the target receives the NS(AR) secured carrier packets with the MNP-ULA of the target as the OAL destination, it reassembles then forwards the message to the target Client (while including an authentication signature and encapsulation if necessary) or processes the NS(AR) locally if it is acting as a Relay/IP router or the Client's designated ROR. The Hub Proxy/Server processes the message as follows:¶
The ROR then creates a NCE for the NS(AR) LLA source address if necessary, processes the window synchronization parameters, caches all Interface Attributes and Traffic Selector information, and prepares a (solicited) NA(AR) message to return to the ROS with the source address set to its own MNP-LLA, the destination address set to the NS(AR) LLA source address and the Target Address set to the same value that appeared in the NS(AR) Target Address. The ROR includes an OMNI option with OMNI extension header Preflen set to the prefix length associated with the NA(AR) source address.¶
The ROR then sets the NA(AR) message R flag to 1 (as a router) and S flag to 1 (as a response to a solicitation) and sets the O flag to 1 (as an authoritative responder). The ROR finally submits the NA(AR) for OAL encapsulation with source set to its own ULA and destination set to either the ULA corresponding to the NS(AR) source or the ADM-ULA of its FHS Proxy/Server, selects an appropriate Identification, and includes window synchronization parameters and authentication signature or checksum. The ROR then includes Interface Attributes and Traffic Selector sub-options for all of the target's underlay interfaces with current information for each interface, fragments and encapsulates each fragment in appropriate L2 headers, then forwards the resulting (L2-encapsulated) carrier packets to the FHS Proxy/Server.¶
When the FHS Proxy/Server receives the carrier packets, it reassembles if necessary and verifies the authentication signature or checksum. The FHS Proxy/Server then changes the OAL source address to its own ADM-ULA, changes the destination to the {ADM,MNP}-ULA corresponding to the NA(AR) LLA destination, includes an appropriate Identification, then fragments and forwards the carrier packets into the secured spanning tree.¶
Note: If the Hub Proxy/Server is acting as the Client's ROR but not as a Relay/IP router (i.e., by virtue of receipt of an RS message with the A flag set), it prepares the NA(AR) with the R flag set to 0 but without setting the SYN flag in the OMNI extension header window synchronization parameters. This informs the ROS that it must initiate multilink route optimization to synchronize with the Client either directly or via a FHS Proxy/Server (see: Section 3.13.2).¶
When the Gateway receives NA(AR) carrier packets, it discards the L2 headers and determines the next hop by consulting its standard IPv6 forwarding table for the OAL header destination address. The Gateway then decrements the OAL header Hop-Limit, re-encapsulates the carrier packet and forwards it via the SRT secured spanning tree, where it may traverse multiple OMNI link segments. The final-hop Gateway will deliver the carrier packet via the secured spanning tree to a Proxy/Server for the ROS.¶
When the ROS receives the NA(AR) message, it first searches for a NCE that matches the NA(AR) target address. The ROS then processes the message the same as for standard IPv6 Address Resolution [RFC4861]. In the process, it caches all OMNI option information in the target NCE (including all Interface Attributes), and caches the NA(AR) MNP-LLA source address as the address of the target Client.¶
When the ROS is a Client, the SRT secured spanning tree will first deliver the solicited NA(AR) message to the FHS Proxy/Server, which re-encapsulates and forwards the message to the Client. If the Client is on a well-managed ANET, physical security and protected spectrum ensures security for the NA(AR) without needing an additional authentication signature; if the Client is on the open INET the Proxy/Server must instead include an authentication signature (while adjusting the OMNI option size, if necessary). The Proxy/Server uses its own ADM-ULA as the OAL source and the MNP-ULA of the Client as the OAL destination.¶
Following address resolution, the ROS and ROR can assert multilink paths through underlay interface pairs serviced by the same source/destination LLAs by sending unicast NS/NA messages with Multilink Forwarding Parameters and OMNI extension header window synchronization parameters when necessary. The unicast NS/NA messages establish multilink forwarding state in intermediate nodes in the path between the ROS and ROR.¶
To support multilink route optimization, OMNI interfaces include an additional forwarding table termed the Multilink Forwarding Information Base (MFIB) that supports carrier packet forwarding based on OMNI neighbor underlay interface pairs. The MFIB contains Multilink Forwarding Vectors (MFVs) indexed by 4-octet values known as MFV Indexes (MFVIs).¶
OAL source, intermediate and destination nodes create MFVs/MFVIs when they process an NS message with a Multilink Forwarding Parameters sub-option with Job code '00' (Initialize; Build B) or a solicited NA with Job code '01' (Follow B; Build A) (see: [I-D.templin-6man-omni]). The OAL source of the NS (and OAL destination of the solicited NA) are considered to reside in the "First Hop Segment (FHS)", while the OAL destination of the NS (and OAL source of the solicited NA) are considered to reside in the "Last Hop Segment (LHS)".¶
When an OAL node processes an NS with Job code '00', it creates an MFV, records the NS source and destination ULAs and assigns a "B" MFVI. When the "B" MFVI is referenced, the MFV retains the ULAs in (dst,src) order the opposite of how they appeared in the original NS to support full header reconstruction. (If the NS message included a nested OAL encapsulation, the ULAs of both OAL headers are retained.)¶
When an OAL node processes a solicited NA with Job code '01', it locates the MFV created by the NS and assigns an "A" MFVI. When the "A" MFVI is referenced, the MFV retains the ULAs in (src,dst) order the same as they appeared in the original NS to support full header reconstruction. (If the NS message included a nested OAL encapsulation, the ULAs of both OAL headers are retained.)¶
OAL nodes generate random 32-bit values as candidate A/B MFVIs which must first be tested for local uniqueness. If a candidate MFVI is already in use (or if the value is 0), the OAL node repeats the random generation process until it obtains a unique non-zero value. (Since the number of MFVs in service at each OAL node is likely to be much smaller than 2**32, the process will generate a unique value after a small number of tries; also, an MFVI generated by a first OAL node is never tested for uniqueness on other OAL nodes, since the uniqueness property is node-local only.)¶
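For example, a node-local MFVI allocator consistent with this rule could be sketched as follows.¶

import secrets

def new_mfvi(mfib: dict) -> int:
    """Return a random, non-zero 32-bit MFVI that is unique on this node only."""
    while True:
        candidate = secrets.randbits(32)
        if candidate != 0 and candidate not in mfib:
            return candidate

mfib = {}
mfvi = new_mfvi(mfib)
mfib[mfvi] = object()      # bind the new index to its MFV
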
OAL nodes maintain A/B MFVIs as follows:¶
When an FHS OAL source has an original IP packet to send to an LHS OAL destination discovered via multilink address resolution, it first selects a source and target underlay interface pair. The OAL source uses its cached information for the target underlay interface as LHS information then prepares an NS message with an OMNI Multilink Forwarding Parameters sub-option with Job code '00' and with source set to its own {ADM,MNP}-LLA. If the LHS FMT-Forward and FMT-Mode bits are both clear, the OAL source sets the destination to the ADM-LLA of the LHS Proxy/Server; otherwise, it sets the destination to the MNP-LLA of the target Client. The OAL source then sets window synchronization information in the OMNI extension header and updates/creates a NCE for the selected destination {ADM,MNP}-LLA in the INCOMPLETE state. The OAL source next creates an MFV based on the NS source and destination LLAs, then generates a "B1" MFVI and assigns it to the MFV while also including it as the first B entry in the MFVI List. The OAL source then populates the NS Multilink Forwarding Parameters based on any FHS/LHS information it knows locally. OAL intermediate nodes on the path to the OAL destination may populate additional FHS/LHS information on a hop-by-hop basis.¶
If the OAL source is the FHS Proxy/Server, it then performs OAL encapsulation/fragmentation while setting the source to its own ADM-ULA and setting the destination to the FHS Subnet Router Anycast ULA determined by applying the FHS SRT prefix length to its ADM-ULA. The FHS Proxy/Server next examines the LHS FMT code. If FMT-Forward is clear and FMT-Mode is set, the FHS Proxy/Server checks for a NCE for the ADM-LLA of the LHS Proxy/Server. If there is no NCE, the FHS Proxy/Server creates one in the INCOMPLETE state. If a new NCE was created (or if the existing NCE requires fresh window synchronization), the FHS Proxy/Server then writes window synchronization parameters into the OMNI Multilink Forwarding Parameters Tunnel Window Synchronization fields. The FHS Proxy/Server then selects an appropriate Identification value and L2 headers and forwards the resulting carrier packets into the secured spanning tree which will deliver them to a Gateway interface that assigns the FHS Subnet Router Anycast ULA.¶
If the OAL source is the FHS Client, it instead includes an authentication signature if necessary, performs OAL encapsulation, sets the source to its own MNP-ULA, sets the destination to {ADM,MNP}-ULA of the FHS Proxy/Server and selects an appropriate Identification value for the FHS Proxy/Server. If FHS FMT-Forward is set and LHS FMT-Forward is clear, the FHS Client creates/updates a NCE for the ADM-LLA of the LHS Proxy/Server as above and includes Tunnel Window Synchronization parameters. The FHS Client then fragments and encapsulates in appropriate L2 headers then forwards the carrier packets to the FHS Proxy/Server. When the FHS Proxy/Server receives the carrier packets, it verifies the Identification, reassembles/decapsulates to obtain the NS then verifies the authentication signature or checksum. The FHS Proxy/Server then creates an MFV (i.e., the same as the FHS Client had done) while assigning the current B entry in the MFVI List (i.e., the one included by the FHS Client) as the "B2" MFVI for this MFV. The FHS Proxy/Server next generates a new unique "B1" MFVI, then both assigns it to the MFV and writes it as the next B entry in the OMNI Multilink Forwarding Parameters MFVI List (while also writing any FHS Client and Proxy/Server addressing information). The FHS Proxy/Server then checks FHS/LHS FMT-Forward/Mode to determine whether to create a NCE for the LHS Proxy/Server ADM-LLA and include Tunnel Window Synchronization parameters the same as above. The FHS Proxy/Server then calculates the checksum, re-fragments while setting the OAL source address to its own ADM-ULA and destination address to the FHS Subnet Router Anycast ULA, and includes an Identification appropriate for the secured spanning tree. The FHS Proxy/Server finally includes appropriate L2 headers and forwards the carrier packets into the secured spanning tree the same as above.¶
Gateways in the spanning tree forward carrier packets not explicitly addressed to themselves, while forwarding those that arrived via the secured spanning tree to the next hop also via the secured spanning tree and forwarding all others via the unsecured spanning tree. When an FHS Gateway receives a carrier packet over the secured spanning tree addressed to its ADM-ULA or the FHS Subnet Router Anycast ULA, it instead reassembles/decapsulates to obtain the NS then verifies the checksum. The FHS Gateway next creates an MFV (i.e., the same as the FHS Proxy/Server had done) while assigning the current B entry in the MFVI List as the MFV "B2" index. The FHS Gateway also caches the NS Multilink Forwarding Parameters FHS information in the MFV, and also caches the first B entry in the MFVI List as "FHS-Client" when FHS FMT-Forward/Mode are both set to enable future direct forwarding to this FHS Client. The FHS Gateway then generates a "B1" MFVI for the MFV and also writes it as the next B entry in the NS's MFVI List.¶
The FHS Gateway then examines the SRT prefixes corresponding to both FHS and LHS. If the FHS Gateway has a local interface connection to both the FHS and LHS (whether they are the same or different segments), the FHS/LHS Gateway caches the NS LHS information, writes its ADM-ULA suffix and LHS INADDR into the NS OMNI Multilink Forwarding Parameters LHS fields, then sets its own ADM-ULA as the source and the ADM-ULA of the LHS Proxy/Server as the destination while selecting an appropriate identification. If the FHS and LHS prefixes are different, the FHS Gateway instead sets the LHS Subnet Router Anycast ULA as the destination. The FHS Gateway then recalculates the NS checksum, selects an appropriate Identification and L2 headers as above then forwards the carrier packets into the secured spanning tree.¶
When the FHS and LHS Gateways are different, the LHS Gateway will receive carrier packets over the secured spanning tree from the FHS Gateway. The LHS Gateway reassembles/decapsulates to obtain the NS then verifies the checksum and creates an MFV (i.e., the same as the FHS Gateway had done) while assigning the current B entry in the MFVI List as the MFV "B2" index. The LHS Gateway also caches the ADM-ULA of the FHS Gateway found in the Multilink Forwarding Parameters as the spanning tree address for "B2", caches the NS Multilink Forwarding Parameters LHS information then generates a "B1" MFVI for the MFV while also writing it as the next B entry in the MFVI List. The LHS Gateway also writes its own ADM-ULA suffix and LHS INADDR into the OMNI Multilink Forwarding Parameters. The LHS Gateway then sets its own ADM-ULA as the source and the ADM-ULA of the LHS Proxy/Server as the OAL destination, recalculates the checksum, selects an appropriate Identification, then fragments while including appropriate L2 headers and forwards the carrier packets into the secured spanning tree.¶
When the LHS Proxy/Server receives the carrier packets from the secured spanning tree, it reassembles/decapsulates to obtain the NS, verifies the checksum then verifies that the LHS information supplied by the FHS source is consistent with its own cached information. If the information is consistent, the LHS Proxy/Server then creates an MFV and assigns the current B entry in the MFVI List as the "B2" MFVI the same as for the prior hop. If the NS destination is the MNP-LLA of the target Client, the LHS Proxy/Server also generates a "B1" MFVI and assigns it both to the MFV and as the next B entry in the MFVI List. The LHS Proxy/Server then examines FHS FMT; if FMT-Forward is clear and FMT-Mode is set, the LHS Proxy/Server creates a NCE for the ADM-LLA of the FHS Proxy/Server (if necessary) and sets the state to STALE, then caches any Tunnel Window Synchronization parameters.¶
If the NS destination is its own ADM-LLA, the LHS Proxy/Server next prepares to return a solicited NA with Job code '01'. If the NS source was the MNP-LLA of the FHS Client, the LHS Proxy/Server first creates or updates an NCE for the MNP-LLA with state set to STALE. The LHS Proxy/Server next caches the NS OMNI extension header window synchronization parameters and Multilink Forwarding Parameters information (including the MFVI List) in the NCE corresponding to the LLA source. When the LHS Proxy/Server forwards future carrier packets based on the NCE, it can populate reverse-path forwarding information in a CRH-32 routing header to enable forwarding based on the cached MFVI List B entries instead of ULA addresses.¶
The LHS Proxy/Server then creates an NA with Job code '01' while copying the NS OMNI Multilink Forwarding Parameters FHS/LHS information into the corresponding fields in the NA. The LHS Proxy/Server then generates an "A1" MFVI and both assigns it to the MFV and includes it as the first A entry in the NA's MFVI List (see: [I-D.templin-6man-omni] for details on MFVI List A/B processing). The LHS Proxy/Server then includes end-to-end window synchronization parameters in the OMNI extension header (if necessary) and also tunnel window synchronization parameters in the Multilink Forwarding Parameters (if necessary). The LHS Proxy/Server then encapsulates the NA, calculates the checksum, sets the source to its own ADM-ULA, sets the destination to the ADM-ULA of the LHS Gateway, selects an appropriate Identification value and L2 headers then forwards the carrier packets into the secured spanning tree.¶
If the NS destination was the MNP-LLA of the LHS Client, the LHS Proxy/Server instead includes an authentication signature in the NS if necessary (otherwise recalculates the checksum), then changes the OAL source to its own ADM-ULA and changes the destination to the MNP-ULA of the LHS Client. The LHS Proxy/Server then selects an appropriate Identification value, fragments if necessary, includes appropriate L2 headers and forwards the carrier packets to the LHS Client. When the LHS Client receives the carrier packets, it verifies the Identification and reassembles/decapsulates to obtain the NS then verifies the authentication signature or checksum. The LHS Client then creates a NCE for the NS LLA source address in the STALE state. If LHS FMT-Forward is set, FHS FMT-Forward is clear and the NS source was an MNP-LLA, the Client also creates a NCE for the ADM-LLA of the FHS Proxy/Server in the STALE state and caches any Tunnel Window Synchronization parameters. The Client then caches the NS OMNI extension header window synchronization parameters and Multilink Forwarding Parameters in the NCE corresponding to the NS LLA source, then creates an MFV and assigns both the current MFVI List B entry as "B2" and a locally generated "A1" MFVI the same as for previous hops (the LHS Client also includes the "A1" value in the solicited NA - see above and below). The LHS Client also caches the previous MFVI List B entry as "LHS-Gateway" since it can include this value when it sends future carrier packets directly to the Gateway (following appropriate neighbor coordination).¶
The LHS Client then prepares an NA using exactly the same procedures as for the LHS Proxy/Server above, except that it uses its MNP-LLA as the source and the {ADM,MNP}-LLA of the FHS correspondent as the destination. The LHS Client also includes an authentication signature if necessary (otherwise calculates the checksum), then encapsulates the NA with OAL source set to its own MNP-ULA and destination set to the ADM-ULA of the LHS Proxy/Server, includes an appropriate Identification and L2 headers and forwards the carrier packets to the LHS Proxy/Server. When the LHS Proxy/Server receives the carrier packets, it verifies the Identifications, reassembles/decapsulates to obtain the NA, verifies the authentication signature or checksum, then uses the current MFVI List B entry to locate the MFV. The LHS Proxy/Server then writes the current MFVI List A entry as the "A2" value for the MFV, generates an "A1" MFVI and both assigns it to the MFV and writes it as the next MFVI List A entry. The LHS Proxy/Server then examines the FHS/LHS FMT codes to determine if it needs to include Tunnel Window Synchronization parameters. The LHS Proxy/Server then recalculates the checksum, re-fragments the NA while setting the OAL source to its own ADM-ULA and destination to the ADM-ULA of the LHS Gateway, includes an appropriate Identification and L2 headers and forwards the carrier packets into the secured spanning tree.¶
When the LHS Gateway receives the carrier packets, it reassembles/decapsulates to obtain the NA while verifying the checksum then uses the current MFVI List B entry to locate the MFV. The LHS Gateway then writes the current MFVI List A entry as the MFV "A2" index and generates a new "A1" value which it both assigns to the MFV and writes as the next MFVI List A entry. (The LHS Gateway also caches the first A entry in the MFVI List as "LHS-Client" when LHS FMT-Forward/Mode are both set to enable future direct forwarding to this LHS Client.) If the LHS Gateway is connected directly to both the FHS and LHS segments (whether the segments are the same or different), the FHS/LHS Gateway will have already cached the FHS/LHS information based on the original NS. The FHS/LHS Gateway recalculates the checksum then re-fragments the NA while setting the OAL source to its own ADM-ULA and destination to the ADM-ULA of the FHS Proxy/Server. If the FHS and LHS prefixes are different, the FHS Gateway instead re-fragments while setting the destination to the ADM-ULA of the FHS Gateway. The LHS Gateway selects an appropriate Identification and L2 headers then forwards the carrier packets into the secured spanning tree.¶
When the FHS and LHS Gateways are different, the FHS Gateway will receive the carrier packets from the LHS Gateway over the secured spanning tree. The FHS Gateway reassembles/decapsulates to obtain the NA while verifying the checksum, then locates the MFV based on the current MFVI List B entry. The FHS Gateway then assigns the current MFVI List A entry as the MFV "A2" index and caches the ADM-ULA of the LHS Gateway as the spanning tree address for "A2". The FHS Gateway then generates an "A1" MFVI and both assigns it to the MFV and writes it as the next MFVI List A entry while also writing its ADM-ULA and INADDR in the NA FHS Gateway fields. The FHS Gateway then recalculates the checksum, re-encapsulates/re-fragments with its own ADM-ULA as the source, with the ADM-ULA of the FHS Proxy/Server as the destination, then selects an appropriate Identification value and L2 headers and forwards the carrier packets into the secured spanning tree.¶
When the FHS Proxy/Server receives the carrier packets from the secured spanning tree, it reassembles/decapsulates to obtain the NA while verifying the checksum then locates the MFV based on the current MFVI List B entry. The FHS Proxy/Server then assigns the current MFVI List A entry as the "A2" MFVI the same as for the prior hop. If the NA destination is its own ADM-LLA, the FHS Proxy/Server then caches the NA Multilink Forwarding Parameters with the MFV and examines LHS FMT. If FMT-Forward is clear, the FHS Proxy/Server locates the NCE for the ADM-LLA of the LHS Proxy/Server and sets the state to REACHABLE then caches any Tunnel Window Synchronization parameters. If the NA source is the MNP-LLA of the LHS Client, the FHS Proxy/Server then locates the LHS Client NCE and sets the state to REACHABLE then caches the OMNI extension header window synchronization parameters and prepares to return an NA acknowledgement, if necessary.¶
If the NA destination is the MNP-LLA of the FHS Client, the FHS Proxy/Server also searches for and updates the NCE for the ADM-LLA of the LHS Proxy/Server if necessary the same as above. The FHS Proxy/Server then generates an "A1" MFVI and both assigns it to the MFV and writes it as the next MFVI List A entry, then includes an authentication signature or checksum in the NA message. The FHS Proxy/Server then sets the OAL source to its own ADM-ULA and sets the destination to the MNP-ULA of the FHS Client, then selects an appropriate Identification value and L2 headers and forwards the carrier packets to the FHS Client.¶
When the FHS Client receives the carrier packets, it verifies the Identification, reassembles/decapsulates to obtain the NA, verifies the authentication signature or checksum, then locates the MFV based on the current MFVI List B entry. The FHS Client then assigns the current MFVI List A entry as the "A2" MFVI the same as for the prior hop. The FHS Client then caches the NA Multilink Forwarding Parameters (including the MFVI List) with the MFV and examines LHS FMT. If FMT-Forward is clear, the FHS Client locates the NCE for the ADM-LLA of the LHS Proxy/Server and sets the state to REACHABLE then caches any Tunnel Window Synchronization parameters. If the NA source is the MNP-LLA of the LHS Client, the FHS Client then locates the LHS Client NCE and sets the state to REACHABLE then caches the OMNI extension header window synchronization parameters and prepares to return an NA acknowledgement, if necessary. The FHS Client also caches the previous MFVI List A entry as "FHS-Gateway" since it can include this value when it sends future carrier packets directly to the Gateway (following appropriate neighbor coordination).¶
If either the FHS Client or FHS Proxy/Server needs to return an acknowledgement to complete window synchronization, it prepares a uNA message with an OMNI Multilink Forwarding Parameters sub-option with Job code set to '10' (Follow A; Record B) (note that this step is unnecessary when Rapid Commit route optimization is used per Section 3.13.3). The FHS node sets the source to its own {ADM,MNP}-LLA, sets the destination to the {ADM,MNP}-LLA of the LHS node then includes Tunnel Window Synchronization parameters if necessary. The FHS node next sets the MFVI List to the cached list of A entries received in the Job code '01' NA, but need not set any other FHS/LHS information. The FHS node then encapsulates the uNA message in an OAL header with its own {ADM,MNP}-ULA as the source. If the FHS node is the Client, it next sets the ADM-ULA of the FHS Proxy/Server as the OAL destination, includes an authentication signature or checksum, selects an appropriate Identification value and L2 headers and forwards the carrier packets to the FHS Proxy/Server. The FHS Proxy/Server then verifies the Identification, reassembles/decapsulates, verifies the authentication signature or checksum, then uses the current MFVI List A entry to locate the MFV. The FHS Proxy/Server then writes its "B1" MFVI as the next MFVI List B entry and determines whether it needs to include Tunnel Window Synchronization parameters the same as it had done when it forwarded the original NS.¶
The FHS Proxy/Server recalculates the uNA checksum then re-fragments while setting its own ADM-ULA as the source and the ADM-ULA of the FHS Gateway as the destination, then selects an appropriate Identification and L2 headers and forwards the carrier packets into the secured spanning tree. When the FHS Gateway receives the carrier packets, it reassembles/decapsulates to obtain the uNA while verifying the checksum then uses the current MFVI List A entry to locate the MFV. The FHS Gateway then writes its "B1" MFVI as the next MFVI List B entry, then re-fragments while setting the OAL source and destination. If the FHS Gateway is also the LHS Gateway, it sets the ADM-ULA of the LHS Proxy/Server as the destination; otherwise it sets the ADM-ULA of the LHS Gateway. The FHS Gateway recalculates the checksum then selects an appropriate Identification and L2 headers, re-fragments/forwards the carrier packets into the secured spanning tree. If an LHS Gateway receives the carrier packets, it processes them exactly the same as the FHS Gateway had done while setting the carrier packet destination to the ADM-ULA of the LHS Proxy/Server.¶
When the LHS Proxy/Server receives the carrier packets, it reassembles/decapsulates to obtain the uNA message while verifying the checksum. The LHS Proxy/Server then locates the MFV based on the current MFVI List A entry then determines whether it is a tunnel ingress the same as for the original NS. If it is a tunnel ingress, the LHS Proxy/Server updates the NCE for the tunnel far-end based on the Tunnel Window Synchronization parameters. If the uNA destination is its own ADM-LLA, the LHS Proxy/Server next updates the NCE for the source LLA based on the OMNI extension header window synchronization parameters and MAY compare the MFVI List to the version it had cached in the MFV based on the original NS.¶
If the uNA destination is the MNP-LLA of the LHS Client, the LHS Proxy/Server instead writes its "B1" MFVI as the next MFVI List B entry, includes an authentication signature or checksum, writes its own ADM-ULA as the source and the MNP-ULA of the Client as the destination then selects an appropriate Identification and L2 headers and forwards the resulting carrier packets to the LHS Client. When the LHS Client receives the carrier packets, it verifies the Identification, reassembles/decapsulates to obtain the uNA, verifies the authentication signature or checksum then processes the message exactly the same as for the LHS Proxy/Server case above.¶
Following the NS/NA exchange with Multilink Forwarding Parameters, OAL end systems and tunnel endpoints can begin exchanging ordinary carrier packets with Identification values within their respective send/receive windows without requiring security signatures and/or secured spanning tree traversal. Either peer can refresh window synchronization parameters and/or send other carrier packets requiring security at any time using the same secured procedures described above. OAL end systems and intermediate nodes can also use their own A1/B1 MFVIs when they receive carrier packets to unambiguously locate the correct MFV and determine directionality and can use any discovered A2/B2 MFVIs to forward carrier packets to other OAL nodes that configure the corresponding A1/B1 MFVIs. When an OAL node uses an MFVI included in a carrier packet to locate an MFV, it need not also examine the carrier packet addresses.¶
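The following non-normative sketch (in Python, with purely illustrative class and field names) shows one way an OAL node might key its multilink forwarding state on its locally generated MFVIs so that the MFVI carried in an arriving packet alone identifies both the MFV and the direction of travel:¶
   from dataclasses import dataclass
   from typing import Dict, Optional, Tuple

   @dataclass
   class MFV:
       """Multilink Forwarding Vector state for one coordinated path
       (illustrative subset of the state discussed in this section)."""
       a1: int                   # locally generated MFVI, A (FHS->LHS) direction
       b1: int                   # locally generated MFVI, B (LHS->FHS) direction
       a2: Optional[int] = None  # next-hop MFVI learned from the NA
       b2: Optional[int] = None  # next-hop MFVI learned from the NS

   class MFIB:
       """Multilink Forwarding Information Base keyed on local MFVIs."""
       def __init__(self) -> None:
           self._by_mfvi: Dict[int, Tuple[MFV, str]] = {}

       def install(self, mfv: MFV) -> None:
           # Both locally generated indices resolve to the same MFV,
           # tagged with the direction of travel they imply.
           self._by_mfvi[mfv.a1] = (mfv, "A")
           self._by_mfvi[mfv.b1] = (mfv, "B")

       def lookup(self, mfvi: int) -> Optional[Tuple[MFV, str]]:
           # The MFVI alone locates the MFV and gives directionality;
           # the carrier packet addresses need not be examined.
           return self._by_mfvi.get(mfvi)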
OAL sources can also begin including CRH-32s in carrier packets with a list of A/B MFVIs that OAL intermediate nodes can use for shortest-path carrier packet forwarding based on MFVIs instead of spanning tree addresses. OAL sources and intermediate nodes can also begin forwarding carrier packets with OAL compressed headers termed "OCH" (see: [I-D.templin-6man-omni]) that include only a single A/B MFVI meaningful to the next hop, since all nodes in the path up to (and sometimes including) the OAL destination have already established MFV forwarding information. Note that when an FHS OAL source receives a solicited NA with Job code '01', the message will contain an MFVI List with A entries populated in the reverse order needed for populating a CRH-32 routing header. The FHS OAL source must therefore write the MFVI List A entries last-to-first when it populates a CRH-32, or must select the correct A entry to include in an OCH header based on the intended OAL intermediate node or destination.¶
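As a non-normative illustration of the ordering note above (Python, with a hypothetical helper name), the FHS OAL source simply inverts the received A entries when building the CRH-32 segment list:¶
   def crh32_segment_list(mfvi_list_a):
       """Return the A-direction MFVIs in the order a CRH-32 needs.

       The Job code '01' NA delivers the A entries in the reverse of
       CRH-32 order, so they are written last-to-first (this helper is
       illustrative only and is not a wire-format encoder)."""
       return list(reversed(mfvi_list_a))

   # Example: A entries as received in the solicited NA
   received = [0x51, 0x42, 0x33, 0x24, 0x15]
   assert crh32_segment_list(received) == [0x15, 0x24, 0x33, 0x42, 0x51]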
When a Gateway receives unsecured carrier packets destined to a local segment Client that has asserted direct reachability, the Gateway performs direct carrier packet forwarding while bypassing the local Proxy/Server based on the Client's advertised MFVIs and discovered NATed INADDR information (see: Section 3.13.4). If the Client cannot be reached directly (or if NAT traversal has not yet converged), the Gateway instead forwards carrier packets directly to the local Proxy/Server.¶
When a Proxy/Server receives carrier packets destined to a local Client or forwards carrier packets received from a local Client, it first locates the correct MFV. If the carrier packets include a secured IPv6 ND message, the Proxy/Server uses the Client's NCE established through RS/RA exchanges to re-encapsulate/re-fragment while forwarding outbound secured carrier packets via the secured spanning tree and forwarding inbound secured carrier packets while including an authentication signature or checksum. For ordinary carrier packets, the Proxy/Server uses the same MFV if directed by MFVI and/or OAL addressing. Otherwise it locates an MFV established through an NS/NA exchange between the Client and the remote peer, and forwards the carrier packets without first reassembling/decapsulating.¶
When a Proxy/Server or Client configured as a tunnel ingress receives a carrier packet with a full OAL header with an MNP-ULA source and CRH-32 routing header, or an OCH header with an MFVI that matches an MFV, the ingress encapsulates the carrier packet in a new full OAL header or an OCH header, with the inner header containing the next hop MFVI and an Identification value appropriate for the end-to-end window and the outer header containing an Identification value appropriate for the tunnel endpoints. When a Proxy/Server or Client configured as a tunnel egress receives an encapsulated carrier packet, it verifies the Identification in the outer header, then discards the outer header and forwards the inner carrier packet to the final destination.¶
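A non-normative sketch of the ingress/egress behavior described above (Python; the header representation and the window-check callable are assumptions made only for illustration):¶
   from typing import Callable, Optional

   class OALTunnelIngress:
       """Adds an outer header whose Identification is drawn from the
       tunnel-endpoint window; the inner carrier packet keeps its own
       end-to-end Identification (illustrative representation)."""
       def __init__(self, tunnel_id_base: int) -> None:
           self._next_id = tunnel_id_base

       def encapsulate(self, inner_carrier: bytes, next_hop_mfvi: int) -> dict:
           outer_id = self._next_id
           self._next_id = (self._next_id + 1) % 2**32
           return {"mfvi": next_hop_mfvi, "tunnel_id": outer_id,
                   "inner": inner_carrier}

   def egress_decapsulate(packet: dict,
                          in_window: Callable[[int], bool]) -> Optional[bytes]:
       # The egress verifies the outer Identification, discards the outer
       # header and forwards the inner carrier packet toward its destination.
       if not in_window(packet["tunnel_id"]):
           return None
       return packet["inner"]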
When a Proxy/Server with FMT-Forward/Mode set to 0/1 for a source Client receives carrier packets from the source Client, it first reassembles to obtain the original OAL packet then re-fragments if necessary to cause the Client's packets to match the MPS on the path from the Proxy/Server as a tunnel ingress to the tunnel egress. The Proxy/Server then performs OAL-in-OAL encapsulation and forwards the resulting carrier packets to the tunnel egress. When a Proxy/Server with FMT-Forward/Mode set to 0/1 for a target Client receives carrier packets from a tunnel ingress, it first decapsulates to obtain the original fragments then reassembles to obtain the original OAL packet. The Proxy/Server then re-fragments if necessary to cause the fragments to match the target Client's underlay interface (Path) MTU and forwards the resulting carrier packets to the target Client.¶
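The re-fragmentation step can be pictured with the following non-normative helper (Python; real OAL fragments also carry fragment headers and offsets, which are omitted here):¶
   def refragment(oal_packet: bytes, mps: int) -> list:
       """Split a reassembled OAL packet into pieces no larger than the
       maximum payload size (MPS) of the onward path (illustrative)."""
       if mps <= 0:
           raise ValueError("MPS must be positive")
       return [oal_packet[i:i + mps] for i in range(0, len(oal_packet), mps)]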
When a source Client forwards carrier packets it can employ header compression according to the MFVIs established through an NS/NA exchange with a remote or local peer. When the source Client forwards to a remote peer, it can forward carrier packets to a local SRT Gateway (following the establishment of INADDR information) while bypassing the Proxy/Server (see: Section 3.13.4). When a target Client receives carrier packets that match a local MFV, the Client first verifies the Identification then decompresses the headers if necessary, reassembles if necessary to obtain the OAL packet then decapsulates and delivers the IP packet to upper layers.¶
When synchronized peer Clients in the same SRT segment with FMT-Forward and FMT-Mode set discover each other's NATed INADDR addresses, they can exchange carrier packets directly with header compression using MFVIs discovered as above (see: Section 3.13.5). The FHS Client will have cached the A MFVI for the LHS Client, which will have cached the B MFVI for the FHS Client.¶
After window synchronization state has been established, the ROS and ROR can begin forwarding carrier packets while performing additional NS/NA exchanges as above to update window state, register new interface pairs for optimized multilink forwarding and/or confirm reachability. The ROS sends carrier packets to the FHS Gateway discovered through the NS/NA exchange. The FHS Gateway then forwards the carrier packets over the unsecured spanning tree to the LHS Gateway, which forwards them via LHS encapsulation to the LHS Proxy/Server or directly to the target Client itself. The target Client in turn sends packets to the ROS in the reverse direction while forwarding through the Gateways to minimize Proxy/Server load whenever possible.¶
While the ROS continues to actively forward packets to the target Client, it is responsible for updating window synchronization state and per-interface reachability before expiration. Window synchronization state is shared by all underlay interfaces in the ROS' NCE that use the same destination LLA so that a single NS/NA exchange applies for all interfaces regardless of the specific interface used to conduct the exchange. However, the window synchronization exchange only confirms target Client reachability over the specific underlay interface pair. Reachability for other underlay interfaces that share the same window synchronization state must be determined individually using additional NS/NA messages.¶
When the ROR receives an NS(AR) with a set of Interface Attributes for the source Client, it can perform "rapid commit" by immediately invoking multilink route optimization as above instead of returning an NA(AR). In order to perform rapid commit, the ROR prepares a unicast NS message with an OMNI option with window synchronization information responsive to the NS(AR), with a Multilink Forwarding Parameters sub-option selected for a specific underlay interface pair and with Interface Attributes for all of the ROR's other underlay interfaces. The ROR can also include ordinary IP packets as OAL super-packet extensions to the NS message if it has immediate data to send to the ROS. The ROR then returns the NS to the ROS the same as for the NA(AR) case.¶
When the NS message traverses the return path to the ROR, all intermediate nodes in the path establish state exactly the same as for an ordinary NS/NA multilink route optimization exchange. When the NS message arrives at the ROS, the window synchronization parameters confirm that the NS is taking the place of the NA(AR), thereby eliminating an extraneous message transmission and associated delay. The ROS then completes the route optimization by returning a responsive NA.¶
Note: The ROS must accept unicast NS messages with an ACK matching the SYN included in the NS(AR) as an equivalent message replacement for the NA(AR). Address resolution and multilink forwarding coordination can therefore be combined in a single three-way handshake with minimal messaging and delay (i.e., as opposed to a four-message exchange).¶
Following multilink route optimization for specific underlay interface pairs, ROS/ROR Clients located on open INETs can invoke Client/Gateway route optimization to improve performance and reduce load and congestion on their respective FHS/LHS Proxy/Servers. To initiate Client/Gateway route optimization, the Client prepares an NS message with its own MNP-LLA address as the source and the ADM-LLA of its Gateway as the destination while creating a NCE for the Gateway if necessary. The NS message must be no larger than the minimum MPS and encapsulated as an atomic fragment.¶
The Client then includes an Interface Attributes sub-option for its underlay interface as well as an authentication signature but does not include window synchronization parameters. The Client then performs OAL encapsulation with its own MNP-ULA as the source and the ADM-ULA of the Gateway as the destination while including a randomly-chosen Identification value, then performs L2 encapsulation on the atomic fragment and sends the resulting carrier packet directly to the Gateway.¶
When the Gateway receives the carrier packet, it verifies the authentication signature then creates a NCE for the Client. The Gateway then caches the L2 encapsulation addresses (which may have been altered by one or more NATs on the path) as well as the Interface Attributes for this Client omIndex, and marks this Client underlay interface as "trusted". The Gateway then prepares an NA reply with its own ADM-LLA as the source and the MNP-LLA of the Client as the destination where the NA again must be no larger than the minimum MPS.¶
The Gateway then echoes the Client's Interface Attributes, includes an Origin Indication with the Client's observed L2 addresses and includes an authentication signature. The Gateway then performs OAL encapsulation with its own ADM-ULA as the source and the MNP-ULA of the Client as the destination while using the same Identification value that appeared in the NS, then performs L2 encapsulation on the atomic fragment and sends the resulting carrier packet directly to the Client.¶
When the Client receives the NA reply, it caches the carrier packet L2 source address information as the Gateway target address via this underlay interface while marking the interface as "trusted". The Client also caches the Origin Indication L2 address information as its own (external) source address for this underlay interface.¶
After the Client and Gateway have established NCEs as well as "trusted" status for a particular underlay interface pair, each node can begin forwarding ordinary carrier packets intended for this multilink route optimization directly to the other, omitting the Proxy/Server from the forwarding path for as long as the status remains "trusted". The NS/NA messaging will have established the correct state in any NATs in the path so that NAT traversal is naturally supported. The Client and Gateway must maintain a timer that watches for activity on the path; if no carrier packets and/or NS/NA messages are sent or received over the path before NAT state is likely to have expired, the underlay interface pair status becomes "untrusted".¶
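A non-normative sketch of this activity timer (Python; the NAT hold time value is an assumption, not a value taken from this specification):¶
   import time
   from typing import Optional

   class UnderlayPairStatus:
       """Tracks the "trusted"/"untrusted" status of one Client/Gateway
       underlay interface pair (illustrative names and defaults)."""
       def __init__(self, nat_hold_time: float = 30.0) -> None:
           self.nat_hold_time = nat_hold_time
           self._last_activity: Optional[float] = None

       def note_activity(self) -> None:
           # Called whenever carrier packets or NS/NA messages are sent
           # or received over this underlay interface pair.
           self._last_activity = time.monotonic()

       def trusted(self) -> bool:
           # The pair reverts to "untrusted" once NAT state is likely to
           # have expired; traffic then returns to the Proxy/Server path.
           if self._last_activity is None:
               return False
           return time.monotonic() - self._last_activity < self.nat_hold_time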
Thereafter, when the Client forwards a carrier packet with an MFVI toward the Gateway as the next hop, the Client uses the MFVI for the Gateway (discovered during multilink route optimization) instead of the MFVI for its Proxy/Server; the Gateway will accept the packet from the Client if and only if the underlay interface status is trusted and if the MFVI is correct for the next hop toward the final destination. (The same is true in the reverse direction when the Gateway sends carrier packets directly to the Client.)¶
Note that the Client and Gateway each maintain a single NCE, but that the NCE may aggregate multiple underlay interface pairs. Each underlay interface pair may use differing source and target L2 addresses according to NAT mappings, and the "trusted/untrusted" status of each pair must be tested independently. When no "trusted" pairs remain, the NCE is deleted.¶
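The aggregation of underlay interface pairs within a single NCE might be represented as in the following non-normative sketch (Python; class and field names are illustrative):¶
   from typing import Dict, Tuple

   class AggregatedNCE:
       """One NCE holding several underlay interface pairs, each with its
       own (possibly NAT-mapped) L2 address and trust status."""
       def __init__(self) -> None:
           # key: (local omIndex, peer omIndex)
           # value: (peer L2 address, trusted?)
           self.pairs: Dict[Tuple[int, int], Tuple[str, bool]] = {}

       def set_pair(self, local: int, peer: int,
                    l2_addr: str, trusted: bool) -> None:
           self.pairs[(local, peer)] = (l2_addr, trusted)

       def should_delete(self) -> bool:
           # The NCE is deleted only when no "trusted" pairs remain.
           return not any(t for _, t in self.pairs.values())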
Note that the above method requires Gateways to participate in NS/NA message authentication signature application and verification. In an alternate approach, the Client could instead exchange NS/NA messages with authentication signatures via its Proxy/Server but addressed to the ADM-LLA of the Gateway, and the Proxy/Server and Gateway could relay the messages over the secured spanning tree. However, this would still require the Client to send additional messages toward the L2 address of the Gateway to populate NAT state; hence the savings in complexity for Gateways would result in increased message overhead for Clients.¶
When the ROS/ROR Clients are both located on the same SRT segment, Client-to-Client route optimization is possible following the establishment of any necessary state in NATs in the path. Both Clients will have already established state via their respective shared segment Proxy/Servers (and possibly also the shared segment Gateway) and can begin forwarding packets directly via NAT traversal while avoiding any Proxy/Server and/or Gateway hops.¶
When the ROR/ROS Clients on the same SRT segment perform the initial NS/NA exchange to establish Multilink Forwarding state, they also include an Origin Indication (i.e., in addition to Multilink Forwarding Parameters) with the mapped addresses discovered during the RS/RA exchanges with their respective Proxy/Servers. After the MFV paths have been established, both Clients can begin sending packets via strict MFV paths while establishing a direct path for Client-to-Client route optimization.¶
To establish the direct path, either Client (acting as the source) transmits a bubble to the mapped L2 address for the target Client which primes its local chain of NATs for reception of future packets from that L2 address (see: [RFC4380] and [I-D.templin-6man-omni]). The source Client then prepares an NS message with its own MNP-LLA as the source, with the MNP-LLA of the target as the destination and with an OMNI option with an Interface Attributes sub-option. The source Client then encapsulates the NS in an OAL header with its own MNP-ULA as the source, with the MNP-ULA of the target Client as the destination and with an in-window Identification for the target. The source Client then fragments and encapsulates in L2 headers addressed to its FHS Proxy/Server then forwards the resulting carrier packets to the Proxy/Server.¶
When the FHS Proxy/Server receives the carrier packets, it re-encapsulates and forwards them as unsecured carrier packets according to MFV state where they will eventually arrive at the target Client which can verify that the identifications are within the acceptable window and reassemble if necessary. Following reassembly, the target Client prepares an NA message with its own MNP-LLA as the source, with the MNP-LLA of the source Client as the destination and with an OMNI option with an Interface Attributes sub-option. The target Client then encapsulates the NA in an OAL header with its own MNP-ULA as the source, with the MNP-ULA of the source Client as the destination and with an in-window Identification for the source Client. The target Client then fragments and encapsulates in L2 headers addressed to the source Client's Origin addresses then forwards the resulting carrier packets directly to the source Client.¶
Following the initial NS/NA exchange, both Clients mark their respective (source, target) underlay interface pairs as "trusted" for no more than ReachableTime seconds. While the Clients continue to exchange carrier packets via the direct path avoiding all Proxy/Servers and Gateways, they should perform additional NS/NA exchanges via their local Proxy/Servers to refresh NCE state as well as send additional bubbles to the peer's Origin address information if necessary to refresh NAT state.¶
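The bubble transmissions mentioned above can be pictured with the following non-normative sketch (Python; the UDP encapsulation and empty payload shown here are simplifying assumptions, see [RFC4380] and [I-D.templin-6man-omni] for the actual formats):¶
   import socket

   def send_bubble(peer_mapped_addr: str, peer_mapped_port: int) -> None:
       """Send a small, content-free datagram toward the target Client's
       mapped (Origin) address so that the local chain of NATs will admit
       future packets arriving from that address (illustrative only)."""
       with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
           s.sendto(b"", (peer_mapped_addr, peer_mapped_port))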
Note that these procedures are suitable for a widely-deployed but basic class of NATs. Procedures for advanced NAT classes are outlined in [RFC6081], which provides mechanisms that can be employed equally for AERO using the corresponding sub-options specified by OMNI.¶
Note also that each communicating pair of Clients may need to maintain NAT state for peer to peer communications via multiple underlay interface pairs. It is therefore important that Origin Indications are maintained with the correct peer interface and that the NCE may cache information for multiple peer interfaces.¶
Note that the source and target Client exchange Origin information during the secured NS/NA multilink route optimization exchange. This allows for subsequent NS/NA exchanges to proceed using only the Identification value as a data origin confirmation. However, Client-to-Client peerings that require stronger security may also include authentication signatures for mutual authentication.¶
Clients may be recursively nested within the ENETs of other Clients. When a Client is the downstream-attached ENET neighbor of an upstream Client, it still supports the route optimization functions discussed above by maintaining an MFIB and assigning MFVI values. When the Client processes an IPv6 ND NS/NA message that includes a Multilink Forwarding Parameters sub-option, it writes its MFVI information as the first/last MFVI list entry the same as for the single Client case discussed above.¶
The Client then forwards the NS/NA message to the next Client in the extended OMNI link toward the FHS/LHS Proxy/Server, which records the MFVI value then overwrites the MFVI list entry with its own MFVI value. This process iteratively continues until the Client that will forward the NS/NA message to the FHS/LHS Proxy/Server is reached, at which point the NS/NA MFVI list entries are populated by the intermediate nodes on the path to the LHS/FHS the same as discussed above.¶
In this way, each Client in the extended OMNI link discovers the A/B MFVIs of the next/previous Client without intruding into the Multilink Forwarding Parameters MFVI list. Therefore the list can remain fixed at 5 entries even though the Client-to-Client OMNI link extension can be arbitrarily long. As a consequence, route optimization is not possible between consecutive Client members of the extended OMNI link but becomes possible at the Internetworking border that separates the FHS and LHS elements.¶
When a Client forwards a packet from a Host or another Client connected to one of its downstream ENETs to a peer within the same downstream ENET, the Client returns an IPv6 ND Redirect message to inform the source that the target can be reached directly. The contents of the Redirect message are the same as specified in [RFC4861].¶
In the same fashion, when a Proxy/Server forwards a packet from a Host or Client connected to one of its downstream ANETs to a peer within the same downstream ANET, the Proxy/Server returns an IPv6 ND Redirect message.¶
All other route optimization functions are conducted per the NS/NA messaging discussed in the previous sections.¶
AERO nodes perform Neighbor Unreachability Detection (NUD) per [RFC4861] either reactively in response to persistent link-layer errors (see Section 3.11) or proactively to confirm reachability. The NUD algorithm is based on periodic control message exchanges and may further be seeded by IPv6 ND hints of forward progress, but care must be taken to avoid inferring reachability based on spoofed information. For example, IPv6 ND message exchanges that include authentication codes and/or in-window Identifications may be considered as acceptable hints of forward progress, while spurious random carrier packets should be ignored.¶
AERO nodes can perform NS/NA(NUD) exchanges over the OMNI link secured spanning tree (i.e. the same as described above) to test reachability without risk of DoS attacks from nodes pretending to be a neighbor. These NS/NA(NUD) messages use the unicast LLAs and ULAs of the parties involved in the NUD test. When only reachability information is required without updating any other NCE state, AERO nodes can instead perform NS/NA(NUD) exchanges directly between neighbors without employing the secured spanning tree as long as they include in-window Identifications and either an authentication signature or checksum.¶
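The in-window Identification check itself can be sketched as follows (non-normative Python; the window parameters are illustrative and the actual window maintenance rules are those of [I-D.templin-6man-omni]):¶
   def identification_in_window(ident: int, window_base: int,
                                window_size: int, modulus: int = 2**32) -> bool:
       """Accept an Identification only if it falls within the current
       receive window, using modular (wrap-around) arithmetic."""
       return (ident - window_base) % modulus < window_size

   # Example: window covers 0xFFFFFFF0 .. 0x0000000F (wrapping past zero)
   assert identification_in_window(0x00000005, 0xFFFFFFF0, 32)
   assert not identification_in_window(0x00000020, 0xFFFFFFF0, 32)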
After an ROR directs an ROS to a target neighbor with one or more link-layer addresses, either node may invoke multilink forwarding state initialization to establish authentic intermediate node state between specific underlay interface pairs which also tests their reachability. Thereafter, either node acting as the source may perform additional reachability probing through NS(NUD) messages over the SRT secured or unsecured spanning tree, or through NS(NUD) messages sent directly to an underlay interface of the target itself. While testing a target underlay interface, the source can optionally continue to forward carrier packets via alternate interfaces, maintain a small queue of carrier packets until target reachability is confirmed or include them as trailing data with the NS(NUD) in an OAL super-packet [I-D.templin-6man-omni].¶
NS(NUD) messages are encapsulated, fragmented and transmitted as carrier packets the same as for ordinary original IP data packets, however the encapsulated IPv6 source and destination addresses are the LLA of the source and either the ADM-LLA of the LHS Proxy/Server or the MNP-LLA of the target itself. The source encapsulates the NS(NUD) message the same as described in Section 3.13.2 and includes an Interface Attributes sub-option with omIndex set to identify its underlay interface used for forwarding. The source then includes an in-window Identification, fragments the OAL packet and forwards the resulting carrier packets into the unsecured spanning tree, directly to the target if it is in the local segment or directly to a Gateway in the local segment.¶
When the target receives the NS(NUD) carrier packets, it verifies that it has a NCE for this source and that the Identification is in-window, then submits the carrier packets for reassembly. The target then verifies the authentication signature or checksum, then searches for Interface Attributes in its NCE for the source that match the NS(NUD) for the NA(NUD) reply. The target then prepares the NA(NUD) with the source and destination LLAs reversed, encapsulates and sets the OAL source and destination, includes an Interface Attributes sub-option in the NA(NUD) to identify the omIndex of the underlay interface the NS(NUD) arrived on and sets the Target Address to the same value included in the NS(NUD). The target next sets the R flag to 1, the S flag to 1 and the O flag to 1, then selects an in-window Identification for the source and performs fragmentation. The node then forwards the carrier packets into the unsecured spanning tree, directly to the source if it is in the local segment or directly to a Gateway in the local segment.¶
When the source receives the NA(NUD), it marks the target underlay interface tested as "trusted". Note that underlay interface states are maintained independently of the overall NCE REACHABLE state, and that a single NCE may have multiple target underlay interfaces in various "trusted/untrusted" states while the NCE state as a whole remains REACHABLE.¶
AERO is a fully Distributed Mobility Management (DMM) service in which each Proxy/Server is responsible for only a small subset of the Clients on the OMNI link. This is in contrast to a Centralized Mobility Management (CMM) service, where only one or a few centralized entities manage mobility for large Client populations. Clients coordinate with their associated FHS and Hub Proxy/Servers via RS/RA exchanges to maintain the DMM profile, and the AERO routing system tracks all current Client/Proxy/Server peering relationships.¶
Hub Proxy/Servers provide a designated router service for their dependent Clients, while FHS Proxy/Servers provide a proxy conduit between the Client and both the Hub and OMNI link in general. Clients are responsible for maintaining neighbor relationships with their Proxy/Servers through periodic RS/RA exchanges, which also serves to confirm neighbor reachability. When a Client's underlay interface attributes change, the Client is responsible for updating the Hub Proxy/Server through new RS/RA exchanges using the FHS Proxy/Server as a first-hop conduit. The FHS Proxy/Server can also act as a proxy to perform some IPv6 ND exchanges on the Client's behalf without consuming bandwidth on the Client underlay interface.¶
Mobility management considerations are specified in the following sections.¶
RORs and ROSs accommodate Client mobility and/or multilink change events by sending secured uNA messages to each active neighbor. When an ROR/ROS sends a uNA message, it sets the IPv6 source address to its own LLA, sets the destination address to the neighbor's {ADM,MNP}-LLA and sets the Target Address to the Client's MNP-LLA. The ROR/ROS also includes an OMNI option with OMNI extension header Preflen set to the prefix length associated with the Client's MNP-LLA, includes Interface Attributes and Traffic Selectors for the Client's underlay interfaces and includes an authentication signature if necessary. The ROR/ROS then sets the uNA R flag to 1, S flag to 0 and O flag to 1, then encapsulates the message in an OAL header with source set to its own ULA and destination set to its FHS Proxy/Server's ADM-ULA. When the FHS Proxy/Server receives the uNA, it reassembles, verifies the authentication signature, then changes the destination to the ULA corresponding to the LLA destination and forwards the uNA into the secured spanning tree.¶
As discussed in Section 7.2.6 of [RFC4861], the transmission and reception of uNA messages is unreliable but provides a useful optimization. In well-connected Internetworks with robust data links uNA messages will be delivered with high probability, but in any case the ROR/ROS can optionally send up to MAX_NEIGHBOR_ADVERTISEMENT uNAs to each neighbor to increase the likelihood that at least one will be received. Alternatively, the ROR/ROS can set the PNG flag in the uNA OMNI option header to request a uNA acknowledgement as specified in [I-D.templin-6man-omni].¶
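The two delivery strategies can be summarized with the following non-normative sketch (Python; the send callable is hypothetical, and MAX_NEIGHBOR_ADVERTISEMENT and RETRANS_TIMER take their [RFC4861] defaults):¶
   import time

   MAX_NEIGHBOR_ADVERTISEMENT = 3   # RFC 4861 default
   RETRANS_TIMER = 1.0              # RFC 4861 default, in seconds

   def deliver_una(send_una, use_png: bool = False) -> None:
       """Either repeat the (unreliable) uNA a few times, or send one uNA
       with the PNG flag set to request an acknowledgement."""
       if use_png:
           send_una(png=True)       # peer returns a uNA acknowledgement
           return
       for _ in range(MAX_NEIGHBOR_ADVERTISEMENT):
           send_una(png=False)      # repetition raises delivery probability
           time.sleep(RETRANS_TIMER)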
When the ROR/ROS Proxy/Server receives a uNA message prepared as above, if the uNA destination was its own ADM-LLA, the Proxy/Server uses the included OMNI option information to update its NCE for the target but does not reset ReachableTime since the receipt of a uNA message does not provide confirmation that any forward paths to the target Client are working. If the destination was the MNP-LLA of the ROR/ROS Client, the Proxy/Server instead changes the OAL source to its own ADM-ULA, includes an authentication signature if necessary, includes an in-window Identification for this Client and forwards the message to the Client. Finally, if the uNA message PNG flag was set, the node that processes the uNA returns a uNA acknowledgement as specified in [I-D.templin-6man-omni].¶
When a Client needs to change its underlay Interface Attributes and/or Traffic Selectors (e.g., due to a mobility event), the Client sends an RS message to its Hub Proxy/Server via a first-hop FHS Proxy/Server, if necessary. The RS includes an OMNI option with an Interface Attributes sub-option with the omIndex and with new link quality and any other information.¶
Note that the FHS Proxy/Server may change as a result of the underlay interface change. If the Client supplies the address of the former FHS Proxy/Server, the new FHS Proxy/Server can send a departure indication (see below); otherwise, any stale state in the former FHS Proxy/Server will simply be purged when ReachableTime expires, with no effect on the Hub Proxy/Server.¶
Up to MAX_RTR_SOLICITATIONS RS messages MAY be sent in parallel with sending carrier packets containing user data in case one or more RAs are lost. If all RAs are lost, the Client SHOULD re-associate with a new Proxy/Server.¶
After performing the RS/RA exchange, the Client sends uNA messages to all neighbors the same as described in the previous section.¶
When a Client needs to bring new underlay interfaces into service (e.g., when it activates a new data link), it sends an RS message to the Hub Proxy/Server via a FHS Proxy/Server for the underlay interface (if necessary) with an OMNI option that includes an Interface Attributes sub-option with appropriate link quality values and with link-layer address information for the new link. The Client then again sends uNA messages to all neighbors the same as described above.¶
When a Client needs to deactivate an existing underlay interface, it sends a uNA message toward the Hub Proxy/Server via an FHS Proxy/Server with an OMNI option with appropriate Interface Attributes values for the deactivated link - in particular, the link quality value 0 assures that neighbors will cease to use the link.¶
If the Client needs to send uNA messages over an underlay interface other than the one being deactivated, it MUST include Interface Attributes with appropriate link quality values for any underlay interfaces being deactivated. The Client then again sends uNA messages to all neighbors the same as described above.¶
Note that when a Client deactivates an underlay interface, neighbors that receive the ensuing uNA messages need not purge all references for the underlay interface from their neighbor cache entries. The Client may reactivate or reuse the underlay interface and/or its omIndex at a later point in time, when it will send new RS messages to an FHS Proxy/Server with fresh interface parameters to update any neighbors.¶
The Client performs the procedures specified in Section 3.12.2 when it first associates with a new Hub Proxy/Server or renews its association with an existing Hub Proxy/Server.¶
When a Client associates with a new Hub Proxy/Server, it sends RS messages to register its underlay interfaces with the new Hub while including the 32 least significant bits of the old Hub's ADM-LLA in the "Old Hub Proxy/Server MSID" field of a Proxy/Server Departure OMNI sub-option. When the new Hub Proxy/Server returns the RA message via the FHS Proxy/Server (acting as a Proxy), the FHS Proxy/Server sends a uNA to the old Hub Proxy/Server (i.e., if the MSID is non-zero and different from its own). The uNA has the MNP-LLA of the Client as the source and the ADM-LLA of the old Hub as the destination, with OMNI extension header Preflen set to 0. The FHS Proxy/Server encapsulates the uNA in an OAL header with the ADM-ULA of the new Hub as the source and the ADM-ULA of the old Hub as the destination, then fragments and sends the carrier packets via the secured spanning tree.¶
When the old Hub Proxy/Server receives the uNA, it changes the Client's NCE state to DEPARTED, resets DepartTime and caches the new Hub Proxy/Server ADM-ULA. After a short delay (e.g., 2 seconds) the old Hub Proxy/Server withdraws the Client's MNP from the routing system. While in the DEPARTED state, the old Hub Proxy/Server forwards any carrier packets received via the secured spanning tree destined to the Client's MNP-ULA to the new Hub Proxy/Server's ADM-ULA. After DepartTime expires, the old Hub Proxy/Server deletes the Client's NCE.¶
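The old Hub Proxy/Server's DEPARTED-state bookkeeping might look like the following non-normative sketch (Python; the DepartTime default shown is an assumption, and names are illustrative):¶
   import time
   from dataclasses import dataclass, field

   DEPART_TIME = 40.0     # seconds; an assumed default for DepartTime
   WITHDRAW_DELAY = 2.0   # short delay before withdrawing the MNP route

   @dataclass
   class DepartedNCE:
       """State kept for a Client that has moved to a new Hub Proxy/Server."""
       client_mnp_ula: str
       new_hub_adm_ula: str
       departed_at: float = field(default_factory=time.monotonic)

       def forward_target(self) -> str:
           # While DEPARTED, carrier packets for the Client's MNP-ULA are
           # redirected toward the new Hub Proxy/Server's ADM-ULA.
           return self.new_hub_adm_ula

       def should_withdraw_route(self) -> bool:
           # The MNP is withdrawn from the routing system after a short
           # delay (e.g., 2 seconds) in the DEPARTED state.
           return time.monotonic() - self.departed_at > WITHDRAW_DELAY

       def expired(self) -> bool:
           # After DepartTime the NCE is deleted.
           return time.monotonic() - self.departed_at > DEPART_TIME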
Mobility events may also cause a Client to change to a new FHS Proxy/Server over a specific underlay interface at any time such that a Client RS/RA exchange over the underlay interface will engage the new FHS Proxy/Server instead of the old. The Client can arrange to inform the old FHS Proxy/Server of the departure by including a Proxy/Server Departure sub-option with an MSID for the "Old FHS Proxy/Server MSID", and the new FHS Proxy/Server will issue a uNA using the same procedures as outlined for the Hub above while using its own ADM-ULA as the source address. This can often result in successful delivery of packets that would otherwise be lost due to the mobility event.¶
Clients SHOULD NOT move rapidly between Hub Proxy/Servers in order to avoid causing excessive oscillations in the AERO routing system. Examples of when a Client might wish to change to a different Hub Proxy/Server include a Hub Proxy/Server that has gone unreachable, topological movements of significant distance, movement to a new geographic region, movement to a new OMNI link segment, etc.¶
Clients provide an IGMP (IPv4) [RFC2236] or MLD (IPv6) [RFC3810] proxy service for their ENETs and/or hosted applications [RFC4605] and act as a Protocol Independent Multicast - Sparse-Mode (PIM-SM, or simply "PIM") Designated Router (DR) [RFC7761] on the OMNI link. Proxy/Servers act as OMNI link PIM routers for Clients on ANET, VPNed or Direct interfaces, and Relays also act as OMNI link PIM routers on behalf of nodes on other links/networks.¶
Clients on VPNed, Direct or ANET underlay interfaces for which the ANET has deployed native multicast services forward IGMP/MLD messages into the ANET. The IGMP/MLD messages may be further forwarded by a first-hop ANET access router acting as an IGMP/MLD-snooping switch [RFC4541], then ultimately delivered to an ANET Proxy/Server. The FHS Proxy/Server then acts as an ROS to send NS(AR) messages to an ROR for the multicast source. Clients on INET and ANET underlay interfaces without native multicast services instead send NS(AR) messages as an ROS to cause their FHS Proxy/Server to forward the message to an ROR. When the ROR receives an NA(AR) response, it initiates PIM protocol messaging according to the Source-Specific Multicast (SSM) and Any-Source Multicast (ASM) operational modes as discussed in the following sections.¶
When an ROS "X" (i.e., either a Client or Proxy/Server) acting as PIM router receives a Join/Prune message from a node on its downstream interfaces containing one or more ((S)ource, (G)roup) pairs, it updates its Multicast Routing Information Base (MRIB) accordingly. For each S belonging to a prefix reachable via X's non-OMNI interfaces, X then forwards the (S, G) Join/Prune to any PIM routers on those interfaces per [RFC7761].¶
For each S belonging to a prefix reachable via X's OMNI interface, X sends an NS(AR) message (see: Section 3.13) using its own LLA as the source address, the solicited node multicast address corresponding to S as the destination and the LLA of S as the target address. X then encapsulates the NS(AR) in an OAL header with source address set to its own ULA and destination address set to the ULA for S, then forwards the message into the secured spanning tree which delivers it to ROR "Y" that services S. The resulting NA(AR) will return an OMNI option with Interface Attributes for any underlay interfaces that are currently servicing S.¶
When X processes the NA(AR) it selects one or more underlay interfaces for S and performs an NS/NA multilink route optimization exchange over the secured spanning tree while including a PIM Join/Prune message for each multicast group of interest in the OMNI option. If S is located behind any Proxys "Z"*, each Z* then updates its MRIB accordingly and maintains the LLA of X as the next hop in the reverse path. Since Gateways forward messages not addressed to themselves without examining them, this means that the (reverse) multicast tree path is simply from each Z* (and/or S) to X with no other multicast-aware routers in the path.¶
Following the initial combined Join/Prune and NS/NA messaging, X maintains a NCE for each S the same as if X was sending unicast data traffic to S. In particular, X performs additional NS/NA exchanges to keep the NCE alive for up to t_periodic seconds [RFC7761]. If no new Joins are received within t_periodic seconds, X allows the NCE to expire. Finally, if X receives any additional Join/Prune messages for (S,G) it forwards the messages over the secured spanning tree.¶
Client C that holds an MNP for source S may later depart from a first Proxy/Server Z1 and/or connect via a new Proxy/Server Z2. In that case, Y sends a uNA message to X the same as specified for unicast mobility in Section 3.15. When X receives the uNA message, it updates its NCE for the LLA for source S and sends new Join messages in NS/NA exchanges addressed to the new target Client underlay interface connection for S. There is no requirement to send any Prune messages to old Proxy/Server Z1 since source S will no longer source any multicast data traffic via Z1. Instead, the multicast state for (S,G) in Proxy/Server Z1 will soon expire since no new Joins will arrive.¶
When an ROS X acting as a PIM router receives Join/Prune messages from a node on its downstream interfaces containing one or more (*,G) pairs, it updates its Multicast Routing Information Base (MRIB) accordingly. X first performs an NS/NA(AR) exchange to receive route optimization information for Rendezvous Point (RP) R for each G. X then includes a copy of each Join/Prune message in the OMNI option of an NS message with its own LLA as the source address and the LLA for R as the destination address, then encapsulates the NS message in an OAL header with its own ULA as the source and the ADM-ULA of R's Proxy/Server as the destination then sends the message into the secured spanning tree.¶
For each source S that sends multicast traffic to group G via R, Client S* that aggregates S (or its Proxy/Server) encapsulates the original IP packets in PIM Register messages, includes the PIM Register messages in the OMNI options of uNA messages, performs OAL encapsulation and fragmentation then forwards the resulting carrier packets with Identification values within the receive window for Client R* that aggregates R. Client R* may then elect to send a PIM Join to S* in the OMNI option of a uNA over the secured spanning tree. This will result in an (S,G) tree rooted at S* with R as the next hop so that R will begin to receive two copies of the original IP packet; one native copy from the (S, G) tree and a second copy from the pre-existing (*, G) tree that still uses uNA PIM Register encapsulation. R can then issue a uNA PIM Register-stop message over the secured spanning tree to suppress the Register-encapsulated stream. At some later time, if Client S* moves to a new Proxy/Server, it resumes sending original IP packets via uNA PIM Register encapsulation via the new Proxy/Server.¶
At the same time, as multicast listeners discover individual S's for a given G, they can initiate an (S,G) Join for each S under the same procedures discussed in Section 3.16.1. Once the (S,G) tree is established, the listeners can send (S, G) Prune messages to R so that multicast original IP packets for group G sourced by S will only be delivered via the (S, G) tree and not from the (*, G) tree rooted at R. All mobility considerations discussed for SSM apply.¶
Bi-Directional PIM (BIDIR-PIM) [RFC5015] provides an alternate approach to ASM that treats the Rendezvous Point (RP) as a Designated Forwarder (DF). Further considerations for BIDIR-PIM are out of scope.¶
An AERO Client can connect to multiple OMNI links the same as for any data link service. In that case, the Client maintains a distinct OMNI interface for each link, e.g., 'omni0' for the first link, 'omni1' for the second, 'omni2' for the third, etc. Each OMNI link would include its own distinct set of Gateways and Proxy/Servers, thereby providing redundancy in case of failures.¶
Each OMNI link could utilize the same or different ANET connections. The links can be distinguished at the link-layer via the SRT prefix in a similar fashion as for Virtual Local Area Network (VLAN) tagging (e.g., IEEE 802.1Q) and/or through assignment of distinct sets of MSPs on each link. This gives rise to the opportunity for supporting multiple redundant networked paths (see: Section 3.2.4).¶
The Client's IP layer can select the outgoing OMNI interface appropriate for a given traffic profile while (in the reverse direction) correspondent nodes must have some way of steering their original IP packets destined to a target via the correct OMNI link.¶
In a first alternative, if each OMNI link services different MSPs the Client can receive a distinct MNP from each of the links. IP routing will therefore assure that the correct OMNI link is used for both outbound and inbound traffic. This can be accomplished using existing technologies and approaches, and without requiring any special supporting code in correspondent nodes or Gateways.¶
In a second alternative, if each OMNI link services the same MSP(s) then each link could assign a distinct "OMNI link Anycast" address that is configured by all Gateways on the link. Correspondent nodes can then perform Segment Routing to select the correct SRT, which will then direct the original IP packet over multiple hops to the target.¶
AERO Client MNs and INET correspondent nodes consult the Domain Name System (DNS) the same as for any Internetworking node. When correspondent nodes and Client MNs use different IP protocol versions (e.g., IPv4 correspondents and IPv6 MNs), the INET DNS must maintain A records for IPv4 address mappings to MNs which must then be populated in Relay NAT64 mapping caches. In that way, an IPv4 correspondent node can send original IPv4 packets to the IPv4 address mapping of the target MN, and the Relay will translate the IPv4 header and destination address into an IPv6 header and IPv6 destination address of the MN.¶
When an AERO Client registers with an AERO Proxy/Server, the Proxy/Server can return the address(es) of DNS servers in RDNSS options [RFC6106]. The DNS server provides the IP addresses of other MNs and correspondent nodes in AAAA records for IPv6 or A records for IPv4.¶
OAL encapsulation ensures that dissimilar INET partitions can be joined into a single unified OMNI link, even though the partitions themselves may have differing protocol versions and/or incompatible addressing plans. However, a commonality can be achieved by incrementally distributing globally routable (i.e., native) IP prefixes to eventually reach all nodes (both mobile and fixed) in all OMNI link segments. This can be accomplished by incrementally deploying AERO Gateways on each INET partition, with each Gateway distributing its MNPs and/or discovering non-MNP IP GUA prefixes on its INET links.¶
This gives rise to the opportunity to eventually distribute native IP addresses to all nodes, and to present a unified OMNI link view even if the INET partitions remain in their current protocol and addressing plans. In that way, the OMNI link can serve the dual purpose of providing a mobility/multilink service and a transition/coexistence service. Or, if an INET partition is transitioned to a native IP protocol version and addressing scheme that is compatible with the OMNI link MNP-based addressing scheme, the partition and OMNI link can be joined by Gateways.¶
Relays that connect INETs/ENETs with dissimilar IP protocol versions may need to employ a network address and protocol translation function such as NAT64 [RFC6146].¶
In environments where rapid failure recovery is required, Proxy/Servers and Gateways SHOULD use Bidirectional Forwarding Detection (BFD) [RFC5880]. Nodes that use BFD can quickly detect and react to failures so that cached information is re-established through alternate nodes. BFD control messaging is carried only over well-connected ground domain networks (i.e., and not low-end radio links) and can therefore be tuned for rapid response.¶
Proxy/Servers and Gateways maintain BFD sessions in parallel with their BGP peerings. If a Proxy/Server or Gateway fails, BGP peers will quickly re-establish routes through alternate paths the same as for common BGP deployments. Similarly, Proxys maintain BFD sessions with their associated Gateways even though they do not establish BGP peerings with them.¶
In some use cases, it is desirable, beneficial and efficient for the Client to receive a constant MNP that travels with the Client wherever it moves. For example, this would allow air traffic controllers to easily track aircraft, etc. In other cases, however (e.g., intelligent transportation systems), the MN may be willing to sacrifice a modicum of efficiency in order to have time-varying MNPs that can be changed every so often to defeat adversarial tracking.¶
The DHCPv6 service offers a way for Clients that desire time-varying MNPs to obtain short-lived prefixes (e.g., on the order of a small number of minutes). In that case, the identity of the Client would not be bound to the MNP but rather to a Node Identification value (see: [I-D.templin-6man-omni]) to be used as the Client ID seed for MNP prefix delegation. The Client would then be obligated to renumber its internal networks whenever its MNP (and therefore also its MNP-LLA) changes. This should not present a challenge for Clients with automated network renumbering services; however, it limits the duration of ongoing sessions that would prefer to use a constant address.¶
An early AERO implementation based on OpenVPN (https://openvpn.net/) was announced on the v6ops mailing list on January 10, 2018 and an initial public release of the AERO proof-of-concept source code was announced on the intarea mailing list on August 21, 2015.¶
Many AERO/OMNI functions are implemented and undergoing final integration. OAL fragmentation/reassembly buffer management code has been cleared for public release.¶
The IANA has assigned the UDP port number "8060" for an earlier experimental first version of AERO [RFC6706]. This document together with [I-D.templin-6man-omni] reclaims UDP port number "8060" as the service port for UDP/IP encapsulation. This document makes no request of IANA, since [I-D.templin-6man-omni] already provides instructions. (Note: although [RFC6706] was not widely implemented or deployed, it need not be obsoleted since its messages use the invalid ICMPv6 message type number '0' which implementations of this specification can easily distinguish and ignore.)¶
No further IANA actions are required.¶
AERO Gateways configure secured tunnels with AERO Proxy/Servers and Relays within their local OMNI link segments. Applicable secured tunnel alternatives include IPsec [RFC4301], TLS/SSL [RFC8446], DTLS [RFC6347], WireGuard [WG], etc. The AERO Gateways of all OMNI link segments in turn configure secured tunnels for their neighboring AERO Gateways in a secured spanning tree topology. Therefore, control messages exchanged between any pair of OMNI link neighbors over the secured spanning tree are already protected.¶
To prevent spoofing vectors, Proxy/Servers MUST discard without responding to any unsecured NS/NA(AR) messages. Also, Proxy/Servers MUST discard without forwarding any original IP packets received from one of their own Clients (whether directly or following OAL reassembly) with a source address that does not match the Client's MNP and/or a destination address that does match the Client's MNP. Finally, Proxy/Servers MUST discard without forwarding any carrier packets with an OAL source and destination that both match the same MNP.¶
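The Client-facing checks can be sketched as follows (non-normative Python; the addresses and prefix used in the example are illustrative):¶
   import ipaddress

   def accept_from_client(src: str, dst: str, client_mnp: str) -> bool:
       """Accept an original IP packet received from a Client only if its
       source falls within the Client's MNP and its destination does not
       (packets failing either check are silently discarded)."""
       mnp = ipaddress.ip_network(client_mnp)
       return (ipaddress.ip_address(src) in mnp and
               ipaddress.ip_address(dst) not in mnp)

   # Example with an illustrative IPv6 MNP
   assert accept_from_client("2001:db8:1::1", "2001:db8:2::1", "2001:db8:1::/48")
   assert not accept_from_client("2001:db8:9::1", "2001:db8:2::1", "2001:db8:1::/48")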
For INET partitions that require strong security in the data plane, two options for securing communications are 1) disable route optimization so that all traffic is conveyed over secured tunnels, or 2) enable on-demand secure tunnel creation between Client neighbors. Option 1) would result in longer routes than necessary and impose traffic concentration on critical infrastructure elements. Option 2) could be coordinated between Clients using NS/NA messages with OMNI Host Identity Protocol (HIP) "Initiator/Responder" message sub-options [RFC7401][I-D.templin-6man-omni] to create a secured tunnel on demand, or could use the QUIC-TLS protocol to establish a secured connection [RFC9000][RFC9001][RFC9002].¶
AERO Clients that connect to secured ANETs need not apply security to their IPv6 ND messages, since the messages will be authenticated and forwarded by a perimeter Proxy/Server that applies security on its INET-facing interface as part of the secured spanning tree (see above). AERO Clients connected to the open INET can use network and/or transport layer security services such as VPNs or can by some other means establish a direct link to a Proxy/Server. When a VPN or direct link is impractical, however, INET Clients and Proxy/Servers SHOULD include and verify authentication signatures for their IPv6 ND messages as specified in [I-D.templin-6man-omni].¶
Application endpoints SHOULD use transport-layer (or higher-layer) security services such as QUIC-TLS, TLS/SSL, DTLS or SSH [RFC4251] to assure the same level of protection as for critical secured Internet services. AERO Clients that require host-based VPN services SHOULD use network and/or transport layer security services such as IPsec, TLS/SSL, DTLS, etc. AERO Proxys and Proxy/Servers can also provide a network-based VPN service on behalf of the Client, e.g., if the Client is located within a secured enclave and cannot establish a VPN on its own behalf.¶
AERO Proxy/Servers and Gateways present targets for traffic amplification Denial of Service (DoS) attacks. This concern is no different than for widely-deployed VPN security gateways in the Internet, where attackers could send spoofed packets to the gateways at high data rates. This can be mitigated through the AERO/OMNI data origin authentication procedures, as well as by connecting Proxy/Servers and Gateways over dedicated links with no connections to the Internet and/or by permitting connections to the Internet only through well-managed firewalls. Traffic amplification DoS attacks can also target an AERO Client's low data rate links. This is a concern not only for Clients located on the open Internet but also for Clients in secured enclaves. AERO Proxy/Servers and Proxys can institute rate limits that protect Clients from receiving packet floods that could DoS low data rate links.¶
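As one non-normative example of such rate limiting, a Proxy/Server could apply a simple per-Client token bucket to traffic forwarded toward a low data rate link. The sketch below is illustrative only; the rate and burst parameters are deployment-specific assumptions rather than values defined by this specification.¶
   import time

   class ClientRateLimiter:
       # Illustrative per-Client token bucket; rate_bps and burst_bits
       # are deployment-specific assumptions.
       def __init__(self, rate_bps, burst_bits):
           self.rate = rate_bps        # tokens (bits) replenished per second
           self.burst = burst_bits     # maximum bucket depth in bits
           self.tokens = burst_bits
           self.last = time.monotonic()

       def allow(self, pkt_bits):
           now = time.monotonic()
           self.tokens = min(self.burst,
                             self.tokens + (now - self.last) * self.rate)
           self.last = now
           if self.tokens >= pkt_bits:
               self.tokens -= pkt_bits
               return True             # forward toward the Client
           return False                # drop; protects low data rate links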
AERO Relays must implement ingress filtering to avoid a spoofing attack in which spurious messages with ULA addresses are injected into an OMNI link from an outside attacker. AERO Clients MUST ensure that their connectivity is not used by unauthorized nodes on their ENETs to gain access to a protected network, i.e., AERO Clients that act as routers MUST NOT provide routing services for unauthorized nodes. (This concern is no different than for ordinary hosts that receive an IP address delegation but then "share" the address with other nodes via some form of Internet connection sharing such as tethering.)¶
The PRL MUST be well-managed and secured from unauthorized tampering, even though the list contains only public information. The PRL can be conveyed to the Client in a similar fashion as in [RFC5214] (e.g., through layer 2 data link login messaging, secure upload of a static file, DNS lookups, etc.).¶
The AERO service for open INET Clients depends on a public key distribution service in which Client public keys and identities are maintained in a shared database accessible to all open INET Proxy/Servers. Similarly, each Client must be able to determine the public key of each Proxy/Server, e.g., by consulting an online database. When AERO nodes register their public keys indexed by a unique Host Identity Tag (HIT) [RFC7401] in a distributed database such as the DNS, and use the HIT as an identity for applying IPv6 ND message authentication signatures, a means for public key attestation is available.¶
Security considerations for IPv6 fragmentation and reassembly are discussed in [I-D.templin-6man-omni]. In environments where spoofing is considered a threat, OMNI nodes SHOULD employ Identification window synchronization and OAL destinations SHOULD configure an (end-system-based) firewall.¶
SRH authentication facilities are specified in [RFC8754]. Security considerations for accepting link-layer ICMP messages and reflected packets are discussed throughout the document.¶
Discussions in the IETF, aviation standards communities and private exchanges helped shape some of the concepts in this work. Individuals who contributed insights include Mikael Abrahamsson, Mark Andrews, Fred Baker, Bob Braden, Stewart Bryant, Scott Burleigh, Brian Carpenter, Wojciech Dec, Pavel Drasil, Ralph Droms, Adrian Farrel, Nick Green, Sri Gundavelli, Brian Haberman, Bernhard Haindl, Joel Halpern, Tom Herbert, Bob Hinden, Sascha Hlusiak, Lee Howard, Christian Huitema, Zdenek Jaron, Andre Kostur, Hubert Kuenig, Eliot Lear, Ted Lemon, Andy Malis, Satoru Matsushima, Tomek Mrugalski, Thomas Narten, Madhu Niraula, Alexandru Petrescu, Behcet Saikaya, Michal Skorepa, Dave Thaler, Joe Touch, Bernie Volz, Ryuji Wakikawa, Tony Whyman, Lloyd Wood and James Woodyatt. Members of the IESG also provided valuable input during their review process that greatly improved the document. Special thanks go to Stewart Bryant, Joel Halpern and Brian Haberman for their shepherding guidance during the publication of the AERO first edition.¶
This work has further been encouraged and supported by Boeing colleagues including Akash Agarwal, Kyle Bae, M. Wayne Benson, Dave Bernhardt, Cam Brodie, John Bush, Balaguruna Chidambaram, Irene Chin, Bruce Cornish, Claudiu Danilov, Don Dillenburg, Joe Dudkowski, Wen Fang, Samad Farooqui, Anthony Gregory, Jeff Holland, Seth Jahne, Brian Jaury, Greg Kimberly, Ed King, Madhuri Madhava Badgandi, Laurel Matthew, Gene MacLean III, Kyle Mikos, Rob Muszkiewicz, Sean O'Sullivan, Satish Raghavendran, Vijay Rajagopalan, Greg Saccone, Bhargava Raman Sai Prakash, Rod Santiago, Madhanmohan Savadamuthu, Kent Shuey, Brian Skeen, Mike Slane, Carrie Spiker, Katie Tran, Brendan Williams, Amelia Wilson, Julie Wulff, Yueli Yang, Eric Yeh and other members of the Boeing mobility, networking and autonomy teams. Akash Agarwal, Kyle Bae, Wayne Benson, Madhuri Madhava Badgandi, Vijayasarathy Rajagopalan, Bhargava Raman Sai Prakash, Katie Tran and Eric Yeh are especially acknowledged for their work on the AERO implementation. Chuck Klabunde is honored and remembered for his early leadership, and we mourn his untimely loss.¶
This work was inspired by the support and encouragement of countless outstanding colleagues, managers and program directors over the span of many decades. Beginning in the late 1980s, the Digital Equipment Corporation (DEC) Ultrix Engineering and DECnet Architects groups identified early issues with fragmentation and bridging links with diverse MTUs. In the early 1990s, engagements at DEC Project Sequoia at UC Berkeley and the DEC Western Research Lab in Palo Alto included investigations into large-scale networked filesystems, ATM vs Internet and network security proxies. In the mid-1990s to early 2000s, employment at the NASA Ames Research Center (Sterling Software) and SRI International supported early investigations of IPv6, ONR UAV Communications and the IETF. Employment at Nokia, where important IETF documents were published, gave way to a present-day engagement with The Boeing Company. The work matured at Boeing through major programs including Future Combat Systems, Advanced Airplane Program, DTN for the International Space Station, Mobility Vision Lab, CAST, Caravan, Airplane Internet of Things, the NASA UAS/CNS program, the FAA/ICAO ATN/IPS program and many others. An attempt to name all who gave support and encouragement would double the current document size and result in many unintentional omissions - but to all a humble thanks.¶
Earlier works on NBMA tunneling approaches are found in [RFC2529][RFC5214][RFC5569].¶
Many of the constructs presented in this second edition of AERO are based on the author's earlier works, including:¶
Note that these works cite numerous earlier efforts that are not also cited here due to space limitations. The authors of those earlier works are acknowledged for their insights.¶
This work is aligned with the NASA Safe Autonomous Systems Operation (SASO) program under NASA contract number NNA16BD84C.¶
This work is aligned with the FAA as per the SE2025 contract number DTFAWA-15-D-00030.¶
This work is aligned with the Boeing Commercial Airplanes (BCA) Internet of Things (IoT) and autonomy programs.¶
This work is aligned with the Boeing Information Technology (BIT) MobileNet program.¶
AERO can be applied to a multitude of Internetworking scenarios, with each having its own adaptations. The following considerations are provided as non-normative guidance:¶
Route optimization as discussed in Section 3.13 results in the creation of NCEs. The NCE state is set to REACHABLE for at most ReachableTime seconds. In order to refresh the NCE lifetime before the ReachableTime timer expires, the specification requires implementations to issue a new NS/NA(AR) exchange to reset ReachableTime while data packets are still flowing. However, the decision of when to initiate a new NS/NA(AR) exchange, and whether to perpetuate the process, is left as an implementation detail.¶
One possible strategy is to monitor the NCE, watching for data packets, for (ReachableTime - 5) seconds. If any data packets have been sent to the neighbor within this timeframe, then send an NS(AR) to receive a new NA(AR). If no data packets have been sent, wait for 5 additional seconds and send an immediate NS(AR) if any data packets are sent within this "expiration pending" 5-second window. If no additional data packets are sent within the 5-second window, reset the NCE state to STALE.¶
The monitoring of the neighbor data packet traffic therefore becomes an ongoing process during the NCE lifetime. If the NCE expires, future data packets will trigger a new NS/NA(AR) exchange while the packets themselves are delivered over a longer path until route optimization state is re-established.¶
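The following non-normative Python sketch illustrates the monitoring strategy described above. ReachableTime is shown with an arbitrary example value, the NCE object is hypothetical, and packets_sent_since() and send_ns_ar() are hypothetical helpers standing in for implementation-specific traffic accounting and IPv6 ND messaging.¶
   import time

   REACHABLE_TIME = 30    # seconds; example value only

   def refresh_nce(nce, packets_sent_since, send_ns_ar):
       # Monitor the NCE for data packets for (ReachableTime - 5) seconds.
       start = time.monotonic()
       time.sleep(REACHABLE_TIME - 5)
       if packets_sent_since(start):
           send_ns_ar()                 # traffic still flowing; refresh now
           return
       # Otherwise, wait in a 5 second "expiration pending" window and
       # refresh immediately if any data packets are sent.
       window = time.monotonic()
       while time.monotonic() - window < 5:
           if packets_sent_since(window):
               send_ns_ar()
               return
           time.sleep(0.1)
       nce.state = "STALE"              # no traffic; let the NCE go stale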
OMNI interface neighbors MAY provide a configuration option that allows them to perform implicit mobility management in which no IPv6 ND messaging is used. In that case, the Client only transmits packets over a single interface at a time, and the neighbor always observes packets arriving from the Client from the same link-layer source address.¶
If the Client's underlay interface address changes (either due to a readdressing of the original interface or switching to a new interface) the neighbor immediately updates the NCE for the Client and begins accepting and sending packets according to the Client's new address. This implicit mobility method applies to use cases such as cellphones with both WiFi and Cellular interfaces where only one of the interfaces is active at a given time, and the Client automatically switches over to the backup interface if the primary interface fails.¶
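A minimal sketch of this implicit update follows, assuming a hypothetical NCE object with a link_layer_addr field; it is illustrative only and not a definitive implementation.¶
   def on_packet_from_client(nce, link_layer_src):
       # Implicit mobility management: if the Client's packets begin
       # arriving from a new link-layer source address, update the NCE
       # immediately and use the new address for all return traffic.
       if link_layer_src != nce.link_layer_addr:
           nce.link_layer_addr = link_layer_src
       # Return packets are always sent to nce.link_layer_addr.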
When a Client's OMNI interface is configured over a Direct interface, the neighbor at the other end of the Direct link can receive packets without any encapsulation. In that case, the Client sends packets over the Direct link according to traffic selectors. If the Direct interface is selected, then the Client's IP packets are transmitted directly to the peer without going through an ANET/INET. If other interfaces are selected, then the Client's IP packets are transmitted via a different interface, which may result in the inclusion of Proxy/Servers and Gateways in the communications path. Direct interfaces must be tested periodically for reachability, e.g., via NUD.¶
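A non-normative sketch of this interface selection is shown below; matches() and the reachable flag are hypothetical stand-ins for the configured traffic selectors and the result of periodic NUD testing, respectively.¶
   def select_interface(pkt, selectors, direct_iface, overlay_iface):
       # Prefer the Direct link (no encapsulation, no ANET/INET) when the
       # packet matches the traffic selectors and the link passes NUD.
       if direct_iface.reachable and selectors.matches(pkt):
           return direct_iface
       # Otherwise send via another interface, possibly traversing
       # Proxy/Servers and Gateways.
       return overlay_iface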
AERO Gateways can be either Commercial Off-The-Shelf (COTS) standard IP routers or virtual machines in the cloud. Gateways must be provisioned, supported and managed by the INET administrative authority, and connected to the Gateways of other INETs via inter-domain peerings. Cost for purchasing, configuring and managing Gateways is nominal even for very large OMNI links.¶
AERO INET Proxy/Servers can be standard dedicated server platforms, but most often will be deployed as virtual machines in the cloud. The only requirements for INET Proxy/Servers are that they can run the AERO/OMNI code and have at least one network interface connection to the INET. INET Proxy/Servers must be provisioned, supported and managed by the INET administrative authority. Cost for purchasing, configuring and managing cloud Proxy/Servers is nominal especially for virtual machines.¶
AERO ANET Proxy/Servers are most often standard dedicated server platforms with one underlay interface connected to the ANET and a second interface connected to an INET. As with INET Proxy/Servers, the only requirements are that they can run the AERO/OMNI code and have at least one interface connection to the INET. ANET Proxy/Servers must be provisioned, supported and managed by the ANET administrative authority. Cost for purchasing, configuring and managing ANET Proxy/Servers is nominal, and is borne by the ANET administrative authority.¶
AERO Relays are simply Proxy/Servers connected to INETs and/or ENETs that provide forwarding services for non-MNP destinations. The Relay connects to the OMNI link and engages in eBGP peering with one or more Gateways as a stub AS. The Relay then injects its MNPs and/or non-MNP prefixes into the BGP routing system, and provisions the prefixes to its downstream-attached networks. The Relay can perform ROS/ROR services the same as for any Proxy/Server, and can route between the MNP and non-MNP address spaces.¶
AERO Proxy/Servers may appear as a single point of failure in the architecture, but such is not the case since all Proxy/Servers on the link provide identical services and loss of a Proxy/Server does not imply immediate and/or comprehensive communication failures. Proxy/Server failure is quickly detected and conveyed by Bidirectional Forwarding Detection (BFD) and/or proactive NUD, allowing Clients to migrate to new Proxy/Servers.¶
If a Proxy/Server fails, ongoing packet forwarding to Clients will continue by virtue of the neighbor cache entries that have already been established in route optimization sources (ROSs). If a Client also experiences mobility events at roughly the same time the Proxy/Server fails, uNA messages may be lost but neighbor cache entries in the DEPARTED state will ensure that packet forwarding to the Client's new locations will continue for up to DepartTime seconds.¶
If a Client is left without a Proxy/Server for a considerable length of time (e.g., greater than ReachableTime seconds) then existing neighbor cache entries will eventually expire and both ongoing and new communications will fail. The original source will continue to retransmit until the Client has established a new Proxy/Server relationship, after which time continuous communications will resume.¶
Therefore, providing many Proxy/Servers on the link with high availability profiles provides resilience against loss of individual Proxy/Servers and assurance that Clients can establish new Proxy/Server relationships quickly in event of a Proxy/Server failure.¶
The AERO architectural model is client/server in the control plane, with route optimization in the data plane. The same as for common Internet services, the AERO Client discovers the addresses of AERO Proxy/Servers and connects to one or more of them. The AERO service is analogous to common Internet services such as google.com, yahoo.com, cnn.com, etc. However, there is only one AERO service for the link and all Proxy/Servers provide identical services.¶
Common Internet services provide differing strategies for advertising server addresses to clients. The strategy is conveyed through the DNS resource records returned in response to name resolution queries. As of January 2020, Internet-based 'nslookup' services were used to determine the following:¶
The above example strategies show differing approaches to Internet resilience and service distribution offered by major Internet services. The Google approach exposes only a single IPv4 and a single IPv6 address to clients. Clients can then select whichever IP protocol version offers the best response, but will always use the same IP address according to the current Internet connection point. This means that the IP address offered by the network must lead to a highly-available server and/or service distribution point. In other words, resilience is predicated on high availability within the network, with no client-initiated failovers expected (i.e., it is all-or-nothing from the client's perspective). However, Google does provide for worldwide distributed service distribution by virtue of the fact that each Internet connection point responds with a different IPv6 and IPv4 address. The IETF approach is like Google's (all-or-nothing from the client's perspective), but provides only a single IPv4 or IPv6 address on a worldwide basis. This means that the addresses must be made highly-available at the network level with no client failover possibility, and any worldwide service distribution would need to be conducted by a network element reached via the IP address acting as a service distribution point.¶
In contrast to the Google and IETF philosophies, Yahoo and Amazon both provide clients with a (short) list of IP addresses, with Yahoo providing both IP protocol versions and Amazon providing IPv4 only. The order of the list is randomized with each name service query response, with the effect of round-robin load balancing for service distribution. With a short list of addresses, there is still an expectation that the network will implement high availability for each address, but in case any single address fails the client can switch over to using a different address. The balance then becomes one of function in the network vs. function in the end system.¶
The same implications observed for common highly-available services in the Internet apply also to the AERO client/server architecture. When an AERO Client connects to one or more ANETs, it discovers one or more AERO Proxy/Server addresses through the mechanisms discussed in earlier sections. Each Proxy/Server address presumably leads to a fault-tolerant clustering arrangement such as supported by Linux-HA, Extended Virtual Synchrony or Paxos. Such an arrangement has precedent in common Internet service deployments that use lightweight virtual machines without requiring expensive hardware. Similarly, common Internet service deployments set service IP addresses on service distribution points that may relay requests to many different servers.¶
For AERO, the expectation is that a combination of the Google/IETF and Yahoo/Amazon philosophies would be employed. The AERO Client connects to different ANET access points and can receive 1-2 Proxy/Server ADM-LLAs at each point. It then selects one AERO Proxy/Server address, and engages in RS/RA exchanges with the same Proxy/Server from all ANET connections. The Client remains with this Proxy/Server unless or until the Proxy/Server fails, in which case it can switch over to an alternate Proxy/Server. The Client can likewise switch over to a different Proxy/Server at any time if there is some reason for it to do so. So, the AERO expectation is for a balance of function in the network and end system, with fault tolerance and resilience at both levels.¶
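A non-normative sketch of this selection and failover behavior follows; is_reachable() is a hypothetical test standing in for BFD and/or proactive NUD, and the candidate list holds the Proxy/Server ADM-LLAs learned from the ANET connections.¶
   def choose_proxy_server(candidates, current, is_reachable):
       # Remain with the current Proxy/Server unless or until it fails,
       # then switch over to an alternate ADM-LLA from the candidates
       # learned over the ANET connections.
       if current is not None and is_reachable(current):
           return current
       alternates = [c for c in candidates
                     if c != current and is_reachable(c)]
       return alternates[0] if alternates else None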
<< RFC Editor - remove prior to publication >>¶
Changes from earlier versions:¶