Large language models (LLMs) like ChatGPT have become increasingly popular in recent years due to their impressive performance in various natural language processing tasks. These models are built by training deep neural networks on massive amounts of text data, often consisting of billions or even trillions of parameters. However, the training process for these models can be extremely resource-intensive, requiring the deployment of thousands or even tens of thousands of GPUs in a single AI training cluster. Therefore, three-stage or even five-stage CLOS networks are commonly adopted for AI networks. The non-blocking nature of such networks becomes increasingly critical for large-scale AI models. Adaptive routing is therefore necessary to dynamically load-balance traffic to the same destination over multiple ECMP paths, based on network capacity and even congestion information along those paths.
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119].
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."
This Internet-Draft will expire on 25 May 2024.
Copyright (c) 2023 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.
Large language models (LLMs) like ChatGPT have become increasingly popular in recent years due to their impressive performance in various natural language processing tasks. These models are built by training deep neural networks on massive amounts of text data, often consisting of billions or even trillions of parameters. However, the training process for these models can be extremely resource-intensive, requiring the deployment of thousands or even tens of thousands of GPUs in a single AI training cluster. Therefore, three-stage or even five-stage CLOS networks are commonly adopted for AI networks. Furthermore, since rail-optimized topologies are prevalent, most traffic between GPU servers traverses the intra-rail networks rather than the inter-rail networks.
The non-blocking nature of the network, especially the network for intra-rail communication, becomes increasingly critical for large-scale AI models. AI workloads tend to be extremely bandwidth-hungry and usually generate a few elephant flows simultaneously. If traditional hash-based ECMP load-balancing were used without any optimization, serious congestion and high latency would be highly likely once multiple elephant flows are hashed onto the same link. Since the job completion time depends on worst-case performance, serious congestion will result in a model training time longer than expected. Therefore, adaptive routing is necessary to dynamically load-balance traffic to the same destination over multiple ECMP paths, based on network capacity and even congestion information along those paths. In other words, adaptive routing is a capacity-aware and even congestion-aware path selection algorithm.
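The collision risk can be illustrated with a simple back-of-the-envelope calculation. The following Python sketch, purely illustrative and not part of this specification, computes the probability that at least two of a set of elephant flows are hashed onto the same one of several equal-cost links, assuming a uniform and independent hash:

<CODE BEGINS>
from math import prod

def collision_probability(num_flows: int, num_links: int) -> float:
    """Probability that at least two of num_flows flows are hashed
    onto the same one of num_links equal-cost links, assuming each
    flow is hashed uniformly and independently (birthday problem)."""
    p_all_distinct = prod(
        (num_links - i) / num_links for i in range(num_flows)
    )
    return 1.0 - p_all_distinct

# Four elephant flows over four uplinks collide with probability
# 1 - 4!/4^4, i.e., roughly 0.91.
print(collision_probability(4, 4))
<CODE ENDS>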
Furthermore, to reduce the congestion risk to the maximum extent, the routing should be more granular if possible. Flow-granular adaptive routing still has a certain statistical possibility of congestion. Therefore, packet-granular adaptive routing is more desirable, although packet spraying would cause out-of-order delivery issues. A flexible reordering mechanism must be put in place (e.g., at the egress ToR or the receiving server).
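For illustration only (this document does not specify any particular reordering mechanism), the following Python sketch shows one way a receiving endpoint could restore packet order after per-packet spraying. The per-flow sequence number is a hypothetical field assumed for the sketch:

<CODE BEGINS>
from typing import Iterator

class ReorderBuffer:
    """Restore in-order delivery for one flow after packet spraying.

    Assumes each packet carries a monotonically increasing per-flow
    sequence number (a hypothetical field; the actual transport is
    out of scope for this sketch)."""

    def __init__(self) -> None:
        self.next_seq = 0
        self.pending: dict[int, bytes] = {}

    def receive(self, seq: int, payload: bytes) -> Iterator[bytes]:
        """Buffer out-of-order packets; yield all in-order payloads."""
        self.pending[seq] = payload
        while self.next_seq in self.pending:
            yield self.pending.pop(self.next_seq)
            self.next_seq += 1

buf = ReorderBuffer()
for seq in (1, 0, 3, 2):  # packets arrive out of order
    for payload in buf.receive(seq, f"pkt-{seq}".encode()):
        print(payload)  # delivered in order: pkt-0 .. pkt-3
<CODE ENDS>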
To enable adaptive routing, whether flow-granular or packet-granular, it is necessary to propagate network topology information, including link capacity and/or even available link capacity (i.e., link capacity minus link load), across the CLOS network. Therefore, it seems straightforward to use link-state protocols such as OSPF or ISIS as the underlay routing protocol in the CLOS network, instead of BGP, propagating link capacity and/or available capacity by using the OSPF or ISIS TE Metric or Extended TE Metric [RFC3630] [RFC7471] [RFC5305] [RFC7810]. Regarding how to address the flooding issue in large-scale CLOS networks, please refer to [I-D.xu-lsr-flooding-reduction-in-clos].
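For illustration, the following Python sketch encodes a bandwidth value in the sub-TLV format of [RFC3630], where bandwidth is carried as a 32-bit IEEE floating-point number in units of bytes per second; the extended TE metrics of [RFC7471] and [RFC7810] use the same 32-bit IEEE floating-point value encoding. The Maximum Bandwidth sub-TLV (type 6) is used here purely as an example carrier, and the capacity and load figures are made up:

<CODE BEGINS>
import struct

# RFC 3630 sub-TLV type for Maximum Bandwidth (used here purely as
# an example; an available-bandwidth sub-TLV would be encoded the
# same way).
MAX_BANDWIDTH_SUBTLV = 6

def te_bandwidth_subtlv(subtlv_type: int, bandwidth_bps: float) -> bytes:
    """Encode a TE bandwidth sub-TLV: 2-byte type, 2-byte length,
    then the bandwidth as a 32-bit IEEE float in *bytes* per second."""
    value = struct.pack("!f", bandwidth_bps / 8.0)
    return struct.pack("!HH", subtlv_type, len(value)) + value

link_capacity_bps = 400e9  # e.g., a 400GE link
link_load_bps = 150e9      # measured load (assumed known locally)
available_bps = link_capacity_bps - link_load_bps

print(te_bandwidth_subtlv(MAX_BANDWIDTH_SUBTLV, link_capacity_bps).hex())
print(te_bandwidth_subtlv(MAX_BANDWIDTH_SUBTLV, available_bps).hex())
<CODE ENDS>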
This memo makes use of the terms defined in [RFC2328] and [RFC1195].
                +----+  +----+  +----+  +----+
                | S1 |  | S2 |  | S3 |  | S4 |  (Spine)
                +----+  +----+  +----+  +----+

+----+  +----+  +----+  +----+  +----+  +----+  +----+  +----+
| L1 |  | L2 |  | L3 |  | L4 |  | L5 |  | L6 |  | L7 |  | L8 |  (Leaf)
+----+  +----+  +----+  +----+  +----+  +----+  +----+  +----+

                            Figure 1
(Note that the diagram above does not include the connections between nodes. However, it can be assumed that leaf nodes are connected to every spine node in their CLOS topology.)
In a three-stage CLOS network as shown in Figure 1, also known as a leaf-spine network, all nodes should be in OSPF area zero or ISIS Level-2.
Leaf nodes are enabled for adaptive routing in OSPF area zero or ISIS Level-2.
When a leaf node, such as L1, calculates the shortest path to a specific IP prefix originated by another leaf node in the same OSPF area or ISIS Level-2 area, say L2, four equal-cost multi-path (ECMP) routes will be created via the four spine nodes S1, S2, S3, and S4. To enable adaptive routing, weight values based on link capacity or even available link capacity associated with the upstream and downstream links should be considered for global load-balancing. In particular, the minimum of the capacity of the upstream link (e.g., L1->S1) and the capacity of the downstream link (e.g., S1->L2) of a given path (e.g., L1->S1->L2) is used as the weight value for that path when performing weighted ECMP load-balancing.
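As a non-normative illustration, the following Python sketch derives a per-path weight from the bottleneck (i.e., minimum) capacity of each two-hop path L1->Sx->L2 and then selects a next hop in proportion to those weights. The link capacities are hypothetical example values:

<CODE BEGINS>
import random

# Hypothetical link capacities in Gb/s, keyed by (from, to); in a
# real deployment these would come from flooded TE metrics.
link_capacity = {
    ("L1", "S1"): 400, ("S1", "L2"): 400,
    ("L1", "S2"): 400, ("S2", "L2"): 200,  # S2->L2 degraded
    ("L1", "S3"): 400, ("S3", "L2"): 400,
    ("L1", "S4"): 400, ("S4", "L2"): 400,
}

def path_weight(src: str, spine: str, dst: str) -> int:
    """Weight of path src->spine->dst: the bottleneck capacity,
    i.e., min(upstream capacity, downstream capacity)."""
    return min(link_capacity[(src, spine)], link_capacity[(spine, dst)])

def pick_next_hop(src: str, dst: str, spines: list[str]) -> str:
    """Weighted ECMP: choose a spine in proportion to path weight."""
    weights = [path_weight(src, s, dst) for s in spines]
    return random.choices(spines, weights=weights, k=1)[0]

spines = ["S1", "S2", "S3", "S4"]
print({s: path_weight("L1", s, "L2") for s in spines})
print(pick_next_hop("L1", "L2", spines))
<CODE ENDS>

With these example values, the degraded S2->L2 link causes paths via S2 to attract half as much traffic as the other three paths.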
=========================================
#  +----+  +----+  +----+  +----+       #
#  | L1 |  | L2 |  | L3 |  | L4 | (Leaf)#
#  +----+  +----+  +----+  +----+       #
#                 PoD-1                 #
#  +----+  +----+  +----+  +----+       #
#  | S1 |  | S2 |  | S3 |  | S4 |(Spine)#
#  +----+  +----+  +----+  +----+       #
=========================================

===============================     ===============================
# +----+ +----+ +----+ +----+ #     # +----+ +----+ +----+ +----+ #
# |SS1 | |SS2 | |SS3 | |SS4 | # ... # |SS1 | |SS2 | |SS3 | |SS4 | #
# +----+ +----+ +----+ +----+ #     # +----+ +----+ +----+ +----+ #
#   (Super-Spine@Plane-1)     #     #   (Super-Spine@Plane-4)     #
===============================     ===============================

=========================================
#  +----+  +----+  +----+  +----+       #
#  | S1 |  | S2 |  | S3 |  | S4 |(Spine)#
#  +----+  +----+  +----+  +----+       #
#                 PoD-8                 #
#  +----+  +----+  +----+  +----+       #
#  | L1 |  | L2 |  | L3 |  | L4 | (Leaf)#
#  +----+  +----+  +----+  +----+       #
=========================================

                Figure 2
(Note that the diagram above does not include the connections between nodes. However, it can be assumed that the leaf nodes in a given PoD are connected to every spine node in that PoD. Similarly, each spine node (e.g., S1) is connected to all super-spine nodes in the corresponding PoD-interconnect plane (e.g., Plane-1).)
For a five-stage CLOS network as illustrated in Figure 2, each PoD consisting of leaf and spine nodes is configured as an OSPF non-zero area or an ISIS Level-1 area. The PoD-interconnect plane consisting of spine and super-spine nodes is configured as OSPF area zero or an ISIS Level-2 area. Therefore, spine nodes play the role of OSPF area border routers or ISIS Level-1-2 routers.
In a rail-optimized topology, PoD networks handle bandwidth-intensive intra-rail communication, while the PoD-interconnect planes handle inter-rail communication with lower bandwidth requirements. Therefore, enabling adaptive routing only in PoD networks is sufficient. In particular, only leaf nodes are enabled for adaptive routing in their associated OSPF non-zero area or ISIS Level-1 area.
When a leaf node within a given PoD (i.e., in a given OSPF non-zero area or ISIS Level-1 area), such as L1 in PoD-1, calculates the shortest path to a specific IP prefix originated by another leaf node in the same PoD, say L2 in PoD-1, four equal-cost multi-path (ECMP) routes will be created via the four spine nodes S1, S2, S3, and S4 in the same PoD. To enable adaptive routing, weight values based on link capacity or even available link capacity associated with the upstream and downstream links should be considered for global load-balancing. In particular, the minimum of the capacity of the upstream link (e.g., L1->S1) and the capacity of the downstream link (e.g., S1->L2) of a given path (e.g., L1->S1->L2) is used as the weight value of that path, exactly as in the three-stage case above.
In general, once an OSPF or ISIS router is enabled for adaptive routing, the capacity or even available capacity of each SPF path should be calculated and used as a weight value for global load-balancing purposes, as sketched below.
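A minimal sketch of such a computation follows, assuming a made-up topology and capacities; it runs a Dijkstra SPF that tracks, for each destination, both the shortest-path metric and the widest bottleneck capacity among equal-cost paths. This is illustrative only; this document does not mandate a particular algorithm:

<CODE BEGINS>
import heapq

# Hypothetical topology: links as (metric, capacity_gbps), keyed by
# node -> {neighbor: (metric, capacity)}. Values are made-up examples.
links = {
    "L1": {"S1": (10, 400), "S2": (10, 200)},
    "S1": {"L1": (10, 400), "L2": (10, 400)},
    "S2": {"L1": (10, 200), "L2": (10, 400)},
    "L2": {"S1": (10, 400), "S2": (10, 400)},
}

def spf_with_capacity(source: str):
    """Dijkstra SPF that tracks, per destination, the shortest-path
    metric and the widest bottleneck capacity among equal-cost paths."""
    best = {source: (0, float("inf"))}  # node -> (metric, capacity)
    heap = [(0, float("-inf"), source)]  # min metric, then max capacity
    while heap:
        metric, neg_cap, node = heapq.heappop(heap)
        if (metric, -neg_cap) != best.get(node, (None, None)):
            continue  # stale heap entry
        for nbr, (link_metric, link_cap) in links[node].items():
            cand = (metric + link_metric, min(-neg_cap, link_cap))
            old = best.get(nbr)
            # Prefer lower metric; among equal metrics, wider capacity.
            if (old is None or cand[0] < old[0]
                    or (cand[0] == old[0] and cand[1] > old[1])):
                best[nbr] = cand
                heapq.heappush(heap, (cand[0], -cand[1], nbr))
    return best

print(spf_with_capacity("L1"))
# e.g., L2 is reached at metric 20 with bottleneck capacity 400 (via S1)
<CODE ENDS>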
TBD.
TBD.